Live-action image measuring method and device, electronic equipment and storage medium


Reading note: this technology, "Live-action image measuring method and device, electronic equipment and storage medium", was designed and created by 蔡越, 刘继恩 and 王翔 on 2020-12-23. The application relates to the technical field of road traffic and provides a live-action image measuring method, a live-action image measuring device, an electronic device and a storage medium. The method comprises: determining a plurality of feature points on a target object to be measured; obtaining the spatial coordinates of any feature point in the geodetic coordinate system based on the pixel coordinates of that feature point in two live-action images and the mapping relationship between the pixel coordinate system and the geodetic coordinate system; and determining a measurement result of the target object based on the spatial coordinates of the plurality of feature points, wherein the two live-action images are captured by a camera at two spatial positions separated by a preset interval. The method, device, electronic device and storage medium realize the informatization of highway road-property data and improve the accuracy of road-property information collection and updating.

1. A live-action image measuring method is characterized by comprising the following steps:

determining a plurality of characteristic points on a target object to be measured;

obtaining the spatial coordinates of any feature point in the geodetic coordinate system based on the pixel coordinates of that feature point in the two live-action images and the mapping relationship between the pixel coordinate system and the geodetic coordinate system;

determining a measurement of the target object based on the spatial coordinates of the plurality of feature points;

wherein the two live-action images are captured by the camera at two spatial positions separated by a preset interval.

2. The live-action image measuring method according to claim 1, wherein the obtaining of the spatial coordinates of any feature point in the geodetic coordinate system based on the pixel coordinates of the feature point in the two live-action images and the mapping relationship between the pixel coordinate system and the geodetic coordinate system comprises:

obtaining a positioning ray onto which the pixel coordinates of any feature point in either live-action image are mapped in space, based on the pixel coordinates of that feature point in the live-action image and the mapping relationship between the pixel coordinate system and the geodetic coordinate system;

and obtaining the spatial coordinates of the feature point in the geodetic coordinate system based on the intersection point of the two positioning rays obtained from its pixel coordinates in the two live-action images.

3. A live-action image measuring method according to claim 1 or 2, wherein the method for determining the mapping relationship between the pixel coordinate system and the geodetic coordinate system comprises:

determining a mapping relation between a pixel coordinate system and an image coordinate system based on the projection of the optical center of the camera on an imaging plane and the physical size of each pixel point in a live-action image acquired by the camera;

determining a mapping relationship between the image coordinate system and a camera coordinate system based on a focal length of the camera;

determining a mapping relationship between the camera coordinate system and the geodetic coordinate system based on the external reference matrix of the camera;

determining a mapping relationship between the pixel coordinate system and the geodetic coordinate system based on a mapping relationship between the pixel coordinate system and the image coordinate system, a mapping relationship between the image coordinate system and the camera coordinate system, and a mapping relationship between the camera coordinate system and the geodetic coordinate system.

4. A live-action image measuring method according to claim 3, wherein the camera's internal reference matrix is determined based on a mapping relationship between the pixel coordinate system and the image coordinate system and a mapping relationship between the image coordinate system and the camera coordinate system.

5. The live-action image measuring method according to claim 4, wherein the method for determining the internal reference matrix of the camera comprises:

and calibrating the camera according to at least one of a DLT method, an RAC method, a plane calibration method, a plane circle calibration method and a parallel circle calibration method, and determining the internal reference matrix of the camera.

6. The live-action image measuring method according to claim 5, wherein the method for determining the external parameter matrix of the camera comprises:

determining an internal reference matrix of the camera;

determining an external parameter matrix of the camera based on the pixel coordinates and the spatial coordinates of the calibration points and the internal parameter matrix of the camera.

7. The live-action image measuring method according to claim 6, wherein the three-dimensional attitude angle of the camera is determined based on an external parameter matrix of the camera.

8. A live-action image measuring device, comprising:

a feature point determination unit for determining a plurality of feature points on a target object to be measured;

the coordinate determination unit is used for obtaining the space coordinate of any characteristic point in the geodetic coordinate system based on the pixel coordinates of the characteristic point in the two live-action images and the mapping relation between the pixel coordinate system and the geodetic coordinate system;

a spatial measurement unit configured to determine a measurement result of the target object based on spatial coordinates of the plurality of feature points;

wherein the two live-action images are captured by the camera at two spatial positions separated by a preset interval.

9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the live-action image measuring method according to any one of claims 1 to 7 when executing the computer program.

10. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the live-action image measuring method according to any one of claims 1 to 7.

Technical Field

The present disclosure relates to the field of road traffic technologies, and in particular, to a method and an apparatus for measuring live-action images, an electronic device, and a storage medium.

Background

With the growth in the total quantity of road facilities, the transportation industry, and especially the road asset management sector, has an increasingly strong demand for high-quality and timely asset data, in order to improve the timeliness of road-right supervision and record management and the standardization of archive management.

How to informatize road-property data, enrich the basic data of road assets, improve data quality, deeply integrate road-property and road-right business management with information technology, and achieve comprehensive and accurate surveying and updating of road properties as well as fine-grained management of road-right business is an urgent problem for road asset management.

Disclosure of Invention

The application provides a live-action image measuring method, a live-action image measuring device, an electronic device and a storage medium, which are used to solve the problem of how to realize the informatization of highway road-property data.

The application provides a live-action image measuring method, which comprises the following steps:

determining a plurality of characteristic points on a target object to be measured;

obtaining the spatial coordinates of any feature point in the geodetic coordinate system based on the pixel coordinates of that feature point in the two live-action images and the mapping relationship between the pixel coordinate system and the geodetic coordinate system;

determining a measurement of the target object based on the spatial coordinates of the plurality of feature points;

wherein the two live-action images are captured by the camera at two spatial positions separated by a preset interval.

According to the live-action image measuring method provided by the application, the step of obtaining the space coordinate of any feature point in the geodetic coordinate system based on the pixel coordinates of any feature point in the two live-action images and the mapping relation between the pixel coordinate system and the geodetic coordinate system comprises the following steps:

obtaining a positioning ray onto which the pixel coordinates of any feature point in either live-action image are mapped in space, based on the pixel coordinates of that feature point in the live-action image and the mapping relationship between the pixel coordinate system and the geodetic coordinate system;

and obtaining the spatial coordinates of the feature point in the geodetic coordinate system based on the intersection point of the two positioning rays obtained from its pixel coordinates in the two live-action images.

According to the live-action image measuring method provided by the application, the method for determining the mapping relation between the pixel coordinate system and the geodetic coordinate system comprises the following steps:

determining a mapping relation between a pixel coordinate system and an image coordinate system based on the projection of the optical center of the camera on an imaging plane and the physical size of each pixel point in a live-action image acquired by the camera;

determining a mapping relationship between the image coordinate system and a camera coordinate system based on a focal length of the camera;

determining a mapping relationship between the camera coordinate system and the geodetic coordinate system based on the external reference matrix of the camera;

determining a mapping relationship between the pixel coordinate system and the geodetic coordinate system based on a mapping relationship between the pixel coordinate system and the image coordinate system, a mapping relationship between the image coordinate system and the camera coordinate system, and a mapping relationship between the camera coordinate system and the geodetic coordinate system.

According to the live-action image measuring method provided by the application, the internal reference matrix of the camera is determined based on the mapping relation between the pixel coordinate system and the image coordinate system and the mapping relation between the image coordinate system and the camera coordinate system.

According to the live-action image measuring method provided by the application, the method for determining the internal reference matrix of the camera comprises the following steps:

and calibrating the camera according to at least one of a DLT method, an RAC method, a plane calibration method, a plane circle calibration method and a parallel circle calibration method, and determining the internal reference matrix of the camera.

According to the live-action image measuring method provided by the application, the method for determining the external parameter matrix of the camera comprises the following steps:

determining an internal reference matrix of the camera;

determining an external parameter matrix of the camera based on the pixel coordinates and the spatial coordinates of the calibration points and the internal parameter matrix of the camera.

According to the live-action image measuring method provided by the application, the three-dimensional attitude angle of the camera is determined based on the external parameter matrix of the camera.

The application also provides a live-action image measuring device, includes:

a feature point determination unit for determining a plurality of feature points on a target object to be measured;

the coordinate determination unit is used for obtaining the space coordinate of any characteristic point in the geodetic coordinate system based on the pixel coordinates of the characteristic point in the two live-action images and the mapping relation between the pixel coordinate system and the geodetic coordinate system;

a spatial measurement unit configured to determine a measurement result of the target object based on spatial coordinates of the plurality of feature points;

wherein the two live-action images are captured by the camera at two spatial positions separated by a preset interval.

According to the live-action image measuring method, device, electronic device and storage medium provided by the application, the camera captures live-action images of the target object; according to the mapping relationship between the pixel coordinate system and the geodetic coordinate system, the pixel coordinates of each feature point of the target object in the live-action images are converted into the spatial coordinates of that feature point in the geodetic coordinate system; and the spatial parameters of the target object are then calculated to obtain its measurement result.

Drawings

In order to more clearly illustrate the technical solutions of the present application or of the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic flow chart of a live-action image measuring method provided in the present application;

FIG. 2 is a schematic diagram of a mapping relationship between a pixel coordinate system and an image coordinate system provided in the present application;

FIG. 3 is a schematic diagram of a mapping relationship between an image coordinate system and a camera coordinate system provided in the present application;

FIG. 4 is a schematic diagram of a mapping relationship between a camera coordinate system and a geodetic coordinate system provided by the present application;

fig. 5 is a schematic structural diagram of a live-action image measuring device provided in the present application;

fig. 6 is a schematic structural diagram of an electronic device provided in the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

Fig. 1 is a schematic flow chart of a live-action image measuring method provided in the present application, and as shown in fig. 1, the method includes:

in step 110, a plurality of feature points on the target object to be measured are determined.

Specifically, the target object in the embodiment of the present application is a road, or a structure on the road such as a tunnel or a bridge. The feature points are points characterizing the shape contour of the target object, for example contour points on a tunnel cross-section.

And step 120, obtaining the spatial coordinates of any feature point in the geodetic coordinate system based on the pixel coordinates of that feature point in the two live-action images and the mapping relationship between the pixel coordinate system and the geodetic coordinate system. The two live-action images are captured by a camera at two spatial positions separated by a preset interval.

Specifically, a live-action image is image data obtained by shooting a real scene, for example a live-action photograph. In the embodiment of the application, the target object is spatially measured through the live-action images captured by the camera. The camera may be a monocular industrial camera. The camera can be mounted on a vehicle, and data collection and spatial measurement of the road and the structures along it are carried out as the vehicle moves along the road.

The preset interval is the distance between the two spatial positions at which the camera successively acquires the two live-action images. To reduce measurement error, the acquisition mode of the camera can be configured by setting the size of the preset interval. For example, the preset interval may be set to 6 meters before acquisition starts; over such a distance the height change of the road surface is small, and the altitude difference between the two acquisition positions of the camera can be neglected.

The pixel coordinate system is a two-dimensional plane coordinate system where the pixel points in the live-action image are located. The origin of the pixel coordinate system may be a pixel point at the upper left corner of the live-action image, and the unit is a pixel (pixel).

The geodetic coordinate system is a three-dimensional space coordinate system established by taking the earth center as a reference point, and the unit is meter (m).

According to the mapping relationship between the pixel coordinate system and the geodetic coordinate system, points in the pixel coordinate system can be mapped into the geodetic coordinate system; that is, the pixel coordinates of any feature point in the live-action image can be converted into the spatial coordinates of that feature point in the geodetic coordinate system.

And step 130, determining the measurement result of the target object based on the space coordinates of the plurality of characteristic points.

Specifically, the spatial parameters are calculated from the spatial coordinates of the plurality of feature points, giving the measurement result of the target object. The measurement result comprises geometric parameters of the target object such as length, area, angle, gradient and vertical distance.

For example, if the spatial coordinates of the feature points X1 and X2 on the target object have been obtained, the spatial distance \(L_{X1,X2}\) between X1 and X2 can be calculated as:

\[
L_{X1,X2} = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2}
\]

In the formula, the spatial coordinates of the feature point X1 are (x1, y1, z1) and those of X2 are (x2, y2, z2). Here the distance \(L_{X1,X2}\) is the straight-line distance between the two points in three-dimensional space, not the perpendicular distance between them.

According to the live-action image measuring method provided by the embodiment of the application, the camera captures live-action images of the target object; according to the mapping relationship between the pixel coordinate system and the geodetic coordinate system, the pixel coordinates of each feature point of the target object in the live-action images are converted into the spatial coordinates of that feature point in the geodetic coordinate system; and the spatial parameters of the target object are then calculated to obtain its measurement result.

Based on the above embodiment, step 120 includes:

obtaining a positioning ray onto which the pixel coordinates of any feature point in either live-action image are mapped in space, based on the pixel coordinates of that feature point in the live-action image and the mapping relationship between the pixel coordinate system and the geodetic coordinate system;

and obtaining the spatial coordinates of the feature point in the geodetic coordinate system based on the intersection point of the two positioning rays obtained from its pixel coordinates in the two live-action images.

Specifically, the pixel coordinates of any feature point in any live-action image are converted according to the mapping relationship between the pixel coordinate system and the geodetic coordinate system. Since a two-dimensional coordinate lacks one dimension of information, namely depth, mapping a two-dimensional point into a three-dimensional coordinate system yields a positioning ray rather than a single point. Every point on the positioning ray is a candidate for the spatial coordinates of the feature point.

Therefore, by solving for the two positioning rays onto which the feature point's pixel coordinates in the two live-action images are mapped in the geodetic coordinate system, the intersection point of the two rays is the coordinate point of the feature point in the geodetic coordinate system, which gives the spatial coordinates of the feature point.

The problem of the coordinates of the intersection of two rays in the three-dimensional space can be solved by a mathematical method.

Assume the positioning ray obtained from the first live-action image is L1; its parametric equation can be written from two points P1(x1, y1, z1) and P2(x2, y2, z2) on the ray. The positioning ray obtained from the second live-action image is L2; its parametric equation can be written from two points P3(x3, y3, z3) and P4(x4, y4, z4) on the ray. Denote the intersection of L1 and L2 by (x, y, z).

Let xij, yij and zij denote the components of a ray's direction vector along the X, Y and Z axes; they are determined from the coordinates of the known points, e.g. xij = xi − xj. With t and s as the ray parameters, we obtain:

the equation of L1 is x = x1 + x12*t, y = y1 + y12*t, z = z1 + z12*t;

the equation of L2 is x = x3 + x34*s, y = y3 + y34*s, z = z3 + z34*s.

Equating the x components of the equations of L1 and L2 and solving for s gives:

s = (x13 + x12*t)/x34

Substituting this expression for s into the y-component equation gives an expression for t:

t=(y13*x34-y34*x13)/(y34*x12-x34*y12)

The intersection point is obtained by substituting t back into the equation of L1; for example:

x=x1+x12*((y13*x34-y34*x13)/(y34*x12-x34*y12))

y and z are obtained in the same way.

Using this method, the intersection point of the two positioning rays obtained from the feature point's pixel coordinates in the two live-action images is solved, giving the spatial coordinates of the feature point in the geodetic coordinate system.
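A minimal Python/NumPy sketch of this intersection step is given below; the function name and example rays are illustrative only. Since two positioning rays recovered from real images rarely intersect exactly, the sketch solves for the parameters t and s in the least-squares sense and returns the midpoint of the closest points on the two rays.

```python
import numpy as np

def ray_intersection(p1, p2, p3, p4):
    """Estimate the intersection of ray L1 (through p1, p2) and ray L2 (through p3, p4).

    Solves p1 + t*d1 ~= p3 + s*d2 in the least-squares sense and returns the
    midpoint of the two closest points, since real rays are usually skew.
    """
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    d1 = p2 - p1                      # direction vector of L1
    d2 = p4 - p3                      # direction vector of L2
    A = np.stack([d1, -d2], axis=1)   # [d1, -d2] @ [t, s]^T = p3 - p1
    (t, s), *_ = np.linalg.lstsq(A, p3 - p1, rcond=None)
    q1 = p1 + t * d1
    q2 = p3 + s * d2
    return (q1 + q2) / 2.0

# Hypothetical example: two nearly intersecting rays
print(ray_intersection([0, 0, 0], [1, 1, 1], [2, 0, 2], [1, 1, 1.5]))
```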

Based on any one of the above embodiments, the method for determining the mapping relationship between the pixel coordinate system and the geodetic coordinate system includes:

determining a mapping relation between a pixel coordinate system and an image coordinate system based on the projection of the optical center of the camera on an imaging plane and the physical size of each pixel point in a live-action image acquired by the camera;

determining a mapping relation between an image coordinate system and a camera coordinate system based on the focal length of the camera;

determining a mapping relation between a camera coordinate system and a geodetic coordinate system based on an external parameter matrix of the camera;

and determining the mapping relation between the pixel coordinate system and the geodetic coordinate system based on the mapping relation between the pixel coordinate system and the image coordinate system, the mapping relation between the image coordinate system and the camera coordinate system and the mapping relation between the camera coordinate system and the geodetic coordinate system.

Specifically, the image coordinate system R is a plane coordinate system obtained by taking the projection of the camera's optical centre on the imaging plane as the origin and the live-action image as the coordinate area; its unit is the millimetre (mm).

The live-action image is generally a rectangular image. The origin of the image coordinate system R lies at the centre of the image, while the origin of the pixel coordinate system U lies at its upper-left corner. The two systems also use different units: the image coordinate system is in millimetres (mm) and the pixel coordinate system is in pixels.

Fig. 2 is a schematic diagram of the mapping relationship between the pixel coordinate system and the image coordinate system provided in the present application. As shown in Fig. 2, u and v record the column and row indices of a pixel in the pixel coordinate system U, and x, y record the coordinates of a point in the image coordinate system R. The origin (0, 0) of the image coordinate system R corresponds to the point Ou(u0, v0) in the pixel coordinate system U. The physical size represented by each pixel along the x axis and the y axis is denoted dx and dy respectively. It should be noted that the pixel cells of the camera's image sensor are in general not square, so dx and dy may differ.

The pixel point P11 is recorded as P11(u, v) in the pixel coordinate system U and as P11(x11, y11) in the image coordinate system R. The following relations hold:

(u-u0)*dx=x11

(v-v0)*dy=y11

and (3) shifting the terms, and converting the terms into a homogeneous form to obtain:

whereinAs represented by point P11 in the pixel coordinate system,in the representation of the image coordinate system point P11,for pan and zoom operations.

The camera coordinate system C is a three-dimensional coordinate system established with the optical centre of the camera as the origin and its principal axis pointing directly ahead of the camera. The relationship between the image coordinate system R and the camera coordinate system C can be obtained by symmetrically flipping the pinhole camera model.

Fig. 3 is a schematic diagram of a mapping relationship between an image coordinate system and a camera coordinate system provided by the present application, and as shown in fig. 3, P1 is a point in the camera coordinate system C, whose coordinates are (x1C, y1C, z1C), and P11 is a corresponding pixel point of P1 in the image coordinate system R, whose coordinates are (x11, y 11). From the similar triangular relationships in the figure, one can obtain:

\[
\frac{x_{11}}{x_{1c}} = \frac{y_{11}}{y_{1c}} = \frac{f}{z_{1c}},
\qquad\text{i.e.}\qquad
x_{11} = f\,\frac{x_{1c}}{z_{1c}},\quad y_{11} = f\,\frac{y_{1c}}{z_{1c}}
\]

In the formula, f is the focal length of the camera, i.e. the distance from the imaging plane where the image coordinate system R is located to the origin of the camera coordinate system C. Converting the above equation to homogeneous form yields:

\[
z_{1c}\begin{bmatrix} x_{11} \\ y_{11} \\ 1 \end{bmatrix}
=
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} x_{1c} \\ y_{1c} \\ z_{1c} \\ 1 \end{bmatrix}
\]

where \([x_{11}, y_{11}, 1]^T\) is the representation in the image coordinate system of the point P11 corresponding to P1, and \([x_{1c}, y_{1c}, z_{1c}, 1]^T\) is the representation of P1 in the camera coordinate system.

The transformation between the geodetic coordinate system O and the camera coordinate system C is a rigid-body transformation: there is no deformation, only a rotation operation and a translation operation.

Fig. 4 is a schematic diagram of a mapping relationship between a camera coordinate system and a geodetic coordinate system provided by the present application, and as shown in fig. 4, a coordinate P1C of P1 in the camera coordinate system C is (x1C, y1C, z1C), and a coordinate P1O of P1 in the geodetic coordinate system O is (x1O, y1O, z 1O). The coordinate transformation may be expressed as:

\[
P_{1c} = R \cdot P_{1o} + T
\]

To unify rotation and translation, this is written in homogeneous form:

\[
\begin{bmatrix} P_{1c} \\ 1 \end{bmatrix}
=
\begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix}
\begin{bmatrix} P_{1o} \\ 1 \end{bmatrix}
\]

where R is the rotation matrix and T is the translation vector.

The rotation matrix R and the translational vector T are the external parameters of the camera.

To sum up, a mapping relationship between feature points (points in the geodetic coordinate system) of a target object and pixel points (points in the pixel coordinate system) in a live-action image needs to be established, and three steps of conversion are needed, namely, the geodetic coordinate system O is converted into the camera coordinate system C, then into the image coordinate system R, and finally into the pixel coordinate system U.

The conversion from the geodetic coordinate system O to the camera coordinate system C is formulated as:

in the formula, R3x3Is a rotation matrix with a size of 3 x3, T3x1The translation vector is 3 x1 in size.Is the coordinate of the point P1 in the geodetic coordinate system O, and can also be expressed as [ x1O, y1O, z1O,1]TIs the coordinate of point P1 in the camera coordinate system C, and can also be expressed as [ x1C, y1C, z1C,1]T

The conversion from the camera coordinate system C to the image coordinate system R is formulated as:

in the formula (I), the compound is shown in the specification,f is the focal length of the camera, representing the representation of P11 corresponding to P1 in the image coordinate system.

The conversion from the image coordinate system R to the pixel coordinate system U is formulated as:

in the formula (I), the compound is shown in the specification,is expressed by the point P11 in the pixel coordinate system U (U)0,v0) Is the seat of the projection point of the optical center of the camera on the imaging planeIn the index, dx and dy are the physical dimensions of each pixel point in the live-action image acquired by the camera in the x axis and the y axis respectively.

From the above equations, one can obtain:

\[
z_{1c}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_{3\times3} & T_{3\times1} \\ 0^T & 1 \end{bmatrix}
\begin{bmatrix} x_{1o} \\ y_{1o} \\ z_{1o} \\ 1 \end{bmatrix}
\]

Simplifying, with \(f_x = f/dx\) and \(f_y = f/dy\), gives:

\[
z_{1c}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_{3\times3} & T_{3\times1} \\ 0^T & 1 \end{bmatrix}
\begin{bmatrix} x_{1o} \\ y_{1o} \\ z_{1o} \\ 1 \end{bmatrix}
\]

the formula is an imaging formula of the camera under ideal conditions. It establishes a unique mapping from the geodetic coordinate system to the pixel coordinate system.

During the derivation, the image plane may also be normalized, i.e. the image coordinates scaled proportionally so that f = 1.
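The following is a minimal Python sketch of the ideal imaging formula above; the intrinsic matrix and the extrinsic rotation and translation are illustrative placeholder values, not calibrated parameters. It maps a point from the geodetic coordinate system to pixel coordinates via the three-step conversion.

```python
import numpy as np

fx, fy, u0, v0 = 1000.0, 1000.0, 960.0, 540.0
K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])          # intrinsic (internal reference) matrix
R = np.eye(3)                            # rotation: camera axes aligned with geodetic axes
T = np.array([0.0, 0.0, 0.0])            # translation of the extrinsic matrix

def geodetic_to_pixel(p_geo):
    """Map a point from the geodetic coordinate system to pixel coordinates (u, v)."""
    p_cam = R @ np.asarray(p_geo, dtype=float) + T   # geodetic -> camera coordinate system
    uvw = K @ p_cam                                  # camera -> pixel (homogeneous)
    return uvw[:2] / uvw[2]                          # divide by z_1c to obtain (u, v)

print(geodetic_to_pixel([1.0, 0.5, 10.0]))           # e.g. a point 10 m in front of the camera
```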

According to any of the above embodiments, the internal reference matrix of the camera is determined based on the mapping relationship between the pixel coordinate system and the image coordinate system and the mapping relationship between the image coordinate system and the camera coordinate system.

Specifically, according to the mapping relationship between the pixel coordinate system U and the image coordinate system R, one obtains:

\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_{11} \\ y_{11} \\ 1 \end{bmatrix}
\]

According to the mapping relationship between the image coordinate system R and the camera coordinate system C, one obtains:

\[
z_{1c}\begin{bmatrix} x_{11} \\ y_{11} \\ 1 \end{bmatrix}
=
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} x_{1c} \\ y_{1c} \\ z_{1c} \\ 1 \end{bmatrix}
\]

Thus the internal reference (intrinsic) matrix of the camera is obtained as:

\[
K =
\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\]

In the formula, \(f_x = f/dx\) is the focal length of the camera in the X-axis direction, \(f_y = f/dy\) is the focal length in the Y-axis direction, and \((u_0, v_0)\) is the coordinate of the projection of the camera's optical centre on the imaging plane.

Based on any one of the above embodiments, the method for determining the internal reference matrix of the camera includes:

and calibrating the camera according to at least one of a DLT method, an RAC method, a plane calibration method, a plane circle calibration method and a parallel circle calibration method, and determining the internal reference matrix of the camera.

Specifically, the internal reference matrix of the camera can be obtained by means of camera calibration. The camera calibration process is a process of detecting the camera by using a standard measuring instrument and solving internal parameters of the camera.

The intrinsic parameters of the camera include two parts: first, the basic parameters of the camera, namely the coordinates of the projection of the optical centre on the imaging plane and the focal length, which can be represented by the internal reference matrix; and second, the distortion coefficient vector, namely the tangential and radial distortion coefficients.

The distortion of the camera includes tangential distortion and radial distortion. In the manufacturing process of the camera, the lens is usually thick in the middle and thin at the edge, and the light rays are distorted to a greater extent far away from the center of the lens. This is the cause of radial distortion. Tangential distortion arises from imperfections in the manufacture of the camera such that the image plane is not parallel to the lens itself.

The distortion of the camera is formulated as:

\[
\begin{aligned}
x' &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2p_1 x y + p_2\,(r^2 + 2x^2) \\
y' &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1\,(r^2 + 2y^2) + 2p_2 x y
\end{aligned}
\]

In the formula, x and y are the image point coordinates before the distortion model is applied, x' and y' are the corresponding distorted image point coordinates, k1, k2 and k3 are the radial distortion coefficients, p1 and p2 are the tangential distortion coefficients, and \(r^2 = x^2 + y^2\) is the squared radial distance from the distortion centre.

The calibration method used in the present application mainly includes a DLT (Direct Linear Transformation) method, an RAC (Radial Alignment Constraint) method, a plane calibration method, a plane circle calibration method, a parallel circle calibration method, and the like.

For example, the plane calibration method, also called the chessboard calibration method, works as follows: several images of the same chessboard calibration plate are taken at different angles in a three-dimensional scene; because the corner points of the chessboard are equally spaced, their spatial three-dimensional coordinates can be regarded as known; the pixel coordinates of the corner points in each image are then computed. In this way the mapping between the three-dimensional spatial coordinates and the pixel coordinates of each chessboard image is established, and finally the intrinsic parameters of the camera are solved.

The OpenCV package or MATLAB may be used. For example, the calibration can be performed with the calibrateCamera() function provided by OpenCV, which computes the intrinsic parameters of the camera, including the camera intrinsic matrix K above and the distortion coefficient vector D = (k1, k2, p1, p2, k3).
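A minimal sketch of such a chessboard calibration with OpenCV's calibrateCamera() is shown below; the 9×6 pattern size, the 25 mm square size and the image path are assumptions for illustration.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)        # inner corners per chessboard row/column (assumed)
square = 25.0           # chessboard square size in mm (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("chessboard/*.jpg"):               # assumed image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)          # known 3-D corner coordinates
        img_points.append(corners)       # detected pixel coordinates

# Solve for the intrinsic matrix K and the distortion vector D = (k1, k2, p1, p2, k3)
ret, K, D, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("intrinsic matrix:\n", K)
print("distortion coefficients:", D.ravel())
```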

Based on any embodiment, the method for determining the external parameter matrix of the camera comprises the following steps:

determining an internal reference matrix of the camera;

and determining an external parameter matrix of the camera based on the pixel coordinates and the space coordinates of the calibration points and the internal parameter matrix of the camera.

Specifically, the calibration points may be selected on a known target object whose spatial coordinates in the geodetic coordinate system O and whose pixel coordinates in the pixel coordinate system U are both known.

According to the formula:

\[
z_{1c}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_{3\times3} & T_{3\times1} \\ 0^T & 1 \end{bmatrix}
\begin{bmatrix} x_{1o} \\ y_{1o} \\ z_{1o} \\ 1 \end{bmatrix}
\]

if P1 is a calibration point, the internal reference matrix of the camera can be obtained by the calibration method above, and the external parameter matrix \([R_{3\times3}\;\; T_{3\times1}]\) can then be determined by the PnP (Perspective-n-Point) algorithm.

The solution can be performed using the solvePnP and solvePnPRansac functions in the OpenCV development kit. The result is the rotation vector R and translation vector T of the camera relative to the three-dimensional spatial coordinate system of the known object.
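A minimal sketch of solving the external parameters with OpenCV's solvePnP is shown below; the calibration points, intrinsic matrix and distortion vector are illustrative placeholders rather than real calibration data.

```python
import cv2
import numpy as np

# Four calibration points: known geodetic coordinates and corresponding pixel coordinates
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [1.0, 1.0, 0.0],
                          [0.0, 1.0, 0.0]], dtype=np.float64)   # spatial coords (m), assumed
image_points = np.array([[400.0, 300.0],
                         [600.0, 305.0],
                         [598.0, 500.0],
                         [402.0, 495.0]], dtype=np.float64)     # pixel coords, assumed
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])          # intrinsic matrix from calibration (placeholder)
D = np.zeros(5)                          # assume distortion already corrected

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, D)
R, _ = cv2.Rodrigues(rvec)               # rotation vector -> 3x3 rotation matrix
print("rotation matrix:\n", R)
print("translation vector:", tvec.ravel())
```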

The embodiment of the application provides a method for solving coordinate points on image coordinates corresponding to spatial axis coordinate points, which comprises the following steps:

step one, obtaining an internal reference matrix of an industrial camera, namely a visual sensor, through camera calibration.

Step two: given the three-dimensional spatial coordinates of several key points of the target object, the corresponding pixel coordinates in the live-action image, and the internal reference matrix from step one, solve for the external parameter matrix of the camera at that moment, i.e. its rotation vector and translation vector.

Step three: from the internal and external reference matrices of the camera, solve for the pixel coordinate points onto which the origin (0.0, 0.0, 0.0) of the geodetic coordinate system and the axis points (X, 0.0, 0.0), (0.0, Y, 0.0) and (0.0, 0.0, Z) are mapped.

The above method can be solved using the projectPoints function provided by OpenCV.
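The following sketch illustrates step three with OpenCV's projectPoints, assuming the rotation vector, translation vector and intrinsic parameters obtained above; all numeric values are placeholders.

```python
import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])          # intrinsic matrix (placeholder)
D = np.zeros(5)                          # distortion coefficients (placeholder)
rvec = np.zeros(3)                       # rotation vector from solvePnP (placeholder)
tvec = np.array([0.0, 0.0, 20.0])        # translation vector from solvePnP (placeholder)

axis_points = np.array([[0.0, 0.0, 0.0],   # geodetic origin
                        [1.0, 0.0, 0.0],   # X-axis point
                        [0.0, 1.0, 0.0],   # Y-axis point
                        [0.0, 0.0, 1.0]])  # Z-axis point

pixels, _ = cv2.projectPoints(axis_points, rvec, tvec, K, D)
print(pixels.reshape(-1, 2))             # pixel coordinates of the mapped axis points
```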

According to any of the above embodiments, the three-dimensional attitude angle of the camera is determined based on the external parameter matrix of the camera.

Specifically, the three-dimensional attitude angle of the camera includes the yaw angle (Yaw), roll angle (Roll) and pitch angle (Pitch). The yaw angle is the rotation about the Z axis, the roll angle is the rotation about the Y axis, and the pitch angle is the rotation about the X axis.

The rotation part R of the camera's external parameters can be represented as:

\[
R = R_z(\varphi)\, R_y(\theta)\, R_x(\psi)
\]

where \(R_x(\psi)\) is the matrix of rotation by \(\psi\) about the X axis, \(\psi\) being the pitch angle:

\[
R_x(\psi) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\psi & -\sin\psi \\ 0 & \sin\psi & \cos\psi \end{bmatrix}
\]

\(R_y(\theta)\) is the matrix of rotation by \(\theta\) about the Y axis, \(\theta\) being the roll angle:

\[
R_y(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}
\]

and \(R_z(\varphi)\) is the matrix of rotation by \(\varphi\) about the Z axis, \(\varphi\) being the yaw angle:

\[
R_z(\varphi) = \begin{bmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]

The rotation matrix R can therefore be written out as:

\[
R =
\begin{bmatrix}
\cos\theta\cos\varphi & \sin\psi\sin\theta\cos\varphi - \cos\psi\sin\varphi & \cos\psi\sin\theta\cos\varphi + \sin\psi\sin\varphi \\
\cos\theta\sin\varphi & \sin\psi\sin\theta\sin\varphi + \cos\psi\cos\varphi & \cos\psi\sin\theta\sin\varphi - \sin\psi\cos\varphi \\
-\sin\theta & \sin\psi\cos\theta & \cos\psi\cos\theta
\end{bmatrix}
\]

Denoting the elements of R by \(r_{ij}\), the method in the above embodiment solves:

\[
\psi = \operatorname{atan2}(r_{32}, r_{33}), \qquad
\theta = \operatorname{atan2}\!\left(-r_{31}, \sqrt{r_{32}^2 + r_{33}^2}\right), \qquad
\varphi = \operatorname{atan2}(r_{21}, r_{11})
\]

so that the three-dimensional attitude angle of the camera can be found.

Solving the three-dimensional attitude angle of the camera can be realized by using languages such as Python, Java, Matlab and the like.
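For example, a minimal Python sketch of this decomposition is given below; the rotation vector is an illustrative placeholder standing in for the one returned by solvePnP.

```python
import cv2
import numpy as np

rvec = np.array([0.1, -0.2, 0.3])               # rotation vector (placeholder)
R, _ = cv2.Rodrigues(rvec)                      # rotation vector -> rotation matrix

pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))                      # psi, about X
roll  = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))  # theta, about Y
yaw   = np.degrees(np.arctan2(R[1, 0], R[0, 0]))                      # phi, about Z
print(f"pitch={pitch:.2f} deg, roll={roll:.2f} deg, yaw={yaw:.2f} deg")
```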

The following describes the live-action image measuring device provided in the present application, and the live-action image measuring device described below and the live-action image measuring method described above may be referred to in correspondence with each other.

Based on any of the above embodiments, fig. 5 is a schematic structural diagram of a live-action image measuring device provided in the present application, as shown in fig. 5, the device includes:

a feature point determination unit 510 for determining a plurality of feature points on the target object to be measured;

a coordinate determining unit 520, configured to obtain spatial coordinates of any feature point in the geodetic coordinate system based on pixel coordinates of the feature point in the two live-action images and a mapping relationship between the pixel coordinate system and the geodetic coordinate system;

a spatial measurement unit 530 for determining a measurement result of the target object based on spatial coordinates of the plurality of feature points;

wherein the two live-action images are captured by the camera at two spatial positions separated by a preset interval.

Specifically, the feature point determining unit 510 is configured to determine a plurality of feature points on the target object to be measured, the coordinate determining unit 520 is configured to obtain a spatial coordinate of any feature point in the geodetic coordinate system based on pixel coordinates of any feature point in two live-action images and a mapping relationship between the pixel coordinate system and the geodetic coordinate system, and the spatial measuring unit 530 is configured to determine a measurement result of the target object based on the spatial coordinates of the plurality of feature points.

The live-action image measuring device provided by the embodiment of the application captures live-action images of the target object with the camera; according to the mapping relationship between the pixel coordinate system and the geodetic coordinate system, the pixel coordinates of each feature point of the target object in the live-action images are converted into spatial coordinates in the geodetic coordinate system; and the spatial parameters of the target object are then calculated to obtain its measurement result. No on-site manual measurement is required, the informatization of road-property data is realized, and the accuracy of road-property information collection and updating is improved.

Based on any of the above embodiments, the coordinate determination unit 520 includes:

the positioning ray determining subunit is used for obtaining a positioning ray of the pixel coordinate of the feature point in the live-action image mapped to the space coordinate based on the pixel coordinate of the feature point in any live-action image and the mapping relation between the pixel coordinate system and the geodetic coordinate system;

and the space coordinate determining subunit is used for obtaining the space coordinate of any characteristic point in the geodetic coordinate system based on the intersection point of the positioning rays of the characteristic point which are mapped to the space coordinate by the pixel coordinates in the two live-action images.

Based on any of the above embodiments, the apparatus further includes a multi-coordinate system conversion unit, and the multi-coordinate system conversion unit includes:

the first coordinate conversion subunit is used for determining a mapping relationship between the pixel coordinate system and the image coordinate system based on the projection of the camera's optical centre on the imaging plane and the physical size of each pixel in a live-action image acquired by the camera;

the second coordinate conversion subunit is used for determining the mapping relation between the image coordinate system and the camera coordinate system based on the focal length of the camera;

the third coordinate conversion subunit is used for determining a mapping relation between a camera coordinate system and a geodetic coordinate system based on the external parameter matrix of the camera;

and the multi-coordinate conversion subunit is used for determining the mapping relation between the pixel coordinate system and the geodetic coordinate system based on the mapping relation between the pixel coordinate system and the image coordinate system, the mapping relation between the image coordinate system and the camera coordinate system and the mapping relation between the camera coordinate system and the geodetic coordinate system.

According to any of the above embodiments, the internal reference matrix of the camera is determined based on the mapping relationship between the pixel coordinate system and the image coordinate system and the mapping relationship between the image coordinate system and the camera coordinate system.

Based on any embodiment above, the apparatus further comprises:

and the internal reference matrix determining unit is used for calibrating the camera according to at least one of a DLT method, an RAC method, a plane calibration method, a plane circle calibration method and a parallel circle calibration method, and determining the internal reference matrix of the camera.

Based on any embodiment above, the apparatus further comprises:

and the external reference matrix determining unit is used for determining an internal reference matrix of the camera and determining the external reference matrix of the camera based on the pixel coordinates and the space coordinates of the calibration points and the internal reference matrix of the camera.

According to any of the above embodiments, the three-dimensional attitude angle of the camera is determined based on the external parameter matrix of the camera.

Based on any of the above embodiments, fig. 6 is a schematic structural diagram of an electronic device provided in the present application. As shown in fig. 6, the electronic device may include: a Processor 610, a Communications Interface 620, a Memory 630 and a Communications Bus 640, wherein the processor 610, the communication interface 620 and the memory 630 communicate with one another through the communication bus 640. The processor 610 may call the logic instructions in the memory 630 to execute the method provided by the above embodiments, the method comprising:

determining a plurality of feature points on a target object to be measured; obtaining the spatial coordinates of any feature point in the geodetic coordinate system based on the pixel coordinates of that feature point in the two live-action images and the mapping relationship between the pixel coordinate system and the geodetic coordinate system; and determining a measurement result of the target object based on the spatial coordinates of the plurality of feature points, wherein the two live-action images are captured by a camera at two spatial positions separated by a preset interval.

In addition, the logic commands in the memory 630 may be implemented in the form of software functional units and stored in a computer readable storage medium when the logic commands are sold or used as independent products. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including commands for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

The processor in the electronic device provided in the embodiment of the present application may call a logic instruction in the memory to implement the live-action image measurement method, and the specific implementation manner of the method is consistent with the method implementation manner and may achieve the same beneficial effects, which is not described herein again.

The present application further provides a non-transitory computer-readable storage medium, which is described below, and the non-transitory computer-readable storage medium described below and the live-action image measurement method described above may be referred to in correspondence with each other.

Embodiments of the present application provide a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program is implemented to perform the method provided in the foregoing embodiments when executed by a processor, and the method includes:

determining a plurality of feature points on a target object to be measured; obtaining the spatial coordinates of any feature point in the geodetic coordinate system based on the pixel coordinates of that feature point in the two live-action images and the mapping relationship between the pixel coordinate system and the geodetic coordinate system; and determining a measurement result of the target object based on the spatial coordinates of the plurality of feature points, wherein the two live-action images are captured by a camera at two spatial positions separated by a preset interval.

When the computer program stored on the non-transitory computer readable storage medium provided in the embodiment of the present application is executed, the method for measuring live-action images is implemented, and the specific implementation manner is consistent with the method implementation manner and can achieve the same beneficial effects, which is not described herein again.

The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.

Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes commands for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.

Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
