Camera module installation method and mobile platform

Document No.: 91111    Publication date: 2021-10-08

Reading note: This invention, "Camera module installation method and mobile platform" (一种摄像头模组的安装方法及移动平台), was created by 李俊超, 陈伟, and 卢佐旻 on 2021-03-24. Its main content is as follows. A method for installing a camera module, and a mobile platform, for use in automated or intelligent driving. The method comprises: installing the camera module on the mobile platform such that the camera module is parallel to the ground on which the mobile platform is located. The camera module comprises a lens group and an image sensor, the lens group comprising at least one lens. The projection of the center of the lens group onto the image sensor plane is a first position, and the distance between the first position and the center of the image sensor is greater than a first threshold, the first threshold being greater than 0. Because the camera module can be mounted parallel to the ground on which the mobile platform is located, the direction of motion is kept perpendicular to the image sensor plane, which improves the accuracy and robustness of camera motion estimation and thereby reduces the target positioning error; the required detection range is met without mounting the camera module at a tilt. The method can be applied to the Internet of Vehicles, for example vehicle-to-everything (V2X), Long Term Evolution for vehicle communication (LTE-V), and vehicle-to-vehicle (V2V).

1. A method for installing a camera module, characterized in that the method comprises:

installing the camera module on a mobile platform, wherein the camera module is parallel to the ground where the mobile platform is located;

the camera module comprises a lens group and an image sensor, wherein the lens group comprises at least one lens;

the projection of the center of the lens group on the image sensor plane is a first position;

the distance between the first position and the center of the image sensor is greater than a first threshold, and the first threshold is greater than 0.

2. The mounting method according to claim 1, wherein the optical axis of the lens group and the normal of the image sensor plane are both parallel to the ground on which the mobile platform is located.

3. The mounting method according to claim 1 or 2, wherein the line connecting the first position and the center of the image sensor is perpendicular to the horizontal-axis direction of a first coordinate system, the first coordinate system being a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

4. The mounting method according to any one of claims 1 to 3, wherein the distance between the first position and the center of the image sensor is related to a first angle between the angle bisector of the vertical field of view (VFOV) of the camera module and the optical axis of the lens group.

5. The mounting method according to claim 4, wherein the first angle is greater than 0 degrees and less than or equal to half of the complement of the VFOV of the camera module.

6. The mounting method according to claim 4 or 5, wherein the absolute value of the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

|y0| = |fy · h(θ)|

where fy is the focal length at the center of the camera module, θ is the first angle, and h(θ) = tan(θ), or h(θ) is an N-th order function of θ with N an integer greater than 0; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

7. The mounting method according to any one of claims 1 to 5, wherein the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

where fy is the focal length at the center of the camera module and φ is the VFOV of the camera module; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

8. The mounting method according to any one of claims 1 to 5, wherein the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

y0 = -fy · θd

where fy is the focal length at the center of the camera module, θd is related to the value of θ1, and φ is the VFOV of the camera module; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

9. The mounting method according to any one of claims 1 to 5, wherein the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

where fy is the focal length at the center of the camera module, θ0 is a preset number of degrees, and φ is the VFOV of the camera module; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

10. The mounting method according to any one of claims 1 to 5, wherein the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

y0 = fy · θs

where fy is the focal length at the center of the camera module, θs is related to the value of θ2, φ is the VFOV of the camera module, and θ0 is a preset number of degrees; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

11. A mobile platform, characterized by comprising a camera module, wherein the camera module comprises a lens group and an image sensor, and the lens group comprises at least one lens; wherein:

the camera module is parallel to the ground where the mobile platform is located;

the projection of the center of the lens group on the image sensor plane is a first position;

the distance between the first position and the center of the image sensor is greater than a first threshold, and the first threshold is greater than 0.

12. The mobile platform of claim 11, wherein an optical axis of the lens group and a normal of the image sensor plane are both parallel to a ground surface on which the mobile platform is located.

13. The mobile platform according to claim 11 or 12, wherein the line connecting the center of the lens group and the center of the image sensor is perpendicular to the horizontal-axis direction of a first coordinate system, the first coordinate system being a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

14. The mobile platform according to any one of claims 11 to 13, wherein the distance between the first position and the center of the image sensor is related to a first angle between the angle bisector of the vertical field of view (VFOV) of the camera module and the optical axis of the lens group.

15. The mobile platform according to claim 14, wherein the first angle is greater than 0 degrees and less than or equal to half of the complement of the VFOV of the camera module.

16. The mobile platform according to claim 14 or 15, wherein the absolute value of the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

|y0| = |fy · h(θ)|

where fy is the focal length at the center of the camera module, θ is the first angle, and h(θ) = tan(θ), or h(θ) is an N-th order function of θ with N an integer greater than 0; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

17. The mobile platform according to any one of claims 11 to 15, wherein the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

where fy is the focal length at the center of the camera module and φ is the VFOV of the camera module; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

18. The mobile platform according to any one of claims 11 to 15, wherein the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

y0 = -fy · θd

where fy is the focal length at the center of the camera module, θd is related to the value of θ1, and φ is the VFOV of the camera module; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

19. The mobile platform according to any one of claims 11 to 15, wherein the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

where fy is the focal length at the center of the camera module, θ0 is a preset number of degrees, and φ is the VFOV of the camera module; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

20. The mobile platform according to any one of claims 11 to 15, wherein the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

y0 = fy · θs

where fy is the focal length at the center of the camera module, θs is related to the value of θ2, φ is the VFOV of the camera module, and θ0 is a preset number of degrees; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

Technical Field

The present application relates to the field of sensor technology, and in particular to a camera module installation method and a mobile platform.

Background

The lens group and the image sensor are two key components of a camera module. The lens group may be a group of convex (or concave) lenses, and the image sensor is the imaging surface. The image sensor converts the light transmitted through the lens group into an electrical signal, which is then converted into a digital signal by analog-to-digital conversion inside the image sensor to form an image.

At present, a camera module can be used to detect the surrounding environment and then perform three-dimensional reconstruction of that environment, thereby achieving target positioning. For example, when the camera module is applied to a vehicle-mounted sensing system, its functions mainly include detection and recognition of objects around the vehicle, such as vehicles, pedestrians, general obstacles, lane lines, road markers, and traffic signs; measurement of the distance and speed of the detected objects; estimation of camera motion (i.e., vehicle motion), including rotation and translation; and, on that basis, three-dimensional reconstruction of the surrounding environment to achieve target positioning.

When the camera module is mounted on a mobile platform, such as a vehicle or another moving object, the accuracy and robustness of camera motion estimation also need to be taken into account in order to reduce the target positioning error.

Disclosure of Invention

The present application provides a camera module installation method and a mobile platform, which are used to solve the problem that an installation chosen to meet a required detection range can degrade the accuracy and robustness of camera motion estimation and thus cause a large target positioning error.

In a first aspect, the present application provides a method for installing a camera module, which may specifically include: installing the camera module on a mobile platform such that the camera module is parallel to the ground on which the mobile platform is located. The camera module comprises a lens group and an image sensor, and the lens group comprises at least one lens; the projection of the center of the lens group onto the image sensor plane is a first position; the distance between the first position and the center of the image sensor is greater than a first threshold, and the first threshold is greater than 0.

With this installation method, the camera module is parallel to the ground on which the mobile platform is located, which ensures that the direction of motion is perpendicular to the image sensor plane; this improves the accuracy and robustness of camera motion estimation and reduces the target positioning error. Because the distance between the first position (the projection of the lens-group center onto the image sensor plane) and the center of the image sensor is greater than the first threshold, the sensing capability of the region above or below the center of the lens group is increased, so the required detection range can be met without mounting the camera module at a tilt.

In one possible design, the optical axis of the lens group and the normal of the image sensor plane are both parallel to the ground on which the mobile platform is located. This ensures that the direction of motion is perpendicular to the image sensor plane, which improves the accuracy and robustness of camera motion estimation.

In one possible design, the line connecting the first position and the center of the image sensor is perpendicular to the horizontal-axis direction of the first coordinate system, the first coordinate system being a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis. As a result, the offset between the first position and the center of the image sensor exists only in the vertical direction and not in the horizontal direction, which preserves the detection performance of the camera module.

In one possible design, the distance between the first position and the center of the image sensor is related to a first angle between the angle bisector of the vertical field of view (VFOV) of the camera module and the optical axis of the lens group. Specifically, the first angle represents the direction of the VFOV of the camera module and thus indicates the position of the actual detection range, so the required detection range can be obtained by adjusting the first angle. This allows the distance between the first position and the center of the image sensor to be determined according to the actual detection requirements.

In one possible design, the first angle is greater than 0 degrees and less than or equal to half of the complement of the VFOV of the camera module. The first angle can be chosen according to the actually required detection range so as to determine the direction of the VFOV of the camera module, and in turn the distance between the first position and the center of the image sensor, so that the actual detection requirement is met.

In one possible design, the first position is higher than the center of the image sensor, which increases the sensing capability of the region below the center of the lens group; that is, more information from an area that would otherwise not be imaged on the image sensor can reach it through the lens group. Alternatively, the first position is lower than the center of the image sensor, which increases the sensing capability of the region above the center of the lens group in the same way.

In one possible design, the absolute value of the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

|y0| = |fy · h(θ)|

where fy is the focal length at the center of the camera module, θ is the first angle, and h(θ) = tan(θ), or h(θ) is an N-th order function of θ with N an integer greater than 0; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.
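As a concrete illustration of the relation |y0| = |fy · h(θ)| for the case h(θ) = tan(θ), the sketch below (not part of the patent; the focal length and angle values are hypothetical) computes the pixel offset of the optical center from the sensor center:

```python
import math

def optical_center_offset(fy_pixels: float, theta_deg: float) -> float:
    """Offset |y0| (in pixels) of the optical center / FOE from the
    image-sensor center, using |y0| = |fy * h(theta)| with
    h(theta) = tan(theta)."""
    theta = math.radians(theta_deg)
    return abs(fy_pixels * math.tan(theta))

# Hypothetical values: fy = 1200 px, first angle theta = 15 degrees
print(round(optical_center_offset(1200.0, 15.0), 1))  # ≈ 321.5 px
```

A larger first angle or a longer focal length both push the optical center farther from the sensor center, which is why the first threshold of the claims is strictly positive.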

In one possible design, the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

where fy is the focal length at the center of the camera module and φ is the VFOV of the camera module; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

In one possible design, the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

y0 = -fy · θd

where fy is the focal length at the center of the camera module; θd is related to θ1 and may be a function g(θ1) of θ1; φ is the VFOV of the camera module; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

In one possible design, the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

where fy is the focal length at the center of the camera module, θ0 is a preset number of degrees, and φ is the VFOV of the camera module; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

In one possible design, the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

y0 = fy · θs

where fy is the focal length at the center of the camera module; θs is related to θ2 and may be a function g(θ2) of θ2; φ is the VFOV of the camera module; θ0 is a preset number of degrees; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

In a second aspect, the present application provides a mobile platform. The mobile platform is provided with a camera module that is parallel to the ground on which the mobile platform is located; the camera module comprises a lens group and an image sensor, and the lens group comprises at least one lens. The projection of the center of the lens group onto the image sensor plane is a first position, and the distance between the first position and the center of the image sensor is greater than a first threshold, the first threshold being greater than 0.

In this way, the camera module is parallel to the ground on which the mobile platform is located, which ensures that the direction of motion is perpendicular to the image sensor plane; this improves the accuracy and robustness of camera motion estimation and further reduces the target positioning error. Because the distance between the first position (the projection of the lens-group center onto the image sensor plane) and the center of the image sensor is greater than the first threshold, the sensing capability of the region above or below the center of the lens group is increased, so the required detection range can be met without mounting the camera module at a tilt.

In one possible design, the optical axis of the lens group and the normal of the image sensor plane are both parallel to the ground on which the mobile platform is located. This ensures that the direction of motion is perpendicular to the image sensor plane, which improves the accuracy and robustness of camera motion estimation.

In one possible design, the line connecting the center of the lens group and the center of the image sensor is perpendicular to the horizontal-axis direction of the first coordinate system, the first coordinate system being a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis. As a result, the offset between the first position and the center of the image sensor exists only in the vertical direction and not in the horizontal direction, which preserves the performance of the camera module.

In one possible design, the distance between the first position and the center of the image sensor is related to a first angle between the angle bisector of the vertical field of view (VFOV) of the camera module and the optical axis of the lens group. Specifically, the first angle represents the direction of the VFOV of the camera module and thus indicates the position of the actual detection range, so the required detection range can be obtained by adjusting the first angle. This allows the distance between the first position and the center of the image sensor to be determined according to the actual detection requirements.

In one possible design, the first angle is greater than 0 degrees and less than or equal to half of the complement of the VFOV of the camera module. The first angle can be chosen according to the actually required detection range so as to determine the direction of the VFOV of the camera module, and in turn the distance between the first position and the center of the image sensor, so that the actual detection requirement is met.

In one possible design, the first position is higher than the center of the image sensor, which increases the sensing capability of the region below the center of the lens group; that is, more information from an area that would otherwise not be imaged on the image sensor can reach it through the lens group. Alternatively, the first position is lower than the center of the image sensor, which increases the sensing capability of the region above the center of the lens group in the same way.

In one possible design, the absolute value of the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

|y0| = |fy · h(θ)|

where fy is the focal length at the center of the camera module, θ is the first angle, and h(θ) = tan(θ), or h(θ) is an N-th order function of θ with N an integer greater than 0; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

In one possible design, the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

where fy is the focal length at the center of the camera module and φ is the VFOV of the camera module; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

In one possible design, the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

y0 = -fy · θd

where fy is the focal length at the center of the camera module; θd is related to θ1 and may, for example, be a function g(θ1) of θ1; φ is the VFOV of the camera module; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

In one possible design, the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

where fy is the focal length at the center of the camera module, θ0 is a preset number of degrees, and φ is the VFOV of the camera module; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

In one possible design, the ordinate y0, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module satisfies the following formula:

y0 = fy · θs

where fy is the focal length at the center of the camera module; θs is related to θ2 and may be a function g(θ2) of θ2; φ is the VFOV of the camera module; θ0 is a preset number of degrees; the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis.

Drawings

Fig. 1 is a schematic diagram of an exploded view of a camera module provided in the present application;

FIG. 2 is a schematic diagram showing an assembled position of a lens group and an image sensor in the prior art;

FIG. 3 is a schematic diagram of the perceived VFOV and horizontal field of view of a prior art camera module;

FIG. 4 is a schematic view of a front-view camera installed in a vehicle-mounted surround-view sensing system in the prior art;

FIG. 5 is a schematic view of a front view camera installed in a vehicle-mounted front view sensing system in the prior art;

fig. 6 is an installation schematic diagram of a camera module provided in the present application;

fig. 7 is a side view of a camera module provided in the present application;

fig. 8 is a projection view of a camera module according to the present disclosure;

fig. 9 is a projection view of another camera module provided in the present application;

fig. 10 is a schematic view illustrating an installation of another camera module provided in the present application;

fig. 11 is an installation diagram of another camera module provided in the present application;

fig. 12 is a schematic diagram of estimating camera motion from matched feature points in two frames of images captured by a camera module according to the present application;

FIG. 13 is a schematic illustration of a solution provided herein;

fig. 14 is a schematic view illustrating an influence of a translation direction of a camera module on translation vector estimation according to the present application.

Detailed Description

The embodiments of the present application will be described in detail below with reference to the accompanying drawings.

The embodiments of the present application provide a camera module installation method and a mobile platform, which are used to solve the problem that an installation chosen to meet a required detection range can degrade the accuracy and robustness of camera motion estimation and thus cause a large target positioning error.

In this application, "at least one" means one or more, and "plural" means two or more.

In the description of the present application, the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or order.

The main components of a compact camera module (CCM) are shown in the exploded view of the camera module in fig. 1. In fig. 1, the camera module mainly includes a lens group and an image sensor. The lens group includes at least one lens; for example, the lens group in fig. 1 includes lens 1, lens 2, and lens 3, and each lens may be a convex (or concave) lens. Specifically, as shown in fig. 1, the lens group is fixed by a lens barrel and a lens mount, with an optical filter in between. The image sensor may be a semiconductor chip with a photosensitive region. Specifically, the image sensor converts the light transmitted through the lens group into an electrical signal, which is then converted into a digital signal by analog-to-digital conversion to form an image.

Illustratively, as shown in fig. 1, the camera module may further include a circuit board, which supports the electronic components and serves as the carrier for their electrical interconnection.

It should be noted that the number of lenses shown in fig. 1 is merely an example; in practice, the lens group may include more or fewer lenses, and this is not limited in the present application.

Currently, during assembly of a camera module, the center of the lens group is usually aligned with the center of the image sensor. As can be seen from the assembly-position diagram of the lens group and image sensor in fig. 2 and the projection diagram of the camera module in fig. 3, the projection of the center of the lens group onto the image sensor plane coincides with the center of the image sensor (i.e., the centers are aligned). In this assembly mode, limited by the height-to-width ratio of the image sensor, the vertical field of view (VFOV) that the camera module can actually perceive is usually small. Illustratively, the perceivable VFOV and horizontal field of view (HFOV) of the camera module may be as shown in fig. 3, where the VFOV is the opening angle subtended at the lens group by the sensor height and the HFOV is the opening angle subtended by the sensor width; only the height and width are illustrated in fig. 3.
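The dependence of the perceivable field of view on the sensor dimensions can be sketched numerically (the sensor size and focal length below are illustrative values, not from this application): for a lens centered on the sensor, each field of view is the opening angle 2·atan(dim / 2f), so the smaller sensor height yields the smaller VFOV.

```python
import math

def field_of_view_deg(sensor_dim_mm: float, focal_length_mm: float) -> float:
    """Opening angle (degrees) subtended by one sensor dimension for a
    lens centered on the sensor: FOV = 2 * atan(dim / (2 * f))."""
    return math.degrees(2.0 * math.atan(sensor_dim_mm / (2.0 * focal_length_mm)))

# Illustrative 16:9 sensor, 4.8 mm wide by 2.7 mm high, 4 mm focal length
hfov = field_of_view_deg(4.8, 4.0)  # ≈ 61.9 degrees
vfov = field_of_view_deg(2.7, 4.0)  # ≈ 37.3 degrees
print(round(hfov, 1), round(vfov, 1))
```

This is why, with center alignment, widening the VFOV requires either a shorter focal length or the tilted mounting discussed next.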

The camera module can be widely applied in scenarios requiring pose estimation and three-dimensional reconstruction of target objects: the surrounding environment is detected through the camera module and reconstructed in three dimensions, thereby achieving target positioning. For example, the camera module can be applied to a mobile platform to detect the environment around the platform. In particular, the camera module can be applied to a vehicle, for example in a vehicle-mounted sensing system: it can serve as the front-view camera of a vehicle-mounted forward-looking sensing system, or as a front-view, side-view, or rear-view camera of a vehicle-mounted surround-view sensing system. The field of view (FOV) of such a front-view, side-view, or rear-view camera may be a general-purpose 40-60 degrees, a narrower 23-40 degrees, a wider 100-180 degrees, and so on. Illustratively, the front-view, side-view, or rear-view camera may specifically be a monocular camera, a binocular camera, or a fisheye camera.

Illustratively, when the camera module is applied to a vehicle-mounted sensing system, its functions mainly include detection and identification of objects around the vehicle such as other vehicles, pedestrians, general obstacles, lane lines, road markings, and traffic signs; measurement of the distance and speed of the detected objects; estimation of the camera motion (including rotation and translation); and, on that basis, three-dimensional reconstruction of the surrounding environment to achieve object positioning.

In a specific example, when the camera module is applied to a vehicle-mounted surround-view sensing system, the environment around the vehicle body is detected by the camera module. To ensure that the blind area around the vehicle body is small enough, the camera module may be tilted downward toward the ground during installation, so that the lower edge of its VFOV stays close to the vehicle body. For example, as shown in the installation diagram of the front-view camera in the vehicle-mounted surround-view sensing system in fig. 4, assuming that the VFOV of the front-view camera is 120 degrees, the camera module may be tilted downward by 30 degrees to meet the detection range (i.e., the optical axis of the camera module is tilted downward by 30 degrees), so that the lower edge of the VFOV of the camera module stays close to the front bumper of the vehicle.

It should be noted that, in the vehicle-mounted surround-view sensing system, the installation principle of the side-view camera and the rear-view camera is the same as that of the front-view camera; these can be referred to each other and will not be described in detail here.

In another specific example, when the camera module is applied to a vehicle-mounted forward-looking sensing system, the camera module is used to detect targets such as traffic signs and traffic lights. For example, as shown in the installation diagram of the front-view camera in the vehicle-mounted forward-looking sensing system in fig. 5, assume that the VFOV of the front-view camera is 40 degrees and that, limited by the engine cover, the part of the field of view of the front-view camera below the horizontal is only about 12 degrees. To meet the detection range, the camera module may be tilted up by 8 degrees during installation (i.e., the optical axis of the camera module is tilted up by 8 degrees).

In both examples shown in fig. 4 and fig. 5, the camera module is mounted obliquely in order to satisfy the detection range. As a result, the motion direction is not perpendicular to the plane of the image sensor, which degrades the accuracy and robustness of camera motion estimation and in turn leads to large target positioning errors. In view of this, the present application provides an installation method of a camera module and a mobile platform, so as to improve the accuracy and robustness of camera motion estimation and thereby reduce target positioning errors.

In order to more clearly describe the technical solution of the embodiment of the present application, the following describes in detail an installation method of a camera module and a mobile platform provided in the embodiment of the present application with reference to the accompanying drawings.

An embodiment of the present application provides an installation method of a camera module, which specifically includes: installing the camera module on a mobile platform such that the camera module is parallel to the ground where the mobile platform is located. For example, the installation schematic diagram of the camera module may be as shown in fig. 6; it should be noted that the shape of the camera module and its installation position on the mobile platform in fig. 6 are merely an example, and the application is not limited thereto. The camera module may include a lens group and an image sensor, the lens group including at least one lens; the projection of the center of the lens group on the image sensor plane is a first position; the distance between the first position and the center of the image sensor is larger than a first threshold, and the first threshold is larger than 0. Illustratively, the side view of the camera module can be as shown in fig. 7 (a) or fig. 7 (b).

In one embodiment, without considering assembly errors, the center of the lens group and the center of the image sensor in an existing camera module are exactly aligned: the projection of the center of the lens group on the image sensor plane coincides with the center of the image sensor, that is, the distance between the projection position and the center of the image sensor is 0. In other words, without considering assembly errors, the first threshold in the camera module of the present application is greater than 0.

In another embodiment, when assembly errors are considered, there is an error value in aligning the center of the lens group in an existing camera module with the center of the image sensor: as long as the projection of the center of the lens group on the image sensor plane is within this error value of the center of the image sensor, the centers are considered aligned. In other words, when assembly errors are considered, the first threshold in the camera module of the present application is greater than this error value.

Specifically, the camera module is mounted on the mobile platform, and when the camera module is parallel to the ground where the mobile platform is located, both the optical axis of the lens group and the normal of the image sensor plane are parallel to the ground where the mobile platform is located. In this way, the accuracy and robustness of camera motion estimation can be improved, the target positioning error reduced, and the target positioning accuracy improved. For example, the mobile platform may be a motor vehicle, a drone, a rail car, a bicycle, a signal lamp, a speed measuring device, or a network device (e.g., a base station, or terminal devices in various systems), and so on. For example, the camera module can be installed on movable equipment such as transportation equipment, home equipment, robots, and gimbals. The present application does not limit the type of terminal equipment on which the camera module is installed or the function of the camera module.

For example, the connecting line between the first position and the center of the image sensor may be perpendicular to the direction of the horizontal axis in a first coordinate system, where the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, the horizontal rightward direction as the positive direction of the horizontal axis, and the vertical downward direction as the positive direction of the vertical axis. That is, the projection of the center of the lens group on the image sensor plane is offset from the image sensor center in the vertical direction, but is not offset from it in the horizontal direction.

In an alternative embodiment, as shown in fig. 7 (a), the first position may be higher than the center of the image sensor. In this case, the projection view of the camera module can be as shown in fig. 8, where it can be seen that the projection position of the center of the lens group on the image sensor plane (i.e., the first position) is higher than the center of the image sensor.

Specifically, relative to the camera module shown in fig. 3, the camera module shown in fig. 8 increases the sensing capability below the center of the lens group; that is, light from regions that previously could not be imaged on the image sensor can now be received by the image sensor through the lens group.

In another alternative embodiment, as shown in fig. 7 (b), the first position may be lower than the center of the image sensor. In this case, the projection view of the camera module can be as shown in fig. 9, where it can be seen that the projection position of the center of the lens group on the image sensor plane (i.e., the first position) is lower than the center of the image sensor.

Specifically, relative to the camera module shown in fig. 3, the camera module shown in fig. 9 increases the sensing capability above the center of the lens group; that is, light from regions that previously could not be imaged on the image sensor can now be received by the image sensor through the lens group.

In one embodiment, the distance between the first position and the center of the image sensor may be related to a first angle, where the first angle is the angle between the bisector of the VFOV of the camera module and the optical axis of the lens group. Optionally, the first angle may be greater than 0 degrees and less than or equal to the complementary angle of half of the VFOV of the camera module.

Here, the distance between the first position and the center of the image sensor is the absolute value of the ordinate, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module described below.

Specifically, the ordinate y0, in the first coordinate system, of the optical center or FOE corresponding to the camera module may conform to the following formula one:

|y0| = |fy · h(θ)|   (formula one)

where fy is the focal length at the center of the camera module, θ is the first angle, and h(θ) = tan(θ), or h(θ) is a unary N-th order function of θ, N being an integer greater than 0. Here |·| denotes the absolute value, and tan(·) denotes the tangent function.

Optionally, the FOE may be the convergence point of the optical flow on stationary objects when the camera module moves with the mobile platform.

In the above method, the first angle may be the angle to be adjusted; that is, the detection range of the camera module is adjusted, on the basis of meeting the VFOV, through the first angle.

In an alternative embodiment, in actual imaging, the coordinates of the optical center calibrated during camera intrinsic calibration of the camera module shown in fig. 7 (a) and fig. 8 are located in the upper half of the image sensor plane, and the coordinates of the FOE are also located in the upper half of the image sensor plane. In this case the ordinate y0 in the first coordinate system is negative, so based on formula one, y0 may conform to the following formula two:

y0 = -fy · h(θ)   (formula two)

In another alternative embodiment, in actual imaging, the coordinates of the optical center calibrated during camera intrinsic calibration of the camera module shown in fig. 7 (b) and fig. 9 are located in the lower half of the image sensor plane, and the coordinates of the FOE are also located in the lower half of the image sensor plane. In this case the ordinate y0 in the first coordinate system is positive, so based on formula one, y0 may conform to the following formula three:

y0 = fy · h(θ)   (formula three)

Optionally, since the imaging models of camera modules differ, the function h(θ) may also differ. For example, when the imaging model of the camera module is a pinhole imaging model, h(θ) = tan(θ). For another example, when the imaging model of the camera module is a fisheye imaging model, h(θ) may be a unary N-th order function of θ. Optionally, this may be a unary 9th-order function of θ, for example h(θ) = θ·(1 + k1·θ^2 + k2·θ^4 + k3·θ^6 + k4·θ^8), where k1, k2, k3, k4 are four coefficients in the fisheye imaging model. For example, the values of k1, k2, k3, k4 may respectively be: -1.2101823606265119, 2.348159905176264, -2.8413822488946474, 1.3818466241138192; or: -1.1529851704803267, 2.114443595798193, -2.458009210238794, 1.1606670303240054; or: -1.1741024894366126, 2.1870282871688733, -2.5272904743180695, 1.170976436497773. Of course, k1, k2, k3, k4 may also take other values, which are not listed here one by one.

The unary 9th-order function of θ is merely an example and does not limit h(θ).

It should be noted that the above merely illustrates possible forms of h(θ) by taking two imaging models as examples; h(θ) is not limited thereto and may also be expressed by various other functions, which are not listed here.
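As an illustration, the pinhole and fisheye forms of h(θ) above can be compared numerically. The following sketch uses the first example set of k1..k4 values from the text; the focal length fy = 1000 pixels and the first angle θ = 8 degrees are made-up illustrative values, not values from this application.

```python
import math

# Coefficients of the fisheye imaging model, taken from the first example
# set of k1..k4 values given in the text.
K = (-1.2101823606265119, 2.348159905176264,
     -2.8413822488946474, 1.3818466241138192)

def h_pinhole(theta):
    """h(theta) for the pinhole imaging model."""
    return math.tan(theta)

def h_fisheye(theta, k=K):
    """h(theta) for the fisheye imaging model: the unary 9th-order
    polynomial theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8)."""
    k1, k2, k3, k4 = k
    return theta * (1 + k1*theta**2 + k2*theta**4 + k3*theta**6 + k4*theta**8)

# |y0| = |fy * h(theta)| (formula one): offset of the optical center / FOE
# from the sensor center, in pixels, for an assumed focal length fy and
# first angle theta.
fy = 1000.0               # focal length in pixels (illustrative value)
theta = math.radians(8)   # first angle, e.g. 8 degrees as in the fig. 5 example
print(abs(fy * h_pinhole(theta)))   # pinhole offset
print(abs(fy * h_fisheye(theta)))   # fisheye offset (slightly smaller)
```

For small angles the two models nearly agree; the fisheye polynomial bends the curve down at larger angles, which is why the same sensor offset corresponds to slightly different first angles under the two models.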

In an alternative embodiment, the distance between the first position and the center of the image sensor may be related to the VFOV of the camera module. Likewise, the distance between the first position and the center of the image sensor is the absolute value of the ordinate, in the first coordinate system, of the optical center or FOE corresponding to the camera module mentioned below.

In one example, the ordinate y0, in the first coordinate system, of the optical center or FOE corresponding to the camera module may conform to the following formula four:

y0 = -fy · tan(φ/2)   (formula four)

where fy is the focal length at the center of the camera module, and φ is the VFOV of the camera module.

Optionally, when the imaging model of the camera module is a pinhole imaging model, the method of the formula four above may be adopted.

In another example, the ordinate y0, in the first coordinate system, of the optical center or FOE corresponding to the camera module may conform to the following formula five:

y0 = -fy · θd   (formula five)

where fy is the focal length at the center of the camera module, and θd is related to θ1; for example, θd may be a function g(θ1) of θ1, where θ1 = φ/2 and φ is the VFOV of the camera module.

Optionally, when the imaging model of the camera module is a fisheye imaging model, the method of formula five may be adopted. In this case, θd = θ1·(1 + k1·θ1^2 + k2·θ1^4 + k3·θ1^6 + k4·θ1^8), where k1, k2, k3, k4 are four coefficients in the fisheye imaging model. For example, the values of k1, k2, k3, k4 may respectively be: -1.2101823606265119, 2.348159905176264, -2.8413822488946474, 1.3818466241138192; or: -1.1529851704803267, 2.114443595798193, -2.458009210238794, 1.1606670303240054; or: -1.1741024894366126, 2.1870282871688733, -2.5272904743180695, 1.170976436497773. Of course, k1, k2, k3, k4 may also take other values, which are not listed here one by one.

In addition, the above calculation method of θd is only an example; there may be various other methods, which are not limited in this application.

In practical implementation, when the camera module is the camera module shown in fig. 7 (a) and fig. 8, the methods in the above formula four and formula five may be adopted.

In yet another example, the ordinate y0, in the first coordinate system, of the optical center or FOE corresponding to the camera module may conform to the following formula six:

y0 = fy · tan(φ/2 - θ0)   (formula six)

where fy is the focal length at the center of the camera module, θ0 is a preset number of degrees, and φ is the VFOV of the camera module.

Optionally, θ0 may be 0.21 radians (rad) or so, or θ0 may be 12 degrees or so; of course, θ0 may also take other values, which is not limited in this application.

Optionally, when the imaging model of the camera module is a pinhole imaging model, the method of the formula six may be adopted.

In yet another example, the ordinate y0, in the first coordinate system, of the optical center or FOE corresponding to the camera module may conform to the following formula seven:

y0 = fy · θs   (formula seven)

where fy is the focal length at the center of the camera module, and θs is related to θ2; for example, θs may be a function g(θ2) of θ2, where θ2 = φ/2 - θ0 and φ is the VFOV of the camera module.

Optionally, when the imaging model of the camera module is a fisheye imaging model, the method of formula seven may be adopted. In this case, θs = θ2·(1 + k1·θ2^2 + k2·θ2^4 + k3·θ2^6 + k4·θ2^8), where k1, k2, k3, k4 are four coefficients in the fisheye imaging model. For example, the values of k1, k2, k3, k4 may respectively be: -1.2101823606265119, 2.348159905176264, -2.8413822488946474, 1.3818466241138192; or: -1.1529851704803267, 2.114443595798193, -2.458009210238794, 1.1606670303240054; or: -1.1741024894366126, 2.1870282871688733, -2.5272904743180695, 1.170976436497773. Of course, k1, k2, k3, k4 may also take other values, which are not listed here one by one.

In addition, the above calculation method of θs is only an example; there may be various other methods, which are not limited in this application.

In practical implementation, when the camera module is the camera module shown in fig. 7 (b) and fig. 9, the methods in the above formulas six and seven may be adopted.
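The two mounting cases can be tied together in a short numerical sketch. The original formula images are not reproduced in this text, so the closed-form expressions below are assumptions for the pinhole model: y0 = -fy·tan(φ/2) for the surround-view case of fig. 7 (a) and y0 = fy·tan(φ/2 - θ0) for the forward-view case of fig. 7 (b), the latter being consistent with the 8-degree tilt of the fig. 5 example. The focal length fy is a made-up illustrative value.

```python
import math

def y0_surround_view(fy, vfov_deg):
    """Ordinate of the optical center / FOE for the surround-view case
    (fig. 7 (a)): the whole VFOV is shifted below the optical axis.
    Assumed pinhole relation: y0 = -fy * tan(VFOV / 2)."""
    return -fy * math.tan(math.radians(vfov_deg) / 2)

def y0_forward_view(fy, vfov_deg, theta0_deg):
    """Ordinate for the forward-view case (fig. 7 (b)): theta0 degrees of
    the VFOV are kept below the horizontal.
    Assumed pinhole relation: y0 = fy * tan(VFOV / 2 - theta0)."""
    return fy * math.tan(math.radians(vfov_deg / 2 - theta0_deg))

fy = 1000.0  # focal length in pixels (illustrative value)
# Surround-view front camera, VFOV = 120 degrees (fig. 10 scenario):
print(y0_surround_view(fy, 120))   # negative: optical center in the upper half
# Forward-view camera, VFOV = 40 degrees, 12 degrees kept below the horizontal
# (fig. 11 scenario); equivalent to the 8-degree tilt of fig. 5:
print(y0_forward_view(fy, 40, 12))  # fy * tan(8 degrees)
```

The second result equals the offset a conventionally centered module would need an 8-degree upward tilt to achieve, which is exactly the tilt that the lens-shift design removes.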

Illustratively, the mobile platform may be a vehicle or the like.

For example, when the camera module is mounted on a vehicle, the camera module shown in fig. 7 (a) and fig. 8 may be applied to a vehicle-mounted surround-view sensing system, and the camera module shown in fig. 7 (b) and fig. 9 may be applied to a vehicle-mounted forward-looking sensing system. In one implementation, with the installation method provided by the embodiment of the application, the camera module does not need to be mounted obliquely on the vehicle, and the detection requirement can still be met.

In an optional implementation, when the mobile platform is a vehicle and the camera module mounted on the vehicle is applied to the vehicle-mounted surround-view sensing system, the camera module does not need to be tilted downward toward the ground during mounting, while it is still ensured that the blind areas around the vehicle body are small enough. For example, when the camera module is applied as the front-view camera of the vehicle-mounted surround-view sensing system, assuming that the VFOV of the camera module is 120 degrees, the installation diagram of the camera module may be as shown in fig. 10. As can be seen from fig. 10, the optical axis of the camera module is parallel to the ground where the vehicle is located, so the moving direction is guaranteed to be perpendicular to the plane of the image sensor; therefore the accuracy and robustness of camera motion estimation can be improved, and the target positioning error reduced.

In yet another optional implementation, when the mobile platform is a vehicle and the camera module mounted on the vehicle is applied to the vehicle-mounted forward-looking sensing system, the camera module does not need to be mounted tilted upward, while the sensing range for objects such as traffic signs and traffic lights can still be increased. For example, when the camera module is applied to the vehicle-mounted forward-looking sensing system, assuming that the VFOV of the camera module is 40 degrees, the installation diagram of the camera module may be as shown in fig. 11. As can be seen from fig. 11, the optical axis of the camera module is parallel to the ground where the vehicle is located, so the moving direction is guaranteed to be perpendicular to the plane of the image sensor; therefore the accuracy and robustness of camera motion estimation can be improved, and the target positioning error reduced.

With the above installation method, the camera module can be parallel to the ground where the mobile platform is located, so the motion direction is guaranteed to be perpendicular to the plane of the image sensor; the accuracy and robustness of camera motion estimation can thus be improved and the target positioning error reduced, and the required detection range can be met without mounting the camera module obliquely.

Based on the above description, an embodiment of the present application further provides a mobile platform, including a camera module, where the camera module includes a lens group and an image sensor, and the lens group includes at least one lens; wherein: the camera module is parallel to the ground where the mobile platform is located; the projection of the center of the lens group on the image sensor plane is a first position; the distance between the first position and the center of the image sensor is larger than a first threshold value, and the first threshold value is larger than 0. Specifically, for a detailed description of the camera module, reference may be made to the related description in the foregoing embodiments, and details are not repeated here. In an alternative embodiment, the mobile platform may be, but is not limited to, a vehicle or the like.

Based on the above embodiments, after the camera module is installed on the mobile platform with the installation method provided by the embodiment of the application, estimating the camera motion from matched feature points in two frames of images captured by the camera module is an important step of the spatial three-dimensional reconstruction algorithm used for target positioning. Taking fig. 12 as an example, suppose we seek the motion between two frames of images I1 and I2, i.e., the rotation R and translation t of the camera from the first frame I1 to the second frame I2. The centers of the lens group corresponding to the two frames are O1 and O2, respectively. Consider a feature point p1 in I1 that corresponds to a feature point p2 in I2. The two feature points match, i.e., they are the projections of the same spatial three-dimensional point P onto the two images.

Analyzing this geometric relationship algebraically, let the spatial position of P in the coordinate system of the first frame be: P = [X, Y, Z]^T.

Assuming that the imaging model of the camera module is the pinhole imaging model, then according to the pinhole imaging model, the pixel points p1, p2 of the point P in the two frames of images I1, I2 may conform to the following formula eight:

s1·p1 = K·P,  s2·p2 = K·(R·P + t)   (formula eight)

where K is the camera intrinsic matrix, and s1, s2 are the depth scale factors of P in the two camera frames.
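Formula eight can be checked with a small numeric example; the intrinsic matrix K and the point P below are made-up illustrative values.

```python
import numpy as np

# Illustrative intrinsic matrix K (fx, fy, cx, cy are assumed values).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# A spatial point P = [X, Y, Z]^T in the first camera frame.
P = np.array([1.0, 0.5, 10.0])

# s1 * p1 = K * P (formula eight): the homogeneous pixel coordinates are
# K @ P, and dividing by the third component (the depth s1 = Z) gives p1.
hp = K @ P
p1 = hp / hp[2]
print(p1)  # p1 = [740, 410, 1]
```

The third component of K·P is exactly the depth Z, which is why the scale factor s1 in formula eight drops out after normalization.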

In a homogeneous coordinate system, a vector equals itself multiplied by any non-zero constant. This is typically used to express a projective relationship. For example, p1 and s1·p1 are in a projective relationship: they are equal in the sense of homogeneous coordinates (equal up to scale), which can be written as: s1·p1 ~ p1.

Then, the two projection relations in formula eight can be written as: p1 ~ K·P,  p2 ~ K·(R·P + t).

Let x1 = K^(-1)·p1, x2 = K^(-1)·p2, and substitute into the above: x2 ~ R·x1 + t.

Multiplying both sides on the left by t^ (the skew-symmetric matrix of t, so that t^·x = t × x, and the term t^·t vanishes), and then multiplying on the left by x2^T, we obtain: x2^T·t^·x2 = x2^T·t^·R·x1, where t = [t1, t2, t3]^T.

Since the vector t^·x2 on the left side is perpendicular to the vector x2, their inner product is 0, and we thus have the following formula nine:

x2^T · t^ · R · x1 = 0   (formula nine)

Formula nine is called the epipolar constraint. Define the essential matrix E = t^·R, which is a 3 × 3 matrix.

Assume a pair of matching points with normalized coordinates x1 = [u1, v1, 1]^T, x2 = [u2, v2, 1]^T. According to the epipolar constraint, we have: x2^T · E · x1 = 0.

the matrix E is expanded and written into vector form as follows: e ═ e1,e2,e3,e4,e5,e6,e7,e8,e9]T

The epipolar constraint can then be written in the following linear form: [u2·u1, u2·v1, u2, v2·u1, v2·v1, v2, u1, v1, 1] · e = 0.

In general, E can be estimated using 8 of the n pairs of feature points. Putting all the points into one equation yields the linear system shown in formula ten:

A·e = 0   (formula ten)

where A is the matrix whose rows are the vectors [u2·u1, u2·v1, u2, v2·u1, v2·v1, v2, u1, v1, 1] formed from each pair of matched feature points; for n pairs, A is an n × 9 matrix.

it is easy to prove that the solution of the linear equation set shown in the above formula ten is the matrix ATAnd A is the eigenvector corresponding to the minimum eigenvalue.

After E has been solved for, the rotation R and translation t of the camera can be recovered by singular value decomposition (SVD). Let the SVD of E be: E = U·Σ·V^T, where U, V are orthogonal matrices and Σ is the singular value matrix. For any E, there are two possible pairs (R, t) corresponding to it:

t1^ = U·Rz(π/2)·Σ·U^T,  R1 = U·Rz^T(π/2)·V^T;
t2^ = U·Rz(-π/2)·Σ·U^T,  R2 = U·Rz^T(-π/2)·V^T;

where Rz(π/2) represents the rotation matrix obtained by rotating 90° about the Z axis. Since -E and E are equivalent up to scale, negating t also gives a valid solution; thus decomposing E into R, t yields a total of 4 possible solutions, as can be seen in fig. 13.

As can be seen from fig. 13, only in the first solution (shown in fig. 13 (a)) does P have a positive depth in both cameras. Therefore, by substituting any one point into the 4 solutions and checking the depth of that point under both cameras, it can be determined which solution is correct.
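The SVD decomposition and the positive-depth check described above can be sketched as follows. The candidate enumeration follows the standard four-solution construction; the ground-truth R, t and the test point are made-up values, and the helper names are hypothetical.

```python
import numpy as np

def hat(v):
    """Skew-symmetric matrix v^ such that v^ @ x == np.cross(v, x)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def triangulate_depths(R, t, x1, x2):
    """Solve s2*x2 = s1*R*x1 + t for the depths (s1, s2) of the point."""
    A = np.stack([R @ x1, -x2], axis=1)          # 3 x 2 system
    s, _, _, _ = np.linalg.lstsq(A, -t, rcond=None)
    return s[0], s[1]

def decompose_E(E, x1, x2):
    """Recover (R, t) from an essential matrix via SVD, choosing among
    the 4 candidate solutions the one giving the matched pair (x1, x2)
    positive depth in both cameras (the check illustrated in fig. 13)."""
    U, S, Vt = np.linalg.svd(E)
    # Ensure proper rotations (det = +1).
    if np.linalg.det(U) < 0: U = -U
    if np.linalg.det(Vt) < 0: Vt = -Vt
    Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])  # Rz(pi/2)
    candidates = []
    for Rz_k in (Rz, Rz.T):                       # Rz(pi/2) and Rz(-pi/2)
        R = U @ Rz_k.T @ Vt
        t = U[:, 2]                               # translation up to sign/scale
        candidates += [(R, t), (R, -t)]
    for R, t in candidates:
        s1, s2 = triangulate_depths(R, t, x1, x2)
        if s1 > 0 and s2 > 0:
            return R, t
    raise ValueError("no solution with positive depths")

# Synthetic check with made-up ground truth: identity rotation, unit t.
R_true = np.eye(3)
t_true = np.array([0.6, 0.0, 0.8])
E = hat(t_true) @ R_true
P = np.array([0.3, -0.2, 5.0])                    # a point in frame 1
x1 = P / P[2]
P2 = R_true @ P + t_true
x2 = P2 / P2[2]
R, t = decompose_E(E, x1, x2)
print(np.allclose(R, R_true, atol=1e-6), np.allclose(t, t_true, atol=1e-6))
```

Because t is recovered only up to scale, the sketch returns a unit translation; in practice the scale is fixed by other sensors or by known scene geometry.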

Let B = A^T·A, diagonalized by an orthogonal matrix H, i.e., H^(-1)·B·H = diag{λ1, λ2, ..., λn}, where λi, i = 1, ..., n, are the eigenvalues of the matrix B.

Without loss of generality, let λ1 be a simple eigenvalue with λ1 < λ2 ≤ λ3 ≤ ... ≤ λn, and let the matrix formed by the corresponding eigenvectors be H = [h1, h2, ..., hn]. Then e is the eigenvector corresponding to the eigenvalue λ1, i.e., e lies in the space spanned by h1.

Let B(∈) = B + δB denote B with noise superimposed on it, where ∈ is the maximum absolute value of the entries of δB; then δB can be written as δB = ∈·(bij), so that |bij| ≤ 1. Denote the minimum eigenvalue of B(∈) by λ1(∈) and the corresponding eigenvector by e(∈) = e + δe, where δe lies in the space spanned by {h2, h3, ..., hn}.

When ∈ is sufficiently small, the error λ1(∈) - λ1 can be expanded as a power series λ1(∈) - λ1 = a1·∈ + a2·∈^2 + a3·∈^3 + ..., where the linear part may be represented as: a1·∈ = e^T·δB·e.

let H2=[h2,h3,...,hn]. It is easy to prove that there is a vector g of dimension (n-1)1,g2,g3,., make deltae=∈H2g1+∈2H2g2+∈3H2g3+ …, where the linear part may be represented as: e.g. H2g1=HΔHTΔBe. Where Δ may conform to the following equation eleven:

Δ=diag{0,(λ12)-1,...,(λ1n)-1formula eleven.

Discarding the second and higher order terms in the error series, the error of the eigenvalue λ1 is approximately e^T·δB·e, and the error of the eigenvector e is approximately H·Δ·H^T·δB·e.

When the matrix A is non-degenerate, i.e., the rank of A is 8, λ1 = 0 and the solution is stable. When the rank of A is less than 8, however, the solution for the matrix E is sensitive to noise. The noise mainly comes from feature point detection errors, feature point matching errors, quantization errors, camera intrinsic calibration errors, and so on. Specifically, when the rank of A is less than 8, λ1 ≈ λ2. As can be seen from formula eleven above, the second entry (λ1-λ2)^(-1) of Δ then tends to infinity, which makes the estimation error of the matrix E blow up as well.
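This sensitivity can be demonstrated numerically. In the sketch below the spectra and the perturbation are made-up values; the perturbation is constructed to couple h1 and h2 so the effect is deterministic. When λ1 and λ2 are well separated, the minimal eigenvector of B barely moves under a small perturbation; when λ1 ≈ λ2, the same perturbation rotates it by orders of magnitude more.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 9
# Random orthogonal eigenbasis H (columns h1..hn) for B = H diag(lam) H^T.
H = np.linalg.qr(rng.standard_normal((n, n)))[0]
# Symmetric perturbation built to couple h1 and h2 (entries are bounded).
dB = np.outer(H[:, 0], H[:, 1]) + np.outer(H[:, 1], H[:, 0])
eps = 1e-6

def min_eigvec_rotation(lam):
    """Angle between the minimal eigenvectors of B and B + eps*dB."""
    B = H @ np.diag(lam) @ H.T
    e0 = np.linalg.eigh(B)[1][:, 0]       # eigh sorts eigenvalues ascending
    e1 = np.linalg.eigh(B + eps * dB)[1][:, 0]
    return np.arccos(min(1.0, abs(e0 @ e1)))

# Well-separated spectrum: lambda1 = 0 << lambda2 (rank-8 case).
well_sep = min_eigvec_rotation(np.array([0.0, 5, 6, 7, 8, 9, 10, 11, 12]))
# Near-degenerate spectrum: lambda1 ~ lambda2 (rank < 8 case).
near_deg = min_eigvec_rotation(np.array([0.0, 1e-5, 6, 7, 8, 9, 10, 11, 12]))
print(well_sep, near_deg)  # near-degenerate rotation is orders of magnitude larger
```

The near-degenerate case rotates the estimated e, and hence the estimated E, by an angle governed by ∈/(λ2-λ1), exactly the (λ1-λ2)^(-1) term of formula eleven.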

The above noise effect manifests as follows in the actual operation of the camera module:

as can be seen from the above equation nine and the above calculation process of R, t: t (x)2×Rx1) 0. I.e. t and x2×Rx1Are perpendicular to each other.

As shown in fig. 14 (a), when the translation vector t is perpendicular to the image sensor plane XY, x2 × R·x1 covers a large area (see the shaded area); and, as shown in fig. 14 (b), when the translation vector t is parallel to the image sensor plane XY, x2 × R·x1 covers a small area (see the shaded area). When the matrix A is affected by noise, the shaded areas deviate from their original positions, introducing errors into the estimation of the translation vector t. Obviously, the scenario shown in fig. 14 (a) then has higher robustness than the scenario shown in fig. 14 (b).

Based on the above analysis, it can be seen that: in order to ensure the accuracy and robustness of camera motion estimation, the camera translation direction (motion direction) should be perpendicular to the image sensor plane.

Based on the above analysis, it is clear that with the installation method provided in the embodiment of the present application, when the camera module is installed on the mobile platform, the camera module can be parallel to the ground where the mobile platform is located; specifically, both the optical axis of the lens group and the normal of the image sensor plane can be parallel to the ground, as shown in fig. 10 and fig. 11. The camera motion direction is then perpendicular to the image sensor plane, which improves the accuracy and robustness of the camera motion estimation. In contrast, when the camera module is installed obliquely on the mobile platform as in the prior art shown in fig. 4 and fig. 5, the camera motion direction cannot be perpendicular to the image sensor plane, and the accuracy and robustness of the camera motion estimation suffer. The installation method provided by the embodiment of the present application therefore markedly improves the accuracy and robustness of camera motion estimation.

As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.

It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
