Image processing device, driving support system, and recording medium
Note: This technology, "Image processing device, driving support system, and recording medium", was designed and created by 大木豊 on 2019-02-12. Abstract: An embodiment provides an image processing device, a driving support system, and a recording medium storing an image processing program that realize high-accuracy correction of the yaw angle. The image processing device according to an embodiment includes a reference line detection unit, a parallax calculation unit, a reference line correction unit, and an extrinsic parameter calculation unit. The reference line detection unit detects, for a stereo image including a 1st image and a 2nd image, a 1st reference line and a 2nd reference line of the 1st image. The parallax calculation unit calculates the parallax between the 1st image and the 2nd image. The reference line correction unit corrects the position of the 1st reference line in the 2nd image. The extrinsic parameter calculation unit calculates an extrinsic parameter used for image correction based on the detected distance between the 1st reference line and the 2nd reference line in the 1st image, and the parallax between the 1st reference line of the 1st image and the corrected 1st reference line of the 2nd image.
1. An image processing apparatus comprising:
a reference line detection unit for detecting a 1 st reference line and a 2 nd reference line of the 1 st image with respect to a stereoscopic image including the 1 st image and the 2 nd image;
a parallax calculation unit that calculates a parallax between the 1 st image and the 2 nd image;
a reference line correcting unit configured to correct a position of a 1 st reference line in the 2 nd image; and
an extrinsic parameter calculation unit that calculates an extrinsic parameter used for image correction based on the detected distance between the 1st reference line and the 2nd reference line in the 1st image, and the parallax between the 1st reference line of the 1st image and the corrected 1st reference line of the 2nd image.
2. The image processing apparatus according to claim 1, wherein the reference line correcting section corrects the position of the 1 st reference line in the 2 nd image by a homography.
3. The image processing apparatus according to claim 1,
the reference line correcting unit corrects the position of the 1 st reference line in the 2 nd image without generating the corrected image of the 2 nd image.
4. The image processing apparatus according to claim 1,
the extrinsic parameter calculation unit calculates the extrinsic parameter by nonlinear optimization.
5. The image processing apparatus according to claim 1,
the reference line detecting unit detects the 1st reference line and the 2nd reference line of the 1st image without detecting the 1st reference line and the 2nd reference line of the 2nd image,
the image processing apparatus further includes a reference line estimating unit that estimates a position of the 1 st reference line of the 2 nd image based on the 1 st reference line of the 1 st image detected by the reference line detecting unit and the parallax calculated by the parallax calculating unit,
the reference line correcting unit corrects the position of the 1 st reference line in the 2 nd image based on the estimated 1 st reference line in the 2 nd image.
6. The image processing apparatus according to claim 5,
the reference line estimating unit estimates, for a plurality of points belonging to the 1st reference line of the 1st image, the positions of the corresponding points in the 2nd image based on the parallax calculated by the parallax calculating unit at each point, and estimates the 1st reference line of the 2nd image from the plurality of estimated points.
7. The image processing apparatus according to claim 1,
the reference line detecting unit detects the 1 st reference line and the 2 nd reference line of the 1 st image, and detects the 1 st reference line of the 2 nd image;
the reference line correcting unit corrects the position of the detected 1 st reference line in the 2 nd image.
8. The image processing apparatus according to any one of claims 1 to 7,
the image correction unit may correct the 1 st image and the 2 nd image based on the external parameter calculated by the external parameter calculation unit.
9. A driving support system that supports driving, comprising:
a stereo camera including a 1 st camera for acquiring the 1 st image and a 2 nd camera for acquiring the 2 nd image;
the image processing apparatus according to claim 8, wherein the extrinsic parameters are calculated based on the 1 st image captured by the 1 st camera and the 2 nd image captured by the 2 nd camera, and the 1 st image and the 2 nd image are corrected; and
a driving support device that outputs information for supporting driving based on the output of the image processing apparatus.
10. A driving support system that supports driving, comprising:
a stereo camera which is provided with a 1 st camera for acquiring a 1 st image and a 2 nd camera for acquiring a 2 nd image and acquires a stereo image;
a reference line detecting unit configured to detect at least a 1 st reference line and a 2 nd reference line of the 1 st image in the stereoscopic image;
a parallax calculation unit that calculates a parallax between the 1 st image and the 2 nd image;
a reference line correcting unit configured to correct a position of a 1 st reference line in the 2 nd image;
an extrinsic parameter calculation unit configured to calculate an extrinsic parameter used for image correction based on a detected distance between the 1 st reference line and the 2 nd reference line in the 1 st image and a parallax between the 1 st reference line of the 1 st image and the 1 st reference line after correction in the 2 nd image;
an image correction unit that corrects an image based on the external parameter; and
a driving support unit that outputs information for supporting driving based on the corrected image.
11. A recording medium storing an image processing program that causes a computer to function as:
a reference line detection unit configured to detect at least a 1 st reference line and a 2 nd reference line of a 1 st image in a stereoscopic image including the 1 st image and a 2 nd image;
a parallax calculation unit that calculates a parallax between the 1 st image and the 2 nd image;
a reference line correcting unit for correcting the position of the 1 st reference line in the 2 nd image; and
an extrinsic parameter calculation unit configured to calculate an extrinsic parameter used for image correction based on the detected distance between the 1st reference line and the 2nd reference line in the 1st image, and the parallax between the 1st reference line of the 1st image and the corrected 1st reference line of the 2nd image.
Technical Field
Embodiments of the present invention relate to an image processing apparatus, a driving support system, and a recording medium storing an image processing program.
Background
A technique is known for measuring the three-dimensional position of an object from corresponding points of a stereo image acquired by a stereo camera, based on the positional relationship between the cameras and the positions of the corresponding points on the respective images. This technique is used in various fields such as vehicle-mounted driving support systems, portable devices, and game machines. A stereo camera may develop a deviation of the optical axis or the like due to temperature, vibration, aging, and so on. One means of correcting such a deviation is image correction (rectification, also called parallelization).
There is known a technique that corrects by detecting the deviation in the y direction (the direction intersecting the x direction in which the imaging units of the cameras are arranged) of feature points imaged by the respective cameras. Variations of the roll angle and the pitch angle appear as deviations of the y component, so they are easy to correct. However, a variation of the yaw angle has little influence on the y component, so it is difficult to detect from the y-component deviation, and the error tends to become large due to other factors such as noise.
Disclosure of Invention
Therefore, an embodiment of the present invention provides an image processing device, a driving support system, and a recording medium storing an image processing program that realize high-accuracy correction of the yaw angle.
An image processing apparatus according to one aspect includes a reference line detection unit, a parallax calculation unit, a reference line correction unit, and an extrinsic parameter calculation unit. The reference line detection unit detects, for a stereo image including a 1st image and a 2nd image, a 1st reference line and a 2nd reference line of the 1st image. The parallax calculation unit calculates the parallax between the 1st image and the 2nd image. The reference line correction unit corrects the position of the 1st reference line in the 2nd image. The extrinsic parameter calculation unit calculates an extrinsic parameter used for image correction based on the detected distance between the 1st reference line and the 2nd reference line in the 1st image, and the parallax between the 1st reference line of the 1st image and the corrected 1st reference line of the 2nd image.
Drawings
Fig. 1 is a block diagram showing functions of a driving support system according to an embodiment.
Fig. 2 is a flowchart showing a process performed by the image processing apparatus according to the embodiment.
Fig. 3 is a flowchart showing white line detection processing according to an embodiment.
Fig. 4 is a diagram showing an example of white line edges, parallax, and white line width according to an embodiment.
Fig. 5 is a block diagram showing functions of the driving support system according to the embodiment.
Fig. 6 is a flowchart showing a process performed by the image processing apparatus according to the embodiment.
Fig. 7 is a flowchart showing white line estimation processing according to an embodiment.
Fig. 8 is a diagram showing an example of a hardware configuration of an image processing apparatus according to an embodiment.
Detailed Description
Hereinafter, embodiments of the present invention are described with reference to the drawings. In the referenced figures, identical or similar reference numerals are given to identical portions or portions having identical functions, and redundant description may be omitted. For convenience of explanation, the dimensional ratios in the drawings may differ from the actual ratios, and part of a structure may be omitted from a drawing. Note that the data flows shown in the block diagrams are examples; the embodiments below do not assert that no other data flow exists, and in some cases an illustrated data flow is not an essential part of the configuration.
The parameters required for camera calibration are broadly divided into two types: intrinsic parameters and extrinsic parameters. Intrinsic parameters relate to characteristics inherent to the camera, such as the lens, while extrinsic parameters indicate the installation state (position, posture, and the like) of the camera. The embodiments described below mainly relate to an image processing apparatus that estimates the yaw angle as an extrinsic parameter.
(embodiment 1)
Fig. 1 is a block diagram showing functions of a driving support system 1 according to embodiment 1. The driving support system 1 includes a stereo camera, an image processing device, and a driving support device.
The
In the description of the embodiment, the calculation and the like are performed with reference to the 1 st image acquired by the 1
The
The external
The extrinsic
The white
Detecting the white line means detecting, in the image, the line segments, rays, or straight lines that form the edges on both sides of the white line. That is, both the edge from the road to the white line and the edge from the white line to the road are detected. The
The
The white
The extrinsic
The
For example, a distance image, an object recognition image, or the like is output using the corrected image. These need not be constructed as images; they may be data listing distance information per pixel, or data indicating regions in which objects were recognized. That is, the result output by the image correction unit is not necessarily image data, and may be data obtained from the corrected image.
The driving
The
Fig. 2 is a flowchart showing a flow of processing according to the present embodiment. The processing of each configuration will be described in more detail with reference to this flowchart. The 1
First, the white line detection unit detects the white lines of the 1st image (S100).
Fig. 3 is a flowchart showing an example of the white line detection process. First, an edge image is created using a Canny filter, a Laplacian filter, or the like (S200). Next, a Hough transform or the like is applied to the edge image to extract line segments (S202). Next, portions valid as white lines are extracted (S204). A portion valid as a white line is determined based on, for example, the following conditions: 1. it has a length at or above a threshold; 2. the lower end of the extended line segment is contained in a certain region; 3. the slope of the line segment is within a threshold. Next, white lines are detected using information such as the vanishing point (S206). For example, for white lines that are close to parallel, the extensions of the 4 line segments, namely the 2 edges constituting the white line and the 2 edges of the paired white line on the opposite side of the image, intersect at the vanishing point. Thus, when such a vanishing point exists, or when each of the 4 straight lines passes through a region within a predetermined range, the line segment group is detected as white line edges.
The detection of the white line is not limited to this. For example, feature points forming the white line edges may be extracted based on brightness and detected by connecting the portions lying on a straight line, or the image may be input to a machine learning model trained in advance to extract white lines.
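As an illustration of detecting the two edges of a white line and obtaining their straight-line expressions, the following is a minimal NumPy sketch. For brevity it replaces the Canny filter and Hough transform with a simple per-row brightness threshold on a synthetic image; the image, threshold, and function names are illustrative assumptions, not the embodiment's actual implementation.

```python
import numpy as np

def detect_stripe_edges(image, thresh=128):
    """Toy edge detector: for each row, find the left and right edges
    (road-to-white and white-to-road transitions) of a bright stripe."""
    edges = []  # tuples (y, x_left, x_right)
    for y, row in enumerate(image):
        bright = np.flatnonzero(row > thresh)
        if bright.size:
            edges.append((y, bright[0], bright[-1] + 1))
    return edges

def fit_edge_lines(edges):
    """Fit x = a*y + b to each of the two edges by least squares."""
    ys = np.array([e[0] for e in edges], dtype=float)
    xl = np.array([e[1] for e in edges], dtype=float)
    xr = np.array([e[2] for e in edges], dtype=float)
    a_l, b_l = np.polyfit(ys, xl, 1)
    a_r, b_r = np.polyfit(ys, xr, 1)
    return (a_l, b_l), (a_r, b_r)

# Synthetic 40x100 image: a stripe whose left edge follows x = 2*y + 10
# and whose width is a constant 5 pixels.
img = np.zeros((40, 100), dtype=np.uint8)
for y in range(40):
    x0 = 2 * y + 10
    img[y, x0:x0 + 5] = 255

left, right = fit_edge_lines(detect_stripe_edges(img))
```

With this synthetic stripe, the fit recovers (a, b) close to (2, 10) for the left edge and (2, 15) for the right edge, i.e. both edges of the white line as straight-line expressions.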
Returning to fig. 2, after the detection of the white line, the parallax calculation unit calculates the parallax between the 1st image and the 2nd image (S102).
The detected white lines may or may not be used in the parallax calculation. For example, when the expressions of the mutually corresponding white lines in the 1st image and the 2nd image have been obtained, the parallax may be calculated using those expressions. On the other hand, when the white line information is not used, steps S100 and S102 may be exchanged. In fig. 2, S100 and S102 are executed sequentially, but this is not limiting; they may also be processed in parallel.
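The parallax calculation of S102 can use any stereo matching method. The following toy single-row SAD block-matching sketch illustrates the idea; the data, window size, and search range are illustrative assumptions, not the embodiment's actual implementation.

```python
import numpy as np

def row_disparity(left_row, right_row, x, window=3, max_d=20):
    """Estimate the disparity at column x of the left row by SAD (sum of
    absolute differences) block matching against the right row."""
    patch = left_row[x - window:x + window + 1].astype(float)
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_d + 1):
        xr = x - d  # candidate column in the right image
        if xr - window < 0:
            break
        cand = right_row[xr - window:xr + window + 1].astype(float)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic pair: the right row is the left row shifted left by 6 pixels,
# i.e. a true disparity of 6.
rng = np.random.default_rng(0)
left_row = rng.integers(0, 255, 120)
right_row = np.roll(left_row, -6)

d = row_disparity(left_row, right_row, x=60)
```

In this synthetic example the matcher recovers the injected shift of 6 pixels.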
Subsequently, the white line correction unit corrects the position of the white line of the 2nd image (S104).
The parameters relating to the roll angle and the pitch angle may be calculated by the white
The homography conversion of the white line in the 2nd image, for example, arbitrarily extracts 2 points from the points lying on the left white line edge (the 1st reference line), and obtains the coordinates of those points after applying the homography matrix. From the coordinates of the obtained points, the expression of the straight line of the left white line edge is calculated. In this way, the position of the white line (the 1st reference line) in the 2nd image is corrected not by converting the entire image but by converting points on the straight line and computing the expression of at least the converted 1st reference line. That is, in this step, outputting an image is not essential; it suffices to output an expression representing at least 1 of the 2 straight lines, or the coordinates of 2 or more points on at least 1 of them.
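The correction above can be sketched as follows: two points on the line x = a*y + b are mapped through a 3x3 homography H and the corrected line expression is refit, without generating a corrected image. The matrix H used here (a pure vertical shift standing in for a roll/pitch correction) and the sample points are illustrative assumptions.

```python
import numpy as np

def apply_homography(H, pt):
    """Map a point (x, y) through the 3x3 homography H in homogeneous coordinates."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[0] / v[2], v[1] / v[2]

def correct_line(H, a, b):
    """Correct a reference line x = a*y + b by mapping two of its points
    through H and refitting the line; no corrected image is generated."""
    p1 = apply_homography(H, (a * 0.0 + b, 0.0))      # point at y = 0
    p2 = apply_homography(H, (a * 100.0 + b, 100.0))  # point at y = 100
    a2 = (p2[0] - p1[0]) / (p2[1] - p1[1])
    b2 = p1[0] - a2 * p1[1]
    return a2, b2

# Hypothetical homography: a pure vertical shift of 3 pixels.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
a2, b2 = correct_line(H, a=2.0, b=10.0)
```

For this shift, the line x = 2*y + 10 becomes x = 2*y + 4, which is consistent with every point moving down by 3 pixels while the slope is preserved.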
Next, the external parameter calculation unit calculates the external parameter (S106).
Fig. 4 is a diagram for explaining an example of calculation of external parameters according to the present embodiment. In fig. 4, the 1 st image is an image obtained from the 1
White lines are generally parallel to each other in pairs. Therefore, the extrinsic parameter is corrected by performing the following optimization. The parallelism need not be strict.
As shown in the figure, in the 1st image and the 2nd image, the y direction of the white lines has been corrected by the homography so that points at the same distance from the imaging surface have the same y component. For example, the distance between a point on the 1st reference line L1 (the left white line edge) and the point on the 2nd reference line L2 (the right white line edge) having the same y component in the 1st image is defined as the white line width D1. That is, the distance between points of the 1st reference line and the 2nd reference line having the same y component is defined as the white line width. If the x components of these points are X1 and X2, then D1 = X2 - X1. Further, if the x component of the corresponding point in the 2nd image, that is, the point on the 1st reference line R1, is X3, the parallax at this point on the edge is X1 - X3. Similarly, using other points on the left white line edge L1, D2 = X5 - X4 and the parallax is X4 - X6. The white line width and the parallax are calculated in this way for an arbitrary number of points on the left white line edge L1. The
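The width and parallax computation above can be sketched as follows, with hypothetical line coefficients for L1, L2, and R1 (each line written as x = a*y + b):

```python
import numpy as np

def x_on(line, y):
    """x coordinate of the line x = a*y + b at height y."""
    a, b = line
    return a * y + b

def width_and_parallax(L1, L2, R1, ys):
    """For each y: white line width D = x(L2) - x(L1) in the 1st image, and
    parallax d = x(L1) - x(R1) between the 1st and corrected 2nd image."""
    ys = np.asarray(ys, dtype=float)
    D = x_on(L2, ys) - x_on(L1, ys)
    d = x_on(L1, ys) - x_on(R1, ys)
    return D, d

# Hypothetical coefficients: L1 left edge (1st image), L2 right edge
# (1st image), R1 left edge (corrected 2nd image).
L1 = (2.0, 10.0)   # x = 2*y + 10
L2 = (1.0, 40.0)   # x = 1*y + 40
R1 = (2.0, 4.0)    # x = 2*y + 4
D, d = width_and_parallax(L1, L2, R1, ys=[0, 10, 20])
```

Here the width shrinks toward the vanishing point while the parallax stays constant at 6 pixels; a constant parallax where the width goes to 0 is exactly the signature of a yaw offset that the following optimization removes.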
After the white line widths and parallaxes are calculated, the following expression is optimized; this can be performed by a nonlinear optimization method over a parameter representing the yaw angle.
[Formula 1]
E = Σ { (white line width 1) × (parallax 2) - (white line width 2) × (parallax 1) }²   (summed over all pairs of points)
More specifically, the yaw angle is optimized by adjusting the parameter so that the value of Formula 1 approaches 0. Here, white line width 1 and parallax 1 denote the white line width and the parallax at a certain y component Y1, and white line width 2 and parallax 2 denote those at a different y component Y2. For example, when 10 points are selected from L1, the braced term of the above formula is squared for every combination of 2 points extracted from these 10 points, and the sum of these squares is calculated. The yaw angle parameter is then optimized so that this value becomes small. This amounts to optimizing so that the parallax becomes 0 at the vanishing point (the point where the white line width is 0).
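A sketch of this optimization follows. The yaw term is modeled, for illustration only, as a constant offset c added to every parallax, and a simple 1-D grid search stands in for the nonlinear optimization; the data, the model d = 0.3*D + c, and all names are assumptions, not the embodiment's actual implementation.

```python
import numpy as np

def formula1_cost(D, d):
    """Sum over all point pairs of {D_i*d_j - D_j*d_i}^2. This is zero
    exactly when the parallax is proportional to the white line width,
    i.e. the parallax reaches 0 at the vanishing point (width 0)."""
    cost = 0.0
    n = len(D)
    for i in range(n):
        for j in range(i + 1, n):
            cost += (D[i] * d[j] - D[j] * d[i]) ** 2
    return cost

def estimate_offset(D, d_obs, lo=-20.0, hi=20.0, steps=4001):
    """1-D search for the constant parallax offset c (a stand-in for the
    yaw term) minimizing the cost on the corrected parallax d_obs - c."""
    cs = np.linspace(lo, hi, steps)
    costs = [formula1_cost(D, d_obs - c) for c in cs]
    return cs[int(np.argmin(costs))]

# Synthetic data: true parallax proportional to the width (slope 0.3),
# plus a yaw-induced constant offset of 4.5 pixels.
D = np.array([30.0, 25.0, 20.0, 15.0, 10.0])
d_obs = 0.3 * D + 4.5
c = estimate_offset(D, d_obs)
```

The search recovers the injected offset of 4.5 pixels, and the residual cost at the optimum is essentially zero, matching the vanishing-point condition described above.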
After the parameter of the yaw angle is calculated and output by the external
In fig. 4, the description uses the left white line edge as the 1st reference line and the right white line edge as the 2nd reference line, but this is not limiting. The right white line edge may be used as the 1st reference line. Since the 1st reference line and the 2nd reference line are straight lines serving as references for correction, any reference straight lines may be used as the 1st reference line and the 2nd reference line.
The calculated extrinsic parameters of the current frame may be stored in the
When the external parameter is stored in the
Note that the calculation of the parallax need not be performed at S102. That is, in fig. 2, step S102 may be omitted. In this case, at S106 the parallax may be calculated based on the position of the white line of the 1st image detected at S100 and the position of the white line of the 2nd image corrected at S104, and the extrinsic parameter may then be calculated. In this way, each step may be moved appropriately, as long as it precedes any step that requires the information it produces.
As described above, according to the present embodiment, the extrinsic parameter can be calculated by correcting only the white line information, instead of correcting the entire image. By correcting only the white line information, the computation cost and the memory cost can be suppressed, and the correction can be performed in real time, for example during driving support.
More specifically, obtaining a correct yaw angle correction value requires corrected images, and therefore correction values for the roll angle and the pitch angle are needed first. Computing the yaw angle correction value after the roll angle and the pitch angle increases the delay; to avoid this, images of past frames would have to be stored in memory in advance, with the yaw angle corrected based on the stored information. According to the present embodiment, such past frame images need not be stored, and memory consumption can be reduced. Moreover, estimating the yaw angle often requires multiple frames, which further increases the delay. The present embodiment also avoids the increase in delay associated with such yaw angle estimation.
(embodiment 2)
In embodiment 1 described above, the white line must be detected in the 2nd image as well; in the present embodiment, the extrinsic parameter is calculated without detecting the white line in the 2nd image.
Fig. 5 is a block diagram showing functions of the driving support system 1 according to the present embodiment. The external
The white line estimation unit 210 estimates the 1st reference line of the 2nd image based on the 1st reference line and the 2nd reference line of the 1st image detected by the white line detection unit, and the parallax calculated by the parallax calculation unit.
Fig. 6 is a flowchart showing the processing of the image processing device according to the present embodiment.
First, the white
Next, the white line estimation unit 210 estimates a white line in the 2nd image based on the detection result of the white line edges in the 1st image and the parallax (S304). As an example, consider estimating the 1st reference line R1 of the 2nd image from the 1st reference line L1 of the 1st image shown in fig. 4.
Fig. 7 is a flowchart showing the estimation process of a white line in the 2nd image. First, a plurality of points constituting the 1st reference line of the 1st image detected by the white line detection unit are extracted (S400).
Next, for the extracted plurality of points, the corresponding points on the 1st reference line in the 2nd image are estimated using the parallax (S402). More specifically, for each extracted point in the 1st image, the corresponding point in the 2nd image is calculated using the parallax information at that point. Since this estimation uses the calculated parallax, the estimated points do not necessarily lie on a straight line in the 2nd image.
Therefore, the estimated points are then subjected to regression analysis, for example by the least squares method, to estimate the white line edge, in this case the expression of the 1st reference line R1 (S404). As in the correction of the white line in the above embodiment, estimating the white line does not require generating an estimated image; it suffices to calculate at least the expression of the 1st reference line R1.
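Steps S400 to S404 can be sketched as follows; the parallax map and line coefficients are hypothetical stand-ins for the values the parallax calculation unit would produce.

```python
import numpy as np

def estimate_line_in_second_image(points_L1, parallax_at):
    """Shift each extracted point of the 1st image's reference line by its
    parallax (S400-S402), then refit the line in the 2nd image by least
    squares (S404). Returns (a, b) of the line x = a*y + b."""
    ys = np.array([p[1] for p in points_L1], dtype=float)
    xs = np.array([p[0] - parallax_at(p) for p in points_L1], dtype=float)
    a, b = np.polyfit(ys, xs, 1)
    return a, b

# Points on L1 (x = 2*y + 10) and a hypothetical parallax map d = 0.1*x + 1.
pts = [(2 * y + 10, y) for y in range(0, 40, 5)]
a, b = estimate_line_in_second_image(pts, lambda p: 0.1 * p[0] + 1.0)
```

For this linear parallax map, the estimated 1st reference line of the 2nd image comes out as x = 1.8*y + 8, and no estimated image is generated at any point of the process.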
As described above, the white line estimation unit 210 estimates at least 1 reference line, a white line edge of the 2nd image, based on the detected white line edge information of the 1st image and the calculated parallax between the 1st image and the 2nd image.
Returning to fig. 6, the white
As described above, according to the present embodiment, the extrinsic parameter calculation process can be executed without generating a corrected image, as in the above embodiment, and without performing white line detection on the 2nd image. Since white line detection is generally more costly than the other processing, the present embodiment further reduces the computation cost.
Fig. 8 is a block diagram showing an example of hardware installation of the
Although the
The
The
The
The
The
The
The
As described above, in all the above description, at least a part of the
For example, a computer can be configured as the apparatus of the above embodiments by reading dedicated software stored in a computer-readable storage medium. The type of storage medium is not particularly limited. A computer can also be configured as the apparatus of the above embodiments by installing dedicated software downloaded via a communication network. In this way, information processing by software is concretely implemented using hardware resources.
Several embodiments of the present invention have been described, but these embodiments are presented as examples and are not intended to limit the scope of the invention. These new embodiments may be implemented in other various forms, and various omissions, substitutions, and changes may be made without departing from the spirit of the invention. These embodiments and modifications thereof are included in the scope and gist of the invention, and are included in the invention described in the claims and the equivalent scope thereof.
For example, in all the above embodiments, for simplicity of explanation, the parallax and the width between white lines are obtained based on detected white lines, but this is not limiting. Instead of white lines, lines of other colors such as orange, green, or blue may be used; curbs, guardrails, roadside walls, road signs painted on the road, and the like may also be used, as long as they form parallel straight lines. Furthermore, as long as the parallax can be obtained on the same line in the 1st image and the 2nd image, the width of a region between other objects, such as the width between a curb and a white line, may be obtained instead of the width between white lines. In this case, the white