Object detection device and method

Document No.: 1041589    Publication date: 2020-10-09

Reading note: This technology, "Object detection device and method" (物体检测装置及方法), was designed and created by 石崎将崇 on 2020-03-23. Its main content is as follows. The invention relates to an object detection device and method. The object detection device is configured to acquire a parallax image from images captured by a stereo camera and to derive the coordinates of objects in a world coordinate system from the parallax image. It is configured to derive a planned travel region in the world coordinate system from steering angle information and size information of a forklift, and to extract objects located in the planned travel region in the world coordinate system as priority objects. It is configured to determine whether an object is a person preferentially for the priority objects over objects different from the priority objects.

1. An object detection device configured to be mounted on a vehicle, the object detection device comprising:

a parallax image acquisition unit configured to acquire a parallax image in which parallax is associated with each pixel from an image captured by a stereo camera;

a coordinate derivation unit configured to derive coordinates of an object in a world coordinate system, which is a coordinate system in real space, from the parallax image;

a steering angle acquisition unit configured to acquire steering angle information of the vehicle;

a planned travel region derivation unit configured to derive a planned travel region of the vehicle in the world coordinate system based on the steering angle information and size information of the vehicle;

an extraction unit configured to extract a priority object, which is an object located in the planned travel region in the world coordinate system; and

a person determination unit configured to perform a person detection process on coordinates of objects on the image, giving priority to the priority object over an object different from the priority object.

2. The object detection device according to claim 1, wherein the person determination unit is configured to perform the person detection process on the coordinates of the objects on the image with the priority object closest to the vehicle given the highest priority.

3. The object detection device according to claim 1 or 2, wherein the planned travel region derivation unit is configured to make the planned travel region longer in the traveling direction of the vehicle as the speed of the vehicle increases.

4. The object detection device according to any one of claims 1 to 3, wherein the planned travel region derivation unit is configured to make the planned travel region wider in the vehicle width direction of the vehicle as the speed of the vehicle decreases.

5. An object detection method for detecting an object by an object detection device configured to be mounted on a vehicle, the method comprising:

acquiring a parallax image in which parallax is associated with each pixel from an image captured by a stereo camera;

deriving coordinates of an object in a world coordinate system, which is a coordinate system in real space, from the parallax image;

acquiring steering angle information of the vehicle;

deriving a planned travel region of the vehicle in the world coordinate system from the steering angle information and size information of the vehicle;

extracting an object located in the planned travel region in the world coordinate system as a priority object; and

performing a person detection process on coordinates of objects on the image, giving priority to the priority object over an object different from the priority object.

Technical Field

The invention relates to an object detection device and method.

Background

An object detection device for detecting an object such as a person or an obstacle is mounted on a moving body such as a vehicle. The object detection device described in Japanese Patent Laid-Open No. 2017-151815 divides an image captured by an imaging device into a plurality of regions and extracts, for each region, an image to be subjected to recognition processing. The object detection device performs a person detection process on the recognition-target images. The images to be recognized are extracted based on image processing such as luminance gradients or the Hough transform. The regions are set based on at least one of the turning direction, the turning speed, the traveling direction, and the traveling speed.

In the technique of Japanese Patent Laid-Open No. 2017-151815, the area subjected to the person detection process is large, so the processing load on the object detection device is heavy.

An object of the present invention is to provide an object detection device capable of reducing the processing load.

Disclosure of Invention

One aspect of the present invention that achieves the above object provides an object detection device configured to be mounted on a vehicle. The object detection device includes: a parallax image acquisition unit configured to acquire, from an image captured by a stereo camera, a parallax image in which a parallax is associated with each pixel; a coordinate derivation unit configured to derive coordinates of an object in a world coordinate system, which is a coordinate system in real space, from the parallax image; a steering angle acquisition unit configured to acquire steering angle information of the vehicle; a planned travel region derivation unit configured to derive a planned travel region of the vehicle in the world coordinate system based on the steering angle information and size information of the vehicle; an extraction unit configured to extract a priority object, which is an object located in the planned travel region in the world coordinate system; and a person determination unit configured to perform a person detection process on coordinates of objects on the image, giving priority to the priority object over an object different from the priority object.

Another aspect of the present invention that achieves the above object provides a method of detecting an object by an object detection device mounted on a vehicle. The method includes: acquiring, from an image captured by a stereo camera, a parallax image in which a parallax is associated with each pixel; deriving coordinates of an object in a world coordinate system, which is a coordinate system in real space, from the parallax image; acquiring steering angle information of the vehicle; deriving a planned travel region of the vehicle in the world coordinate system from the steering angle information and size information of the vehicle; extracting an object located in the planned travel region in the world coordinate system as a priority object; and performing a person detection process on coordinates of objects on the image, giving priority to the priority object over an object different from the priority object.

Drawings

Fig. 1 is a perspective view of a forklift on which a monitoring device is mounted.

Fig. 2 is a schematic block diagram of the forklift and the monitoring device.

Fig. 3 is a diagram showing the 1st image.

Fig. 4 is a flowchart showing processing performed by the object detection device.

Fig. 5 is a diagram showing coordinates of objects in the XY plane of the world coordinate system.

Fig. 6 is a diagram for explaining a method of deriving the planned travel region.

Fig. 7 is a diagram showing the positional relationship between objects on the XY plane and the planned travel region in the world coordinate system.

Detailed Description

Hereinafter, an embodiment of the object detection device will be described.

As shown in fig. 1, a forklift 10 as a vehicle includes a vehicle body 11, two drive wheels 12 and 13 disposed in the front lower portion of the vehicle body 11, two steered wheels 14 disposed in the rear lower portion of the vehicle body 11, and a load handling device 16. The drive wheels 12 and 13 are disposed apart from each other in the vehicle width direction. The two steered wheels 14 are disposed adjacent to each other in the vehicle width direction, at the center between the drive wheels 12 and 13 in the vehicle width direction. When the two adjacently disposed steered wheels 14 are regarded as a single steered wheel 14, the forklift 10 can be regarded as a three-wheeled forklift. The vehicle body 11 includes a head guard 15 provided above the driver's seat. In the forklift 10 of the present embodiment, traveling operations and cargo handling operations are performed by an operator riding on the vehicle.

As shown in fig. 2, the forklift 10 includes a main controller 20, a travel motor M1, a travel control device 23 for controlling the travel motor M1, a vehicle speed sensor 24, an orientation sensor 25, and a steering angle sensor 26. The main controller 20 performs control related to the travel operation and the cargo handling operation. The main controller 20 includes a CPU (Central Processing Unit) 21 and a memory 22 in which programs for performing various kinds of control are stored.

The orientation sensor 25 detects the operation direction of an orientation lever that indicates the traveling direction. The orientation sensor 25 detects whether the orientation lever is operated in the direction instructing forward movement or in the direction instructing backward movement, relative to the neutral position. The orientation sensor 25 outputs the detection result to the main controller 20. The steering angle sensor 26 detects a steering angle θ1 of the steered wheels 14 and outputs the detection result to the main controller 20.

The CPU 21 of the main controller 20 gives a command for the rotation speed of the travel motor M1 to the travel control device 23 so that the vehicle speed of the forklift 10 becomes the target speed. The travel control device 23 of the present embodiment is a motor driver. The vehicle speed sensor 24 of the present embodiment is a rotation speed sensor that detects the rotation speed, i.e., the number of revolutions per unit time, of the travel motor M1. The vehicle speed sensor 24 outputs the rotation speed of the travel motor M1 to the travel control device 23. The travel control device 23 controls the travel motor M1 based on the command from the main controller 20 so that the rotation speed of the travel motor M1 matches the command. The main controller 20 can acquire the detection result of the vehicle speed sensor 24 from the travel control device 23.

The monitoring device 30 is mounted on the forklift 10. The monitoring device 30 includes a stereo camera 31 and an object detection device 41 that detects objects from images captured by the stereo camera 31. As shown in fig. 1, the stereo camera 31 is disposed on the head guard 15 so as to look down from above the forklift 10 on the road surface on which the forklift 10 travels. The stereo camera 31 of the present embodiment photographs the area behind the forklift 10. Therefore, the objects detected by the object detection device 41 are objects behind the forklift 10.

As shown in fig. 2, the stereo camera 31 includes two cameras 32 and 33. As the cameras 32 and 33, for example, CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) image sensors are used. The cameras 32 and 33 are arranged so that their optical axes are parallel to each other. In the present embodiment, the two cameras 32 and 33 are arranged side by side horizontally. One of the two cameras 32 and 33 is referred to as the 1st camera 32, and the other as the 2nd camera 33. When the image captured by the 1st camera 32 is defined as the 1st image and the image captured by the 2nd camera 33 as the 2nd image, the same object appears in the 1st image and the 2nd image shifted in the lateral direction. Specifically, when the same object is captured, a shift of a number of pixels [px] in the lateral direction occurs between the object in the 1st image and the object in the 2nd image, according to the distance between the cameras 32 and 33. The 1st image and the 2nd image have the same number of pixels; for example, 640 × 480 [px] (VGA) images are used. The 1st image and the 2nd image are images represented by RGB signals.

The object detection device 41 includes a CPU 42 and a storage unit 43 including a RAM (Random Access Memory), a ROM (Read-Only Memory), and the like. The storage unit 43 stores various programs for detecting objects from images captured by the stereo camera 31. The object detection device 41 may include dedicated hardware, such as an Application-Specific Integrated Circuit (ASIC), for executing at least a part of the various processes. That is, the object detection device 41 may be configured as one or more processors operating according to a computer program, one or more dedicated hardware circuits such as ASICs, or a processing circuit combining these. The processor includes a CPU and memories such as a RAM and a ROM. The memories store program code or instructions configured to cause the CPU to execute the processing. The memories, i.e., computer-readable media, include any available media that can be accessed by a general-purpose or dedicated computer.

The object detection device 41 CAN transmit and receive data to and from the main Controller 20 by performing communication according to a communication protocol for a vehicle, such as CAN (Controller Area Network) or LIN (Local Interconnect Network).

The object detection process performed by the object detection device 41 will be described below. The object detection process is repeated at a predetermined control cycle while the forklift 10 is in the activated state and traveling backward. The activated state is a state in which the forklift 10 can perform traveling operations and cargo handling operations. Whether the forklift 10 is traveling backward can be grasped from the detection result of the orientation sensor 25. The main controller 20 may cause the object detection device 41 to perform the object detection process by giving it a command when the forklift 10 travels backward. Alternatively, the object detection device 41 may itself determine whether to perform the object detection process by acquiring the detection result of the orientation sensor 25 from the main controller 20.

In the following description, the object detection process in a case where the environment shown in fig. 3 is photographed by the stereo camera 31 will be described as an example. Fig. 3 is a 1st image I1 obtained by photographing the area behind the forklift 10. As can be understood from the 1st image I1, objects A, B, C, D, E, which are persons or objects other than persons, are present behind the forklift 10. Note that, for convenience of explanation, the coordinates on the 1st image I1 where the objects A, B, C, D, E exist are indicated by frames, but no frames exist in the actual 1st image I1.

As shown in fig. 4, in step S1 the object detection device 41 acquires a parallax image. The parallax image is an image in which a parallax [px] is associated with each pixel. The parallax is obtained by comparing the 1st image I1 with the 2nd image and calculating, for the same feature point appearing in both images, the difference in pixel position between the two images. A feature point is a portion that can be recognized as a boundary, such as the edge of an object. Feature points can be detected from luminance information or the like.

The object detection device 41 acquires the 1st image I1 and the 2nd image of the same frame from the video captured by the stereo camera 31. Using a RAM that temporarily stores each image, the object detection device 41 performs conversion from RGB to YCrCb. The object detection device 41 may also perform image processing such as distortion correction and edge enhancement. The object detection device 41 performs stereo processing that calculates parallax by comparing the similarity between the pixels of the 1st image I1 and the pixels of the 2nd image. As the stereo processing, a method of calculating a parallax for each pixel may be used, or a block-matching method of dividing each image into blocks each containing a plurality of pixels and calculating a parallax for each block may be used. The object detection device 41 acquires the parallax image using the 1st image I1 as the reference image and the 2nd image as the comparative image. The object detection device 41 extracts, for each pixel of the 1st image I1, the most similar pixel of the 2nd image, and calculates the difference in the number of pixels in the lateral direction between the pixel of the 1st image I1 and the most similar pixel as the parallax. A parallax image in which a parallax is associated with each pixel of the 1st image I1, the reference image, can thus be acquired. The parallax image need not actually be displayed; it refers to data in which a parallax is associated with each pixel. The object detection device 41 may also perform a process of removing the parallax of the road surface from the parallax image. By performing the process of step S1, the object detection device 41 corresponds to the parallax image acquisition unit.
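As a rough illustration of the block-matching idea described above (not the device's actual implementation), the sketch below searches a single scanline for the disparity that minimizes the sum of absolute differences (SAD) between small windows. The function name, window size, and search range are assumptions; a real stereo matcher compares 2-D blocks over the whole image.

```python
def disparity_for_pixel(left_row, right_row, x, block=3, max_d=16):
    """Disparity of pixel x in one image row, by minimizing the sum of
    absolute differences (SAD) between small windows of the reference
    (1st) row and the comparative (2nd) row."""
    half = block // 2
    ref = left_row[x - half : x + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_d + 1):
        xr = x - d  # candidate position in the comparative (2nd) image
        if xr - half < 0:
            break
        cand = right_row[xr - half : xr + half + 1]
        cost = sum(abs(a - b) for a, b in zip(ref, cand))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

For a feature shifted by 3 pixels between the rows, the search recovers a disparity of 3.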

Next, in step S2, the object detection device 41 derives the coordinates of the feature points in the world coordinate system. First, the object detection device 41 derives the coordinates of the feature points in the camera coordinate system. The camera coordinate system is a three-axis orthogonal coordinate system in which the optical axis is the Z axis and the two axes orthogonal to the optical axis are the X axis and the Y axis. The coordinates of a feature point in the camera coordinate system are represented by a Z coordinate Zc, an X coordinate Xc, and a Y coordinate Yc in the camera coordinate system, which can be derived using the following expressions (1) to (3), respectively.

Zc = B·f/d    (1)

Xc = (xp − x′)·Zc/f    (2)

Yc = (yp − y′)·Zc/f    (3)

In the formulae (1) to (3), B is a base length [ mm ], f is a focal length [ mm ], and d is a parallax [ px ]. xp is an arbitrary X coordinate in the parallax image, and X' is an X coordinate of the center coordinate of the parallax image. yp is an arbitrary Y coordinate in the parallax image, and Y' is a Y coordinate of the center coordinate of the parallax image.

The coordinates of the feature points in the camera coordinate system are derived by defining xp as the X-coordinates of the feature points in the parallax image, yp as the Y-coordinates of the feature points in the parallax image, and d as the parallax associated with the coordinates of the feature points.
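Expressions (1) to (3) can be sketched directly in code. The function below is an illustration, not the device's implementation; note that it treats the focal length f as being in pixel units so that the units of (1) work out, which is an assumption (the text lists both B and f in mm).

```python
def camera_coords(xp, yp, d, B, f, cx, cy):
    """Camera coordinates of a feature point from equations (1)-(3):
    depth Zc from the disparity d, then Xc and Yc by back-projecting the
    pixel offset from the image centre (cx, cy) = (x', y').
    B: baseline, f: focal length in pixel units (assumption)."""
    Zc = B * f / d            # (1)
    Xc = (xp - cx) * Zc / f   # (2)
    Yc = (yp - cy) * Zc / f   # (3)
    return Xc, Yc, Zc
```

For example, with a 100 mm baseline, f = 800 px, and an 8 px disparity, a pixel 64 px right of the image centre maps to Zc = 10000 mm and Xc = 800 mm.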

Here, a 3-axis orthogonal coordinate system in which an axis extending in the traveling direction of the forklift 10 is a Y axis, an axis extending in the vertical direction is a Z axis, and an axis orthogonal to the Y axis and the Z axis is an X axis is defined as a world coordinate system which is a three-dimensional coordinate system in real space. The coordinates of the feature points of the world coordinate system may be represented by an X coordinate Xw, a Y coordinate Yw, and a Z coordinate Zw in the world coordinate system.

The object detection device 41 performs world coordinate conversion for converting camera coordinates into world coordinates using the following equation (4).
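The image for equation (4) is not reproduced in this text. A camera-to-world transform consistent with the definitions of H and θ given in the following paragraph, namely a rotation about the camera X axis plus the mounting height, would take the form:

```latex
\begin{pmatrix} X_w \\ Y_w \\ Z_w \end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos\theta & \sin\theta \\
0 & -\sin\theta & \cos\theta
\end{pmatrix}
\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix}
+
\begin{pmatrix} 0 \\ 0 \\ H \end{pmatrix}
\qquad (4)
```

This form should be treated as a reconstruction: with a horizontally mounted camera (θ = 90°) it reduces to Yw = Zc and Zw = H − Yc, which matches the distance and height interpretations given for Yw and Zw.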

Here, H in equation (4) is the mounting height [mm] of the stereo camera 31 in the world coordinate system, and θ is the angle formed by the optical axes of the cameras 32 and 33 with the horizontal plane, plus 90°.

The X coordinate Xw in the world coordinates obtained by the world coordinate conversion represents the distance from the forklift 10 to the feature point in the left-right direction of the forklift 10. The Y-coordinate Yw represents a distance from the forklift 10 to the feature point in the traveling direction of the forklift 10. The Z-coordinate Zw represents the height from the road surface to the feature point.
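A code sketch of the camera-to-world conversion follows. Since the original equation (4) is image-only, the rotation used here is a reconstruction from the definitions of H and θ and should be treated as an assumption; it does reproduce the stated interpretations of Yw and Zw for a horizontally mounted camera.

```python
import math

def world_coords(Xc, Yc, Zc, H, theta_deg):
    """Camera-to-world conversion in the spirit of equation (4):
    a rotation about the camera X axis by theta (optical-axis tilt
    + 90 degrees) plus the camera mounting height H [mm]."""
    t = math.radians(theta_deg)
    Xw = Xc
    Yw = Yc * math.cos(t) + Zc * math.sin(t)
    Zw = H - Yc * math.sin(t) + Zc * math.cos(t)
    return Xw, Yw, Zw
```

With θ = 90° (camera looking horizontally), a point 5000 mm along the optical axis and 300 px-projected mm below the centre gives Yw = 5000 and Zw = H − 300, as expected.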

Next, in step S3, the object detection device 41 derives the X coordinates Xw and Y coordinates Yw of the objects A, B, C, D, E on the XY plane, the coordinate plane representing the horizontal plane in the world coordinate system. The X coordinate Xw and Y coordinate Yw of each object A, B, C, D, E on the XY plane can be derived using various methods. For example, the object detection device 41 performs clustering that regards feature points located within a predetermined range of each other as one point group, based on the world coordinates of the feature points derived in step S2. The object detection device 41 regards each clustered point group as one object A, B, C, D, E, and grasps the X coordinate Xw, Y coordinate Yw, and Z coordinate Zw of the object from the coordinates of the feature points constituting the clustered point group. For example, the coordinates of feature points located at the ends of the clustered point group may be used as the coordinates of the object A, B, C, D, E, or the coordinates of the feature point at the center of the point group may be used. As shown in fig. 5, the object detection device 41 then projects the X coordinate Xw, Y coordinate Yw, and Z coordinate Zw of each object A, B, C, D, E onto the XY plane of the world coordinate system, thereby deriving the X coordinate Xw and Y coordinate Yw of each object on the XY plane. That is, the object detection device 41 derives the horizontal-direction X coordinate Xw and Y coordinate Yw of each object A, B, C, D, E by removing the Z coordinate Zw from its coordinates.
By performing the process of step S3, the object detection device 41 corresponds to the coordinate derivation unit.
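The clustering and projection of step S3 can be sketched as follows. The text only says that feature points "within a predetermined range" form one group, so the greedy merging rule, the 200 mm radius, and the use of the centroid (one of the options the text mentions) are all assumptions.

```python
def cluster_points(points, radius=200.0):
    """Greedy clustering sketch: a point joins the first cluster that has
    a member within `radius` (mm); otherwise it starts a new cluster."""
    clusters = []
    for p in points:
        for c in clusters:
            if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                   + (p[2] - q[2]) ** 2 <= radius ** 2 for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def object_xy(cluster):
    """Project a point group to the XY plane: take the centroid of the
    cluster and drop the Z coordinate."""
    n = len(cluster)
    return (sum(p[0] for p in cluster) / n, sum(p[1] for p in cluster) / n)
```

Three nearby points and one distant point yield two objects, the first at the centroid of its three members.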

Next, as shown in figs. 4, 6, and 7, in step S4 the object detection device 41 derives the planned travel region AP of the forklift 10. The planned travel region AP is the region through which the forklift 10 is predicted to pass if it continues traveling at the steering angle θ1 at the time the processing of step S4 is performed. The object detection device 41 acquires the detection result of the steering angle sensor 26 and the detection result of the vehicle speed sensor 24 from the main controller 20. The detection result of the steering angle sensor 26 is the steering angle information, and the detection result of the vehicle speed sensor 24 is the speed information. Therefore, the object detection device 41 corresponds to the steering angle acquisition unit and a speed acquisition unit.

The planned travel region AP of the forklift 10 is derived from the steering angle θ1 and the size information of the forklift 10. First, the object detection device 41 derives planned travel paths Rrr and Rrl from the steering angle θ1 and the size information of the forklift 10. The size information of the forklift 10 includes a dimension L1 [mm] from the center axes of the drive wheels 12, 13 to the rear end of the vehicle body 11, a wheelbase L2 [mm], and a vehicle width W [mm]. The size information of the forklift 10 is stored in the memory 22 of the main controller 20 or in the storage unit 43 of the object detection device 41. When the memory 22 stores the size information of the forklift 10, the object detection device 41 acquires it from the main controller 20.

As can be understood from fig. 6, the planned travel paths Rrr and Rrl are derived as turning radii about the turning center Pr. They are derived individually: the planned travel path Rrl for the left end of the vehicle body 11 and the planned travel path Rrr for the right end of the vehicle body 11. The turning center Pr is the intersection of a virtual line segment L3 extending the center axes of the drive wheels 12 and 13 and a virtual line segment L4 extending the center axis of the steered wheels 14. The object detection device 41 derives the planned travel paths Rrr and Rrl from the following equations (5) and (6).

Rrl = √((L2/tan θ1 + W/2)² + L1²)    (5)

Rrr = √((L2/tan θ1 − W/2)² + L1²)    (6)

(Reconstructed from the geometry described above, for a turn in which the turning center Pr lies on the right side of the vehicle body; the + and − terms are exchanged for a turn in the opposite direction.)

The X coordinates Xw and Y coordinates Yw of the planned travel paths Rrr, Rrl in the world coordinate system can be derived from the planned travel paths Rrr, Rrl obtained from equations (5) and (6) and a turning angle Φ measured from the line segment L3. The origin O of the XY plane is the center position between the drive wheels 12 and 13. The turning angle Φ is the angle formed by the line segment L3 at the current position and the line segment L3 at the destination position when the forklift 10 moves from the current position while maintaining the steering angle θ1. The turning angle Φ determines how far ahead of the current position the planned travel paths Rrr, Rrl are derived. That is, the length of the planned travel paths Rrr, Rrl in the traveling direction of the forklift 10 is determined by the turning angle Φ.
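The geometry of the turning radii can be sketched in code. Since the original equations (5) and (6) are image-only, the exact form below is reconstructed from the described geometry (turning center Pr on the drive-axle line, rear corners offset by ±W/2 laterally and L1 longitudinally) and should be treated as an assumption.

```python
import math

def planned_travel_radii(steer_deg, L1, L2, W):
    """Turning radii of the outer and inner rear corners of the vehicle
    body about the turning centre Pr.
    steer_deg: steered-wheel angle theta1; L1: drive axle to body rear
    end [mm]; L2: wheelbase [mm]; W: vehicle width [mm]."""
    R0 = L2 / math.tan(math.radians(steer_deg))  # centreline turning radius
    R_outer = math.hypot(R0 + W / 2, L1)
    R_inner = math.hypot(abs(R0 - W / 2), L1)
    return R_outer, R_inner
```

For θ1 = 45°, L2 = 1500, L1 = 500, W = 1000, the centreline radius is 1500 mm, giving outer and inner corner radii of √4250000 and √1250000 mm.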

As shown in fig. 7, the object detection device 41 defines as the width of the planned travel region AP a region obtained by adding, to a 1st region A1 between the planned travel paths Rrr and Rrl, a 2nd region A2 located further outside in the vehicle width direction of the forklift 10 than the planned travel paths Rrr, Rrl. The 2nd region A2, outside the planned travel paths Rrr, Rrl in the vehicle width direction, is a margin. Here, "outside in the vehicle width direction" is based not on the current position of the forklift 10 but on the position of the forklift 10 when it is located on the planned travel paths Rrr, Rrl. The length of the planned travel paths Rrr, Rrl in the traveling direction of the forklift 10 is the same as the length of the planned travel region AP in the traveling direction. Therefore, the length of the planned travel region AP in the traveling direction of the forklift 10 is also determined by the turning angle Φ.

In the present embodiment, the turning angle Φ and the width of the planned travel region AP are changed according to the speed of the forklift 10. The object detection device 41 increases the turning angle Φ as the speed of the forklift 10 increases. The storage unit 43 of the object detection device 41 stores a map or relational expression relating the speed of the forklift 10 to the turning angle Φ, and the object detection device 41 derives the turning angle Φ from the speed of the forklift 10. The map or relational expression is set, for example, so as to derive a value larger than the turning angle predicted for the distance the forklift 10 travels within one control cycle from the time the processing of step S4 is performed. As the turning angle Φ becomes larger, the planned travel region AP becomes longer in the traveling direction of the forklift 10.

The object detection device 41 widens the 2nd region A2 as the speed of the forklift 10 decreases. The storage unit 43 of the object detection device 41 stores a map or relational expression relating the speed of the forklift 10 to the 2nd region A2, and the object detection device 41 derives the width of the 2nd region A2 from the speed of the forklift 10. When the 2nd region A2 is widened, the planned travel region AP becomes wider in the vehicle width direction of the forklift 10. By performing the process of step S4, the object detection device 41 corresponds to the planned travel region derivation unit.
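The speed-dependent maps described above can be sketched as piecewise-linear lookup tables. The table values below are illustrative assumptions only; the text does not give the stored values, just the monotonic trends (turning angle grows with speed, margin shrinks with speed).

```python
def interp_map(table, x):
    """Piecewise-linear lookup standing in for the stored map or
    relational expression that relates vehicle speed to a region
    parameter; clamps outside the table range."""
    table = sorted(table)
    if x <= table[0][0]:
        return table[0][1]
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return table[-1][1]

# Assumed example tables (speed [km/h] -> value):
TURN_ANGLE_DEG = [(0.0, 10.0), (10.0, 45.0)]   # turning angle grows with speed
MARGIN_MM      = [(0.0, 500.0), (10.0, 100.0)] # margin width shrinks with speed
```

At 5 km/h the assumed table gives a 27.5° turning angle; at standstill the margin takes its maximum width.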

Next, as shown in fig. 4, in step S5 the object detection device 41 extracts the objects A, B, C located in the planned travel region AP as priority objects A, B, C. The X coordinates Xw and Y coordinates Yw of the objects A, B, C, D, E in the world coordinate system were derived in step S3, and the X coordinates Xw and Y coordinates Yw of the planned travel region AP in the world coordinate system were derived in step S4. Therefore, the object detection device 41 can grasp the positional relationship between the two.

Fig. 7 shows the relationship between the coordinates of the objects A, B, C, D, E on the XY plane of the world coordinate system and the coordinates of the planned travel region AP. As can be seen from fig. 7, the priority objects A, B, C are located in the planned travel region AP. An object being "located in the planned travel region AP" means that at least a part of the object occupies the same coordinates as the planned travel region AP on the XY plane of the world coordinate system. By performing the process of step S5, the object detection device 41 corresponds to the extraction unit.
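One way to sketch the membership test of step S5 is to model the region swept between the inner and outer planned travel paths as an annular sector around the turning center, up to the turning angle Φ. This modelling is an assumption; the text only requires that the object share coordinates with the region.

```python
import math

def is_in_planned_region(obj_xy, pr_xy, r_inner, r_outer, phi_deg):
    """True when the object's XY position lies in the annular sector
    between the inner and outer turning radii about the turning centre
    pr_xy, within the swept angle [0, phi_deg] (angle convention is a
    simplifying assumption)."""
    dx = obj_xy[0] - pr_xy[0]
    dy = obj_xy[1] - pr_xy[1]
    r = math.hypot(dx, dy)
    ang = math.degrees(math.atan2(dy, dx))
    return r_inner <= r <= r_outer and 0.0 <= ang <= phi_deg
```

An object whose radius from the turning center falls between the two path radii counts as a priority object; one outside the outer radius does not.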

Next, as shown in fig. 4, in step S6 the object detection device 41 sets priorities for the objects A, B, C, D, E. The object detection device 41 derives the distance from the forklift 10 to each object A, B, C, D, E, namely the Euclidean distance from the origin O to the coordinates of the object. Among the priority objects A, B, C, the object detection device 41 assigns higher priority in order of increasing distance from the forklift 10: the priority object B is given the highest priority, the priority object C the 2nd priority, and the priority object A the 3rd priority. Priorities may or may not also be set for the objects D, E different from the priority objects A, B, C. When priorities are set for them, they may be assigned in order of distance from the forklift 10 or in order of proximity to the planned travel region AP. That is, the handling of objects other than the priority objects A, B, C is arbitrary.
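The ordering rule of step S6 is a straightforward sort by Euclidean distance from the origin O; the tuple representation of an object is an assumption for illustration.

```python
import math

def prioritize(priority_objects):
    """Order priority objects by Euclidean distance from the origin O
    (the centre position between the drive wheels), nearest first.
    Each object is a (name, (Xw, Yw)) pair."""
    return sorted(priority_objects,
                  key=lambda o: math.hypot(o[1][0], o[1][1]))
```

With objects at distances 5000, 1000, and 2000 from O, the order B, C, A of the embodiment falls out directly.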

Next, in step S7, the object detection device 41 performs the person detection process for determining whether each object A, B, C, D, E is a person. The object detection device 41 first determines whether the priority object B closest to the forklift 10 is a person, giving it the highest priority. The object detection device 41 then performs the person detection process on the other objects within the control cycle, in the priority order: priority object B → priority object C → priority object A. If time remains within the control cycle, the person detection process may also be performed on the objects D, E other than the priority objects A, B, C. When the control cycle elapses, the object detection device 41 ends the object detection process. The object detection process ends with the elapse of the control cycle even when the person detection process has not been performed on the objects A, C, D, E other than the priority object B closest to the forklift 10.
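The budgeted loop of step S7 can be sketched as follows. The real device is bounded by the wall-clock control cycle; here the budget is simplified to a count of detections per cycle, which is an assumption for testability.

```python
def run_detection_cycle(ordered_objects, detect, budget):
    """Run the (expensive) person-detection function on objects in
    priority order, stopping when the per-cycle budget is exhausted.
    Objects not reached simply wait for a later cycle."""
    results = {}
    for name, coords in ordered_objects:
        if budget <= 0:
            break  # control cycle over: remaining objects are skipped
        results[name] = detect(coords)
        budget -= 1
    return results
```

With a budget of two detections, only the two highest-priority objects are processed in this cycle and the rest are left for the next one.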

As described above, the determination of whether an object is a person is performed for the priority objects A, B, and C in preference to the objects D and E, which are not priority objects. "Priority" here includes a mode in which whether the priority objects A, B, and C are persons is determined before the same determination is made for the objects D and E.

The determination as to whether each of the objects A, B, C, D, and E is a person can be performed by the following processing. First, the object detection device 41 converts the world coordinates of each object into camera coordinates. The conversion from world coordinates to camera coordinates can be performed using the following equation (7).
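As a sketch of the general form such a conversion takes (assuming a rotation matrix $R$ from the world frame to the camera frame and a camera position $T$ in world coordinates; the actual equation (7) of this embodiment may differ in parameterization):

```latex
\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix}
= R \left( \begin{pmatrix} X_w \\ Y_w \\ Z_w \end{pmatrix} - T \right)
```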

The camera coordinates of each of the objects A, B, C, D, and E can be derived by substituting the world coordinates of that object for the X coordinate Xw, the Y coordinate Yw, and the Z coordinate Zw in equation (7). In the present embodiment, the world coordinates of the objects are coordinates on the XY plane, so the Z coordinate Zw is 0.

Next, the object detection device 41 derives the coordinates of each of the objects A, B, C, D, and E in the 1st image I1 from its camera coordinates using the following equations (8) and (9).
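In the standard pinhole model, with an assumed focal length $f$ expressed in pixels and an assumed image centre $(c_x, c_y)$, a projection of this kind takes the form (a sketch; the actual equations (8) and (9) of this embodiment may differ):

```latex
x_p = f \, \frac{X_c}{Z_c} + c_x \qquad (8)
\qquad\qquad
y_p = f \, \frac{Y_c}{Z_c} + c_y \qquad (9)
```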


By substituting the camera coordinates of each of the objects A, B, C, D, and E for the X coordinate Xc, the Y coordinate Yc, and the Z coordinate Zc in equations (8) and (9), the coordinates of that object on the 1st image I1 can be derived.

The object detection device 41 determines whether each of the objects A, B, C, D, and E is a person by performing the person detection processing at that object's coordinates on the 1st image I1. The region processed may include coordinates around the derived coordinates in addition to the coordinates obtained from equations (8) and (9). The person detection processing extracts feature amounts from the 1st image I1 using a feature extraction method such as HOG (Histogram of Oriented Gradients) or SIFT (Scale-Invariant Feature Transform). The object detection device 41 can thereby determine whether each object is a person or something other than a person. Since the coordinates of the objects A, B, C, D, and E were derived in step S3, the object detection device 41 can grasp the positional relationship between the forklift 10 and an object determined to be a person. By performing the process of step S7, the object detection device 41 corresponds to the person determination unit.
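The patent names HOG and SIFT only as examples of feature extraction methods. As an illustration of the basic HOG building block (not the device's actual classifier; the function name and parameters are hypothetical), the following sketch computes a normalised histogram of gradient orientations for one image cell:

```python
import numpy as np

def hog_cell_histogram(patch, bins=9):
    """Minimal HOG building block: an L2-normalised histogram of
    unsigned gradient orientations, magnitude-weighted, for one cell."""
    gy, gx = np.gradient(patch.astype(float))  # row and column gradients
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist = np.zeros(bins)
    idx = (ang / (180.0 / bins)).astype(int) % bins
    for i, m in zip(idx.ravel(), mag.ravel()):
        hist[i] += m
    return hist / (np.linalg.norm(hist) + 1e-6)

# A patch containing a vertical edge: all gradient energy is
# horizontal, so orientation 0 degrees (bin 0) dominates.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
hist = hog_cell_histogram(patch)
print(np.argmax(hist))  # 0
```

A full HOG descriptor concatenates many such cell histograms over a detection window and feeds them to a trained classifier; that training step is outside the scope of this sketch.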

As described above, the object detection device 41 includes, as functional elements, a parallax image acquisition unit, a coordinate derivation unit, a steering angle acquisition unit, a planned travel region derivation unit, an extraction unit, a person determination unit, and a speed acquisition unit.

The operation of the present embodiment will be described.

The object detection device 41 derives the planned travel area AP of the forklift 10 in the world coordinate system from the steering angle information and the size information of the forklift 10. Among the objects A, B, C, D, and E, those that can obstruct travel of the forklift 10 are the priority objects A, B, and C located on the planned travel area AP. By extracting the priority objects A, B, and C located on the planned travel area AP using the world coordinate system and performing the person detection processing on their coordinates on the 1st image I1 in preference to the coordinates of the objects D and E, a person obstructing travel can be detected quickly.

In the forklift 10, when a detected object is a person, processing different from that for a non-person object may be performed. For example, when the monitoring device 30 detects a person, the main controller 20 notifies the operator that a person is nearby. The notification may be made by any method, such as a display that shows it or a buzzer that announces it audibly. The main controller 20 may also notify people around the forklift 10 so that they recognize that the forklift 10 is nearby.

Here, the object detection processing is repeated every predetermined control cycle. Within the object detection processing, the person detection processing that determines whether each of the objects A, B, C, D, and E is a person imposes a heavy processing load, which becomes especially large when person detection is performed over the entire area of the 1st image I1. A forklift 10 turns sharply more frequently than a passenger car and often uses a stereo camera 31 with a wider angle of view than one mounted on a passenger car. The area over which the object detection device 41 mounted on the forklift 10 performs person detection is therefore particularly likely to grow, and with it the processing load.

When the processing load is large, the person detection processing may not finish for all of the objects A, B, C, D, and E within one control cycle. If person detection were performed simply in order of proximity to the forklift 10, without considering the planned travel area AP, objects that do not obstruct travel could be processed before objects that do. In that case, the person detection processing might not reach an obstructing object within the control cycle, so a person obstructing travel would go undetected or be detected late. If, to keep the control cycle while performing person detection on every object in the 1st image I1, an object detection device 41 with higher processing capability were used, the manufacturing cost would increase. Conversely, if the processing capability of the object detection device 41 is kept as it is and person detection is performed on every object within each cycle, the control cycle must be lengthened, which delays detection of a person obstructing travel.

In the present embodiment, whether the priority objects A, B, and C are persons is determined preferentially, and among them the determination for priority object B, closest to the forklift 10, is given the highest priority: it must be completed within the control cycle. Priority object B is the object most likely to obstruct travel among the priority objects located on the planned travel area AP. The priority objects A and C are farther from the forklift 10 than priority object B, and either of them becomes the closest priority object only after the forklift 10 has travelled past priority object B. Therefore, even if person detection for priority objects A and C is deferred to the next or later control cycles, their priority can be lower than that of the closest priority object B. The objects D and E, which are not located on the planned travel area AP, are even less likely to obstruct travel than priority objects A and C; if a change in the steering angle θ1 of the forklift 10 brings one of them onto the planned travel area, it becomes a priority object in the next or later control cycles. The objects D and E therefore have the lowest priority for person detection. Thus, in the object detection processing repeated every control cycle, as long as it can at least be determined whether the closest priority object B is a person, there is considered to be no practical problem. If, in addition, it can be determined within the control cycle whether the priority objects A and C are persons, the forklift 10 can be operated still more appropriately.

Since the object detection device 41 only needs to determine within each cycle whether the priority object B closest to the forklift 10 is a person, its processing load is smaller than when person detection is performed on all of the objects A, B, C, D, and E. Likewise, because the control cycle need only be long enough for that determination, it can be kept shorter than when person detection is performed on every object.

The effects of the present embodiment will be described.

(1) The object detection device 41 determines whether the priority objects A, B, and C are persons in preference to the objects D and E, which are not priority objects. A person obstructing travel can therefore be detected quickly. Moreover, an image captured by the stereo camera 31 is used for the person detection processing. If person detection were performed on an image captured by a monocular camera, the coordinates in real space of a region where a person is detected could only be derived after person detection had been performed on the entire area of the image overlapping the planned travel area AP; the region subjected to person detection would be large, and the processing load of the object detection device 41 would be heavy. In contrast, when an image captured by the stereo camera 31 is used, the coordinates of the objects A, B, C, D, and E on the 1st image I1 can be derived before the person detection processing is performed, so the region subjected to person detection is smaller than with a monocular camera. Using the stereo camera 31 thus reduces the processing load of the object detection device 41 compared with using a monocular camera. In addition, with a monocular camera, the position at which a person appears in the image would have to be known in advance from the mounting position of the camera and its lens; with the stereo camera 31, that advance knowledge is unnecessary.

(2) Among the priority objects A, B, and C, the determination of whether the priority object B closest to the forklift 10 is a person is performed with the highest priority. Of the persons appearing in the 1st image I1, the closer a person is to the forklift 10, the more quickly that person should be detected. By giving the highest priority to the closest priority object B, a person who obstructs travel and is closest to the forklift 10 can be detected quickly.

(3) In the object detection device 41, the higher the speed of the forklift 10, the longer the planned travel area AP. The faster the forklift 10 moves, the more it approaches a person within one control cycle and the shorter their separation distance becomes. By lengthening the planned travel area AP as the speed increases, a person can be detected before the separation distance between the forklift 10 and the person becomes too short.

(4) In the object detection device 41, the lower the speed of the forklift 10, the wider the planned travel area AP in the vehicle width direction. The slower the forklift 10 moves, the more easily it can turn, and the more easily it can move away from the predetermined travel paths Rrr and Rrl in the next and later control cycles as the travel direction changes. By widening the planned travel area AP in the vehicle width direction at low speed, person detection is matched to the speed of the forklift 10. Specifically, when the travel direction of the forklift 10 is likely to change with the steering angle θ1, enlarging the planned travel area AP in the vehicle width direction enables quick detection of a person located where the forklift 10 may travel.
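Effects (3) and (4) together describe an area that lengthens with speed and widens at low speed. A minimal sketch of such a sizing rule follows; every constant here (base length, track width, margin, the 5 m/s reference speed) is an assumption for illustration, not a value from the patent.

```python
def planned_area_size(speed_mps, cycle_s=0.1, base_len=3.0,
                      track_w=1.2, margin=0.5):
    """Illustrative sizing of the planned travel area AP:
    longer at high speed, wider at low speed."""
    # Length grows with the distance covered over several cycles,
    # so a person is found before the separation gets too short.
    length = base_len + speed_mps * cycle_s * 10
    # Width shrinks toward the vehicle track as speed rises, since a
    # fast-moving truck is less likely to change direction sharply.
    width = track_w + margin * max(0.0, 1.0 - speed_mps / 5.0)
    return length, width

slow = planned_area_size(1.0)  # short but wide area
fast = planned_area_size(4.0)  # long but narrow area
print(slow, fast)
```

The monotonic relationships (length increasing in speed, width decreasing in speed) are the point of the sketch; a real device would derive both from the steering angle θ1 and the vehicle dimensions as described earlier.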
