Real-time trailer coupling positioning and tracking


Inventor: E. J. Ramirez Llanos. Filed: 2019-05-01.

Abstract: A method for detecting and locating a trailer coupler (212) of a trailer (200) is provided. The method includes receiving an image (143) from a camera (142) located on a rear portion of a towing vehicle (100) and determining a region of interest (300) within the image (143). The region of interest (300) includes a representation of the trailer coupler (212). The method includes determining a camera plane (310) and a road plane (320). Additionally, the method includes determining a three-dimensional point cloud representing objects inside the region of interest (300) and within the camera plane (310) and the road plane (320). The method also includes receiving sensor data from a sensor system (140) and determining a coupler position of the trailer coupler based on the 3D point cloud and the sensor data. The method further includes sending instructions to a drive system (110) to autonomously drive the towing vehicle (100) toward the coupler position along a path in a rearward direction.

1. A method for detecting and locating a trailer coupler of a trailer, the method comprising:

receiving, at data processing hardware, an image from a camera located on a rear portion of a towing vehicle and in communication with the data processing hardware;

determining, by the data processing hardware, a region of interest within the image, the region of interest comprising a representation of the trailer coupler;

determining, by the data processing hardware, a camera plane in which the camera is moving based on the received image;

determining, by the data processing hardware, a road plane based on the received image;

determining, by the data processing hardware, a three-dimensional (3D) point cloud representing objects inside the region of interest and within the camera plane and the road plane;

receiving, at the data processing hardware, sensor data from at least one of a wheel encoder, acceleration and wheel angle sensors, and an inertial measurement unit in communication with the data processing hardware;

determining, at the data processing hardware, a coupler position of the trailer coupler based on the 3D point cloud and the sensor data, the coupler position being in real-world coordinates; and

sending instructions from the data processing hardware to a drive system to autonomously drive the towing vehicle along a path in a rearward direction toward the coupler position.

2. The method of claim 1, wherein determining the region of interest within the image comprises:

sending instructions from the data processing hardware to a display to display the received image; and

receiving, at the data processing hardware, a user selection of the region of interest.

3. The method of claim 1, further comprising:

projecting, by the data processing hardware, points associated with the 3D point cloud onto the camera plane or the road plane;

determining, by the data processing hardware, a distance between each point and the camera by:

determining a distance between each point and a camera center when points associated with the 3D point cloud are projected onto the camera plane; and

determining a distance between each point and a projection of a camera center on the road plane when points associated with the 3D point cloud are projected onto the road plane; and

determining, by the data processing hardware, a shortest distance based on the determined distances, a projection of the 3D point associated with the shortest distance on the received image representing a coupler pixel location within the image, wherein the coupler position is based on the coupler pixel location.

4. The method of claim 3, further comprising:

determining, by the data processing hardware, a coupler height based on a distance between the 3D point associated with the shortest distance and the road plane, wherein the coupler position comprises the coupler height.

5. The method of claim 1, further comprising:

determining, by the data processing hardware, a first distance between the trailer coupler and the camera based on the 3D point cloud; and

determining, by the data processing hardware, a second distance between the trailer coupler and a vehicle hitch ball based on the first distance minus a longitudinal distance between the camera and the vehicle hitch ball;

wherein the path is based on the second distance.

6. The method of claim 1, wherein determining the point cloud of the region of interest comprises performing one of a Visual Odometry (VO) algorithm, a simultaneous localization and mapping (SLAM) algorithm, and a structure from motion (SfM) algorithm.

7. The method of claim 1, wherein determining a camera plane comprises:

determining, by the data processing hardware, at least three three-dimensional positions of the rear camera from the received images; and

determining, by the data processing hardware, the camera plane based on the at least three three-dimensional positions.

8. The method of claim 1, wherein determining a road plane comprises:

determining a height of the camera from the road supporting the towing vehicle; and

displacing the camera plane by the height of the camera.

9. The method of claim 1, wherein determining a road plane comprises:

extracting, by the data processing hardware, at least three feature points associated with a road from the image;

associating, by the data processing hardware, a point in the 3D point cloud with each feature point; and

determining, by the data processing hardware, the road plane based on the at least three points in the 3D point cloud associated with the at least three feature points.

10. The method of claim 9, wherein determining a camera plane comprises:

determining, by the data processing hardware, a height of the camera from the road; and

displacing, by the data processing hardware, the road plane by the height of the camera.

11. A system for detecting and locating a trailer coupler of a trailer, the system comprising:

data processing hardware; and

memory hardware in communication with the data processing hardware, the memory hardware storing instructions that, when executed on the data processing hardware, cause the data processing hardware to perform operations comprising:

receiving one or more images from a camera located on a rear portion of the towing vehicle and in communication with the data processing hardware;

determining a region of interest within the one or more images, the region of interest comprising a representation of the trailer coupler;

determining a camera plane in which the camera moves based on the received image;

determining a road plane based on the received image;

determining a three-dimensional (3D) point cloud representing objects inside the region of interest and within the camera plane and the road plane;

receiving sensor data from at least one of a wheel encoder, acceleration and wheel angle sensors, and an inertial measurement unit in communication with the data processing hardware;

determining a coupler position of the trailer coupler based on the 3D point cloud and the sensor data, the coupler position being in real-world coordinates; and

sending a command to a drive system to autonomously drive the towing vehicle toward the coupler position along a path in a rearward direction.

12. The system of claim 11, wherein determining the region of interest within the image comprises:

sending an instruction to a display, thereby causing the received image to be displayed; and

receiving a user selection of the region of interest.

13. The system of claim 11, wherein the operations further comprise:

projecting points associated with the 3D point cloud onto a camera plane or a road plane;

determining a distance between each point and the camera by:

determining a distance between each point and a camera center when points associated with the 3D point cloud are projected onto the camera plane; and

determining a distance between each point and a projection of a camera center on the road plane when points associated with the 3D point cloud are projected onto the road plane; and

determining a shortest distance based on the determined distances, a projection of the 3D point associated with the shortest distance on the received image representing a coupler pixel position within the image, wherein the coupler position is based on the coupler pixel position.

14. The system of claim 13, wherein the operations further comprise:

determining a coupler height based on a distance between the 3D point associated with the shortest distance and the road plane, wherein the coupler position comprises the coupler height.

15. The system of claim 11, wherein the operations further comprise:

determining a first distance between the trailer coupler and the camera based on the 3D point cloud; and

determining a second distance between the trailer coupler and a vehicle hitch ball based on the first distance minus a longitudinal distance between the camera and the vehicle hitch ball;

wherein the path is based on the second distance.

16. The system of claim 11, wherein determining the 3D point cloud of the region of interest comprises performing one of a Visual Odometry (VO) algorithm, a simultaneous localization and mapping (SLAM) algorithm, and a structure from motion (SfM) algorithm.

17. The system of claim 11, wherein determining a camera plane comprises:

determining at least three three-dimensional positions of the rear camera from the received images; and

determining the camera plane based on the at least three three-dimensional positions.

18. The system of claim 11, wherein determining a road plane comprises:

determining a height of the camera from the road supporting the towing vehicle; and

displacing the camera plane by the height of the camera.

19. The system of claim 11, wherein determining a road plane comprises:

extracting at least three feature points associated with a road from the image;

associating a point in the 3D point cloud with each feature point; and

determining a road plane based on at least three points in the 3D point cloud associated with the at least three feature points.

20. The system of claim 19, wherein determining a camera plane comprises:

determining a height of a camera from the road; and

displacing the road plane by the height of the camera.

Technical Field

The present disclosure relates to a method and apparatus for real-time coupler positioning and tracking.

Background

Trailers are typically unpowered vehicles pulled by powered towing vehicles. A trailer may be, among others, a utility trailer, a pop-up camper, a travel trailer, a livestock trailer, a flatbed trailer, an enclosed car hauler, or a boat trailer. The towing vehicle may be an automobile, a crossover vehicle, a truck, a van, a sport utility vehicle (SUV), a recreational vehicle (RV), or any other vehicle configured to attach to and pull a trailer. A trailer hitch may be used to attach the trailer to the powered vehicle. A receiver hitch mounts on the towing vehicle and connects to the trailer hitch to form a connection. The trailer hitch may be a ball-and-socket joint, a fifth wheel and gooseneck, or a trailer jack. Other attachment mechanisms may also be used. In addition to the mechanical connection between the trailer and the powered vehicle, in some examples, the trailer is also electrically connected to the towing vehicle. The electrical connection allows the trailer to be fed from the taillight circuit of the powered vehicle, so that the trailer's taillights, turn signals, and brake lights are synchronized with the lights of the powered vehicle.

Recent advances in sensor technology have led to improved safety systems for vehicles. Accordingly, it is desirable to provide a system capable of identifying and locating, in real time, the coupler of a trailer positioned behind a towing vehicle, so that the towing vehicle can autonomously maneuver toward the trailer for automatic hitching.

Disclosure of Invention

One aspect of the present disclosure provides a method for detecting and locating a trailer coupler of a trailer. The method includes receiving, at data processing hardware, an image from a camera located on a rear portion of a towing vehicle and in communication with the data processing hardware. The method also includes determining, by the data processing hardware, a region of interest within the image. The region of interest includes a representation of the trailer coupler. The method also includes determining, by the data processing hardware, a camera plane in which the camera is moving based on the received image. In addition, the method includes determining, by the data processing hardware, a road plane based on the received image. The method also includes determining, by the data processing hardware, a three-dimensional (3D) point cloud representing objects inside the region of interest and within the camera plane and the road plane. The method includes receiving, at the data processing hardware, sensor data from at least one of a wheel encoder, acceleration and wheel angle sensors, and an inertial measurement unit in communication with the data processing hardware. The method includes determining, at the data processing hardware, a coupler position of the trailer coupler based on the 3D point cloud and the sensor data. The coupler position is in real-world coordinates. Additionally, the method includes sending instructions from the data processing hardware to a drive system to autonomously drive the towing vehicle along a path in a rearward direction toward the coupler position.

Implementations of the disclosure may include one or more of the following optional features. In some implementations, determining a region of interest within an image includes: sending instructions from the data processing hardware to the display to display the received image; and receiving, at the data processing hardware, a user selection of a region of interest.

In some examples, the method further includes projecting, by the data processing hardware, the points associated with the 3D point cloud onto the camera plane or the road plane. The method may include determining, by the data processing hardware, the distance between each point and the camera. When the points associated with the 3D point cloud are projected onto the camera plane, the method includes determining the distance between each point and the camera center. When the points associated with the 3D point cloud are projected onto the road plane, the method includes determining the distance between each point and the projection of the camera center onto the road plane. The method may further include determining, by the data processing hardware, a shortest distance based on the determined distances, where the projection of the 3D point associated with the shortest distance on the received image represents a coupler pixel location within the image. The coupler position is based on the coupler pixel location.

In some examples, the method further includes determining, by the data processing hardware, a coupler height based on the distance between the 3D point associated with the shortest distance and the road plane. The coupler position includes the coupler height.

The method may further include: determining, by the data processing hardware, a first distance between the trailer coupler and the camera based on the 3D point cloud; and determining, by the data processing hardware, a second distance between the trailer coupler and the vehicle hitch ball based on the first distance minus a longitudinal distance between the camera and the vehicle hitch ball. The path is based on the second distance.

In some implementations, determining the point cloud of the region of interest includes performing one of a Visual Odometry (VO) algorithm, a simultaneous localization and mapping (SLAM) algorithm, and a structure from motion (SfM) algorithm.

Determining the camera plane may include: determining, by the data processing hardware, at least three three-dimensional positions of the rear camera from the received images; and determining, by the data processing hardware, the camera plane based on the at least three positions. In some examples, determining the road plane includes: determining the height of the camera from the road supporting the towing vehicle; and shifting the camera plane by the height of the camera.

In some implementations, determining the road plane includes: extracting, by the data processing hardware, at least three feature points associated with a road from the image; and associating, by the data processing hardware, a point in the 3D point cloud with each feature point. Additionally, determining the road plane may include determining, by the data processing hardware, the road plane based on the at least three points in the 3D point cloud associated with the at least three feature points. In some examples, determining the camera plane includes: determining, by the data processing hardware, the height of the camera from the road; and shifting, by the data processing hardware, the road plane by the height of the camera.

Another aspect of the present disclosure provides a system for detecting and locating a trailer coupler of a trailer. The system comprises: data processing hardware; and memory hardware in communication with the data processing hardware. The memory hardware stores instructions that, when executed on the data processing hardware, cause the data processing hardware to perform operations comprising the methods described above.

The details of one or more implementations of the disclosure are set forth in the accompanying drawings and the description below. Other aspects, features, and advantages will be apparent from the description and drawings, and from the claims.

Drawings

FIG. 1 is a schematic top view of an exemplary towing vehicle positioned in front of a trailer.

FIG. 2 is a schematic illustration of the exemplary towing vehicle shown in FIG. 1.

FIG. 3 is a schematic side view of the exemplary towing vehicle and selected trailer of FIG. 1.

FIG. 4A is a perspective view of a towing vehicle and trailer showing a captured image and a region of interest.

FIG. 4B is a perspective view of a semi-dense or dense point cloud for a region of interest within a captured image.

FIG. 5A is a perspective view of the towing vehicle and trailer showing the captured image, the region of interest, and the minimal region of interest.

FIG. 5B is a perspective view of the towing vehicle and trailer showing the captured image, the region of interest, and the minimal region of interest.

FIG. 6 is a schematic illustration of an exemplary arrangement for detecting and locating the coupler of a trailer hitch associated with a trailer behind a towing vehicle.

Like reference symbols in the various drawings indicate like elements.

Detailed Description

Referring to FIGS. 1 and 2, a towing vehicle 100, such as, but not limited to, an automobile, a crossover vehicle, a truck, a van, a sport utility vehicle (SUV), or a recreational vehicle (RV), may be configured to hitch to and tow a trailer 200. The towing vehicle 100 connects to the trailer 200 by way of a towing vehicle hitch 120 having a vehicle hitch ball 122 that connects to a trailer hitch 210 having a trailer coupler 212. It is desirable to have a towing vehicle 100 capable of autonomously backing up toward a trailer 200 identified from one or more representations 136, 136a-c of trailers 200, 200a-c displayed on a user interface 130, such as a user display 132. Additionally, it is desirable to have a coupler position estimation and tracking system 160, supported by the towing vehicle 100, that executes algorithms to track and estimate the position of the coupler 212 associated with the trailer 200 in real time. The coupler position estimation and tracking system 160 thus automates the hitching process of the towing vehicle 100 to the trailer 200. The coupler position estimation and tracking system 160 may use a single camera 142a and at least one of the following sensors: a wheel encoder 144, acceleration and wheel angle sensors 146, and an inertial measurement unit (IMU) 148, to determine the position of the coupler 212 in pixel coordinates within the image 143 as well as the coupler position in the three-dimensional (3D) world.

Referring to FIGS. 1-5, in some implementations, a driver of the towing vehicle 100 wants to tow a trailer 200 located behind the towing vehicle 100. The towing vehicle 100 may be configured to receive an indication of a driver selection 134 associated with a representation of a selected trailer 200, 200a-c. In some examples, the driver maneuvers the towing vehicle 100 toward the selected trailer 200, 200a-c, while in other examples, the towing vehicle 100 drives autonomously toward the selected trailer 200, 200a-c. The towing vehicle 100 may include a drive system 110 that maneuvers the towing vehicle 100 across the road surface 10 based on, for example, drive commands having x, y, and z components. As shown, the drive system 110 includes a front right wheel 112, 112a, a front left wheel 112, 112b, a rear right wheel 112, 112c, and a rear left wheel 112, 112d. The drive system 110 may include other wheel configurations as well. The drive system 110 may further include: a brake system 114 that includes a brake associated with each wheel 112, 112a-d; and an acceleration system 116 configured to adjust the speed and direction of the towing vehicle 100. Additionally, the drive system 110 may include a suspension system 118 that includes tires, tire air, springs, shock absorbers, and linkages that connect the towing vehicle 100 to its wheels 112, 112a-d and allow relative motion between the towing vehicle 100 and the wheels 112, 112a-d. The suspension system 118 may be configured to adjust the height of the towing vehicle 100 so that the towing vehicle hitch 120 (e.g., the vehicle hitch ball 122) aligns with the trailer hitch 210 (e.g., the trailer coupler 212), which allows an autonomous connection between the towing vehicle 100 and the trailer 200.

The towing vehicle 100 can move across the road surface by various combinations of movement relative to three mutually perpendicular axes defined by the towing vehicle 100: a lateral axis X, a fore-aft axis Y, and a central vertical axis Z. The lateral axis X extends between the right and left sides of the towing vehicle 100. The forward driving direction along the fore-aft axis Y is designated F, also referred to as forward motion. The rearward driving direction along the fore-aft axis Y is designated R, also referred to as rearward motion. When the suspension system 118 adjusts the suspension of the towing vehicle 100, the towing vehicle 100 may tilt about the X and/or Y axes or move along the central vertical axis Z.

The towing vehicle 100 may include a user interface 130. The user interface 130 receives one or more user commands from the driver via one or more input mechanisms or a screen display 132 (e.g., a touch screen display) and/or displays one or more notifications to the driver. The user interface 130 communicates with a vehicle controller 150, which in turn communicates with a sensor system 140. In some examples, the user interface 130 displays an image of the environment of the towing vehicle 100, allowing the user interface 130 to receive one or more commands (from the driver) that initiate the execution of one or more behaviors. In some examples, the user display 132 displays one or more representations 136, 136a-c of trailers 200, 200a-c located behind the towing vehicle 100. In this case, the driver selects a representation 136, 136a-c of a trailer 200, 200a-c, causing the controller 150 to execute the coupler position estimation and tracking system 160 for the trailer 200, 200a-c of the selected representation 136, 136a-c. In some examples, where the user display 132 displays a single representation 136, 136a-c of a trailer 200, 200a-c located behind the towing vehicle 100, the controller 150 may execute the coupler position estimation and tracking system 160 for that trailer automatically, or upon an indication from the driver to autonomously attach to the trailer 200, 200a-c. The vehicle controller 150 includes a computing device (or processor or data processing hardware) 152 (e.g., a central processing unit having one or more computing processors) in communication with non-transitory memory 154 (e.g., a hard disk, flash memory, random-access memory, or memory hardware) capable of storing instructions executable on the computing processor(s) 152.

The towing vehicle 100 may include a sensor system 140 to provide reliable and robust driving. The sensor system 140 may include different types of sensors that may be used alone or in conjunction with one another to create a perception of the environment of the towing vehicle 100. The perception of the environment is used to assist the driver in making informed decisions based on objects and obstacles detected by the sensor system 140 or during autonomous driving of the towing vehicle 100. The sensor system 140 may include one or more cameras 142. In some implementations, the towing vehicle 100 includes a rear camera 142a, the rear camera 142a being mounted to provide an image 143, the image 143 having a view of a rearward driving path of the towing vehicle 100. The rear camera 142a may include a fisheye lens, including an ultra-wide angle lens that produces strong visual distortion intended to create a wide panoramic or hemispherical image 143. The fisheye camera captures an image 143 with a very wide angle view. Furthermore, the image 143 captured by the fisheye camera has a characteristic convex non-rectilinear appearance. Other types of cameras may also be used to capture images 143 of the rearward driving path of the towing vehicle 100.

In some examples, the sensor system 140 also includes one or more wheel encoders 144 associated with one or more wheels 112, 112a-d of the towing vehicle 100. The wheel encoder 144 is an electromechanical device that converts the angular position or motion of the wheel into an analog or digital output signal. Thus, the wheel encoder 144 determines the speed and distance that the wheels 112, 112a-d have traveled.
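For illustration only (this sketch is not part of the patent disclosure), the following Python snippet shows the encoder-to-odometry conversion described above; the encoder resolution, wheel radius, and function name are assumed values.

```python
import math

def wheel_odometry(tick_delta: int, dt: float,
                   ticks_per_rev: int = 1024,    # assumed encoder resolution
                   wheel_radius_m: float = 0.35  # assumed wheel radius (m)
                   ) -> tuple[float, float]:
    """Convert a wheel-encoder tick count over an interval dt (seconds)
    into distance traveled (m) and speed (m/s)."""
    revolutions = tick_delta / ticks_per_rev
    distance_m = revolutions * 2.0 * math.pi * wheel_radius_m
    speed_mps = distance_m / dt if dt > 0 else 0.0
    return distance_m, speed_mps

# Example: 512 ticks observed in 0.1 s -> half a wheel revolution.
print(wheel_odometry(512, 0.1))
```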

The sensor system 140 may also include one or more acceleration and wheel angle sensors 146 associated with the towing vehicle 100. The acceleration and wheel angle sensors 146 determine the acceleration of the towing vehicle 100 in the directions of the lateral axis X and the fore-aft axis Y.

The sensor system 140 may also include an IMU (inertial measurement unit) 148, the IMU 148 configured to measure linear acceleration (using one or more accelerometers) and rate of rotation (using one or more gyroscopes) of the towing vehicle. In some examples, the IMU 148 also determines a heading reference of the towing vehicle 100. Thus, the IMU 148 determines the pitch (pitch), roll (roll), and yaw (yaw) of the towing vehicle 100.

The sensor system 140 may include other sensors such as, but not limited to, radar, sonar, LIDAR (light detection and ranging, which may entail optical remote sensing that measures properties of scattered light to find the range and/or other information of a distant target), LADAR (laser detection and ranging), ultrasonic sensors, stereo cameras, etc. The wheel encoders 144, the acceleration and wheel angle sensors 146, the IMU 148, and any other sensors output sensor data 145 to the controller 150, i.e., to the coupler position estimation and tracking system 160.

The vehicle controller 150 executes a coupler position estimation and tracking system 160, which receives the images 143 from the rear camera 142a and the sensor data 145 from at least one of the other sensors 144, 146, 148, and, based on the received data, determines the position of the trailer 200, specifically the coupler position LTC of the coupler 212 associated with the trailer 200. For example, the trailers 200, 200a-c are identified by the driver via the user interface 130. More specifically, the coupler position estimation and tracking system 160 determines the pixel position of the coupler 212 within the received image(s) 143. In addition, the coupler position estimation and tracking system 160 determines the 3D position LTC of the coupler 212 in a three-dimensional (3D) coordinate system or in a global coordinate system. In some examples, the coupler position estimation and tracking system 160 also determines a coupler height HTC of the coupler 212 relative to the road plane 10 in the 3D coordinate system and in the global coordinate system. The coupler position estimation and tracking system 160 includes an iterative algorithm that automates the hitching and alignment process for the towing vehicle 100 and the trailer 200.

The coupler position estimation and tracking system 160 receives the images 143 from the rear camera 142a. Because the coupler position estimation and tracking system 160 analyzes the whole sequence of images 143 received from the camera 142a, rather than only one or two images 143, its determination of the coupler position LTC of the coupler 212 is more robust.

In some implementations, the coupler position estimation and tracking system 160 instructs the user interface 130 to display the received image 143 on the display 132 and solicits from the user a selection of a region of interest (ROI) 300 within the displayed image 143 (FIGS. 4A and 4B). The ROI 300 is a bounding box that includes the coupler 212. In other examples, the coupler position estimation and tracking system 160 may include a coupler identification algorithm that identifies the coupler 212 within the image 143 and bounds the coupler 212 with a bounding box that serves as the ROI 300.

The coupler position estimation and tracking system 160 generates a semi-dense/dense point cloud of the object (e.g., the coupler 212) within the ROI 300 (FIG. 4B). A point cloud is a set of data points in 3D space; more specifically, the point cloud includes a plurality of points on the outer surface of the object.

The coupler position estimation and tracking system 160 may use one or more techniques to locate the coupler 212 in the point cloud 400. Some of these techniques include, but are not limited to: visual odometry (VO), simultaneous localization and mapping (SLAM), and structure from motion (SfM). The VO, SLAM, and SfM frameworks rest on well-established theory and allow real-time localization of the towing vehicle 100 in a self-generated 3D point cloud map. VO is a method for determining the position and orientation of the trailer 200, the camera 142a, the coupler 212, or the tow bar 214 by analyzing the images 143 received from the camera 142a. The VO method may extract image feature points and track them through the image sequence. Examples of feature points may include, but are not limited to: edges, corners, or blobs on the trailer 200, the coupler 212, or the tow bar 214. The VO method may also directly use the pixel intensities in the image sequence as visual input. SLAM methods construct or update a map of an unknown environment while simultaneously keeping track of one or more targets. In other words, the SLAM method uses the received images 143 as the only source of external information to establish the position and orientation of the towing vehicle 100 and the camera 142a while constructing a representation of the objects in the ROI 300. The SfM method estimates the 3D structure of the objects in the ROI 300 based on the received images 143 (i.e., 2D images). The SfM method may estimate the pose of the camera 142a and of the towing vehicle 100 based on the sequence of images 143 captured by the camera 142a.

In some implementations, the coupler position estimation and tracking system 160 is initialized before the VO, SLAM, or SfM method executes. In a first method of initialization, the coupler position estimation and tracking system 160 sends an instruction or command 190 to the drive system 110 that causes the drive system 110 to move the towing vehicle 100 in a straight direction along the fore-aft axis Y (e.g., in the forward driving direction F or the rearward driving direction R) for a predetermined distance. In some examples, the predetermined distance is a few centimeters. The predetermined distance may be between 5 cm and 50 cm. The forward F and rearward R driving movements along the fore-aft axis Y cause SLAM or SfM to initialize. In addition, during the forward F and rearward R driving movements, the coupler position estimation and tracking system 160 executes a tracker algorithm to update the ROI 300 within the image 143 that was provided by the driver or determined by the coupler position estimation and tracking system 160. As the towing vehicle 100 moves in the rearward direction R, the perspective and dimensions of the trailer 200, the tow bar 214, and the coupler 212 change in the image 143. Accordingly, the tracker algorithm updates the ROI 300 based on the new images 143 received from the camera 142a during the forward F and rearward R driving movements along the fore-aft axis Y. The ROI 300 includes the coupler 212, so feature points or pixel intensities in the ROI 300 are tracked by the coupler position estimation and tracking system 160. Because the coupler position estimation and tracking system 160 analyzes only the ROI 300 portion of the image 143, the ROI 300 serves to filter out objects in the image 143 that are not the coupler 212. In some examples, the coupler position estimation and tracking system 160 constructs a visual tracker for the coupler 212 by identifying two-dimensional (2D) feature points in the ROI 300. The coupler position estimation and tracking system 160 then identifies the 3D points within the point cloud map that correspond to the identified 2D feature points. Thus, at each iteration of the tracking algorithm (executed by the coupler position estimation and tracking system 160), the coupler position estimation and tracking system 160 projects the selected cloud points 402 onto the 2D camera image 143. The coupler position estimation and tracking system 160 then constructs a minimum ROI 340 that includes the projected 2D points. In this way, the coupler position estimation and tracking system 160 updates the ROI 300 while the towing vehicle moves and generates a minimal ROI 340 containing the previously selected cloud points 402.
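The minimum-ROI update lends itself to a short sketch. The following Python fragment (illustrative only, not part of the disclosure) assumes an ideal pinhole camera rather than the fisheye model of camera 142a; the intrinsic matrix, pose, cloud points, and the name `update_min_roi` are all made up for the example.

```python
import numpy as np

def update_min_roi(points_3d, K, R, t):
    """Project tracked 3D cloud points into the current camera image and
    return the minimal axis-aligned box (the min ROI) enclosing them.

    points_3d: (N, 3) world points; K: (3, 3) intrinsics;
    R, t: world-to-camera rotation (3, 3) and translation (3,)."""
    pts_cam = (R @ points_3d.T).T + t      # world -> camera frame
    uv = (K @ pts_cam.T).T                 # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]            # normalize by depth
    (u_min, v_min), (u_max, v_max) = uv.min(axis=0), uv.max(axis=0)
    return u_min, v_min, u_max, v_max

# Toy example: identity pose and simple intrinsics (assumed values).
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
cloud = np.array([[0.10, 0.00, 2.0], [0.30, 0.10, 2.2], [-0.10, 0.05, 1.9]])
print(update_min_roi(cloud, K, np.eye(3), np.zeros(3)))
```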

In some implementations, the coupler position estimation and tracking system 160 may instead be initialized by sending instructions 190 to the drive system 110 that cause the drive system 110 to move the towing vehicle 100 a predetermined distance toward the center of the ROI 300. In some examples, the predetermined distance is a few centimeters, e.g., 5 to 50 centimeters. In this case, the coupler position estimation and tracking system 160 updates the ROI 300 during the maneuver of the towing vehicle 100 based on the received images 143 or image sequence.

In some implementations, the coupler position estimation and tracking system 160 determines the scale of the 3D point cloud map. A 3D point cloud map generated using only a monocular camera suffers from scale ambiguity, i.e., such a map can be recovered only up to scale. If the scale of the map is not known to the coupler position estimation and tracking system 160, the coupler position estimation and tracking system 160 may determine it by fusing the VO, SLAM, or SfM algorithm with the vehicle sensor data 145. In another example, the coupler position estimation and tracking system 160 determines the scale of the map based on the road plane 320 in the 3D point cloud map 400. The coupler position estimation and tracking system 160 determines the distance from the camera position to the road plane 320 in the map 400. The scale of the map 400 is given by the height of the camera 142a (from the camera data 141) divided by the calculated distance from the camera position in the map 400 to the road plane 320. The 3D point cloud map represents the structure of the environment without providing detailed information about distances within the map 400. Determining the scale of the map 400 therefore supplies the distance information, which allows the coupler position estimation and tracking system 160 to determine the position of the coupler 212 in world coordinates.
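The scale computation reduces to a single division. The sketch below (illustrative only, not part of the disclosure) assumes the road plane has already been identified in the unscaled map; all numeric values are assumptions.

```python
import numpy as np

def map_scale(camera_height_m: float,
              camera_pos_map: np.ndarray,
              road_plane_normal: np.ndarray,
              road_plane_point: np.ndarray) -> float:
    """Scale of a monocular map: the known physical camera height divided by
    the (unitless) distance from the map's camera position to the road plane."""
    n = road_plane_normal / np.linalg.norm(road_plane_normal)
    dist_in_map = abs(np.dot(camera_pos_map - road_plane_point, n))
    return camera_height_m / dist_in_map

# Example: camera mounted 1.0 m above the road; the unscaled map places the
# camera 0.25 map-units above the reconstructed road plane -> 4.0 m per unit.
print(map_scale(1.0, np.array([0., 0., 0.25]),
                np.array([0., 0., 1.]), np.zeros(3)))
```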

The coupler position estimation and tracking system 160 includes a plane determination module 162 configured to determine a camera plane 310 and a road plane 320. In some implementations, the plane determination module 162 determines the camera plane 310 along which the camera 142a moves, as well as the road plane 320. To determine the camera plane 310, the plane determination module 162 uses at least three previous 3D positions of the camera 142a, received from the camera 142a as camera data 141. The camera data 141 may include intrinsic parameters (e.g., focal length, image sensor format, and principal point) and extrinsic parameters (e.g., the coordinate-system transformation from 3D world coordinates to 3D camera coordinates; in other words, the extrinsic parameters define the position of the camera center and the heading of the camera in world coordinates). Additionally, the camera data 141 may include a minimum/maximum/average height of the camera 142a relative to the ground (e.g., when the vehicle is loaded and unloaded) and the longitudinal distance between the camera 142a and the vehicle hitch ball 122. The plane determination module 162 determines the camera plane 310 based on three points among the at least three previous 3D positions of the camera 142a. In some examples, the coupler position estimation and tracking system 160 determines the road plane 320 based on the camera plane 310. In some implementations, since the road plane 320 is a shift of the camera plane 310 by the height of the camera 142a from the ground (provided in the camera data 141), the plane determination module 162 determines the road plane 320 based on the camera plane 310 and the camera data 141. The converse approach, described below, is useful when the three 3D points used to determine the camera plane 310 are collinear, in which case there is an infinite number of camera planes 310 coplanar with the line given by these 3D points.
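For illustration (not part of the disclosure), a minimal sketch of fitting a plane to three previous camera positions and shifting it by the camera height to obtain the road plane; the positions and height are assumed values, and collinear points are rejected, consistent with the remark above.

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Plane through three 3D points, returned as (unit normal n, offset d)
    so that the plane satisfies n . x = d. Collinear points are rejected,
    since they define infinitely many planes."""
    n = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        raise ValueError("points are collinear; the plane is not unique")
    n = n / norm
    return n, float(np.dot(n, p1))

# Three previous 3D camera positions (assumed values, in meters).
p1, p2, p3 = (np.array([0.0, 0.0, 1.0]), np.array([0.5, 0.1, 1.0]),
              np.array([1.0, -0.1, 1.0]))
n, d = plane_from_points(p1, p2, p3)
if n[2] < 0:                 # orient the normal upward for a consistent shift
    n, d = -n, -d
camera_height = 1.0          # assumed camera mounting height above the road
road_n, road_d = n, d - camera_height  # road plane = camera plane shifted down
print(n, d, road_n, road_d)            # camera plane z = 1, road plane z = 0
```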

To determine the road plane 320 directly, the plane determination module 162 extracts at least three feature points associated with the road from a captured 2D image 143. Subsequently, the coupler position estimation and tracking system 160 determines the 3D positions of the three feature points within the point cloud 400, and then the coupler position estimation and tracking system 160 computes the road plane 320 based on the three feature points. In some examples, the coupler position estimation and tracking system 160 determines the camera plane 310 based on the road plane 320. In some implementations, since the camera plane 310 is a shift of the road plane 320 by the height of the camera 142a from the ground (provided in the camera data 141), the coupler position estimation and tracking system 160 determines the camera plane 310 based on the road plane 320 and the camera data 141.

As the towing vehicle 100 autonomously moves in the rearward direction R, the plane determination module 162 may determine and update the planes 310, 320 in real time; alternatively, if the plane determination module 162 determines that the road is flat, the coupler position estimation and tracking system 160 may determine the planes 310, 320 only once. The methods above use three points to determine the camera plane 310 or the road plane 320. However, in some examples, the plane determination module 162 may rely on more than three points to determine the planes 310, 320. In this case, the coupler position estimation and tracking system 160 determines the planes 310, 320 using a least-squares method, a random sample consensus (RANSAC) method, a support vector machine (SVM) method, or any variant of these algorithms. By using more than three points to determine the planes 310, 320, the plane determination module 162 increases robustness to outliers.
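As one possible realization of the more-than-three-point fitting mentioned above (illustrative only; the disclosure names least squares, RANSAC, and SVM without prescribing an implementation), the following sketch fits a plane with a simplified RANSAC loop followed by a least-squares refinement; the tolerance and the synthetic data are assumptions.

```python
import numpy as np

def ransac_plane(points: np.ndarray, iters: int = 200,
                 inlier_tol: float = 0.02, rng=np.random.default_rng(0)):
    """Robust plane fit over many 3D points. Returns (n, d) with n . x = d."""
    best_inliers = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = np.dot(n, sample[0])
        inliers = np.abs(points @ n - d) < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine with a least-squares fit over the inliers (centroid + SVD).
    pts = points[best_inliers]
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, float(np.dot(n, centroid))

# Noisy road-like points near the plane z = 0 (synthetic, assumed data).
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-2, 2, 300), rng.uniform(-2, 2, 300),
                       rng.normal(0, 0.01, 300)])
print(ransac_plane(pts))
```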

The coupler position estimation and tracking system 160 includes a point cloud reduction module 164 configured to reduce the size of the ROI 300. In some implementations, the coupler position estimation and tracking system 160 selects the 3D cloud points 402 that correspond to 2D points contained in the ROI 300 of the image 143. The coupler position estimation and tracking system 160 then keeps only the selected 3D cloud points 402 lying between the two planes (the road plane 320 and the camera plane 310). The selected 3D cloud points 402 between the two planes 310, 320 are denoted as the set M.
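A minimal sketch of the filtering that yields the set M (illustrative only, not part of the disclosure), under the assumption that the camera plane and road plane are parallel with a shared unit normal, as when one is a shifted copy of the other; the sample cloud is made up.

```python
import numpy as np

def points_between_planes(cloud: np.ndarray, n: np.ndarray,
                          d_road: float, d_cam: float) -> np.ndarray:
    """Keep cloud points lying between the road plane (n . x = d_road) and
    the camera plane (n . x = d_cam). The result corresponds to the set M."""
    s = cloud @ n                     # signed offset of each point along n
    lo, hi = sorted((d_road, d_cam))
    return cloud[(s > lo) & (s < hi)]

# Example: upward normal, road at z = 0, camera plane at z = 1 (assumed).
cloud = np.array([[0.5, 2.0, 0.4],    # candidate coupler point -> kept
                  [0.2, 1.5, -0.1],   # below the road plane -> filtered out
                  [0.0, 3.0, 1.6]])   # above the camera plane -> filtered out
print(points_between_planes(cloud, np.array([0., 0., 1.]), 0.0, 1.0))
```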

In some examples, the coupler position estimation and tracking system 160 projects the points in M (i.e., the selected 3D cloud points 402 between the two planes 310, 320) onto the camera plane 310 or the road plane 320. The projected points are denoted as the set J. Subsequently, the coupler position estimation and tracking system 160 determines the distance from each point within the set J to the center of the camera 142a.
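A minimal sketch of the projection that yields the set J and of the distance computation (illustrative only, not part of the disclosure); the plane, reference point, and data values are assumptions.

```python
import numpy as np

def project_to_plane(points: np.ndarray, n: np.ndarray, d: float) -> np.ndarray:
    """Orthogonally project 3D points onto the plane n . x = d (|n| = 1).
    The projected set corresponds to the set J in the text."""
    return points - np.outer(points @ n - d, n)

def distances_to_reference(proj: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Distance from each projected point to the reference point: the camera
    center (camera plane case) or its projection on the road plane."""
    return np.linalg.norm(proj - ref, axis=1)

# Example reusing the set M from the previous sketch (assumed values).
m_set = np.array([[0.5, 2.0, 0.4], [0.1, 2.5, 0.6]])
n, d = np.array([0., 0., 1.]), 0.0            # road plane z = 0
j_set = project_to_plane(m_set, n, d)
cam_center_on_road = np.array([0., 0., 0.])   # camera center projected to road
print(distances_to_reference(j_set, cam_center_on_road))
```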

In some implementations, if the first method of initialization is used, the coupler position estimation and tracking system 160 updates the minimum ROI 340 by projecting the points 402 in M onto the current 2D camera image 143. The coupler position estimation and tracking system 160 then determines an updated minimum box 340 containing the projected points in the camera frame 143 (the 2D image). The coupler position estimation and tracking system 160 updates the minimum ROI 340 because, as time passes or as the towing vehicle 100 moves, the position of the 3D points 402 relative to the camera 142a also changes. Thus, by projecting the points 402 in the set M onto the image 143, the minimum ROI 340 is kept up to date.

The coupler position estimation and tracking system 160 includes a coupler detection module 166 configured to detect the coupler 212 and determine the coupler position LTC of the coupler 212. The coupler position estimation and tracking system 160 selects a point J' from the set J (i.e., the projected points): the point J' is the point from the set J having the shortest distance to the camera 142a. As mentioned previously, the set J is the projection of the points 402 in the set M onto the camera plane 310 or the road plane 320. Thus, when the set J is projected on the camera plane 310, the point J' is the point closest to the camera center (as shown in FIG. 3). However, if the set J is projected on the road plane 320, the point J' is the point closest to the projection of the camera center on the road plane 320. In some examples, if J' includes more than one point, the coupler position estimation and tracking system 160 determines the average or median of the points J'. The coupler position estimation and tracking system 160 determines the point associated with J' from the set M and projects that point onto the 2D image 143, which yields the pixel position of the coupler 212 in the image.
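A minimal sketch of selecting J', averaging any ties, and projecting the associated point from the set M back into the image to obtain the coupler pixel location (illustrative only, not part of the disclosure); a pinhole model and toy values are assumed, and `coupler_pixel_location` is a hypothetical name.

```python
import numpy as np

def coupler_pixel_location(m_set, dists, K, R, t, tol=1e-6):
    """Select J' (the point(s) of J at the shortest distance), average the
    associated 3D points from M if there are ties, and project the result
    onto the 2D image to obtain the coupler pixel location."""
    mask = dists <= dists.min() + tol       # all points tied for shortest
    p3d = m_set[mask].mean(axis=0)          # representative point from M
    pc = K @ (R @ p3d + t)                  # simplified pinhole projection
    return pc[:2] / pc[2]                   # (u, v) pixel coordinates

# Toy values: identity pose, simple intrinsics, coupler ~2 m from the camera.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
m_set = np.array([[0.1, 0.05, 2.0], [0.3, 0.10, 2.2]])
dists = np.array([2.0, 2.2])                # distances computed as for set J
print(coupler_pixel_location(m_set, dists, K, np.eye(3), np.zeros(3)))
```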

In some implementations, the coupler detection module 166 determines the coupler position as follows: given a configurable integer parameter N, the coupler position estimation and tracking system 160 selects the N points in the set J that are closest to the camera center (or to the projection of the camera center on the road plane). The coupler detection module 166 determines the average or median of this subset of N points. The points in the set M associated with this subset, projected onto the image 143, represent an estimate of the position of the trailer coupler on the image.

In some implementations, the coupler detection module 166 determines the coupler position by executing an identification algorithm that finds the coupler 212 in the point cloud 400. The identification algorithm does not attempt to find the coupler in the image 143; it finds the coupler shape in the point cloud (the 3D world). Another option that simplifies this step is to run the identification algorithm in the camera movement plane (or the road plane) using the points in the set J.

The coupler position estimation and tracking system 160 includes a distance estimation module 168 configured to determine the distance DCC between the trailer coupler 212 and the vehicle hitch ball 122 (see FIG. 3). The distance estimation module 168 determines a first distance DCJ between J' projected onto the camera movement plane 310 and the camera center (or between J' projected onto the road plane 320 and the projection of the camera center) as the minimum distance from the camera 142a. The distance estimation module 168 determines the second distance DCC between the coupler 212 and the hitch ball 122 as the first distance DCJ minus the longitudinal distance DVCC between the camera 142a and the vehicle hitch ball 122. The second distance DCC indicates the distance between the trailer coupler 212 and the vehicle hitch ball 122.
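The distance estimation reduces to a subtraction, illustrated below with assumed values (not part of the disclosure).

```python
def coupler_to_hitch_distance(d_cj: float, d_vcc: float) -> float:
    """DCC = DCJ - DVCC: the coupler-to-hitch-ball distance is the
    coupler-to-camera distance minus the camera-to-hitch-ball offset."""
    return d_cj - d_vcc

# Assumed values: coupler 2.06 m from the camera, hitch ball 0.45 m behind it.
print(coupler_to_hitch_distance(2.06, 0.45))  # -> 1.61 m
```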

The coupler position estimation and tracking system 160 includes a coupler height module 169 that determines the height HTC of the coupler 212 relative to the road plane 10. For example, the coupler height module 169 may determine the distance between the coupler position LTC determined by the coupler detection module 166 and the road plane 320. The coupler height module 169 may use the shortest distance between the road plane 320 and the coupler (if the coupler is represented by more than one point in the point cloud, an average point is used) to determine the coupler height HTC.
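A minimal sketch of the coupler-height computation (illustrative only, not part of the disclosure), with assumed coupler cloud points and a road plane at z = 0.

```python
import numpy as np

def coupler_height(coupler_points: np.ndarray, n: np.ndarray, d: float) -> float:
    """HTC: distance from the coupler to the road plane n . x = d (|n| = 1).
    If the coupler is represented by several cloud points, their mean is used."""
    rep = np.atleast_2d(coupler_points).mean(axis=0)
    return float(abs(rep @ n - d))

# Assumed coupler points hovering about 0.45 m over the road plane z = 0.
pts = np.array([[0.5, 2.0, 0.44], [0.52, 2.01, 0.46]])
print(coupler_height(pts, np.array([0., 0., 1.]), 0.0))
```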

Once the coupler position estimation and tracking system 160 has determined, in the global coordinate system, the coupler height HTC and the distance DCC between the trailer coupler 212 and the vehicle hitch ball 122, the coupler position estimation and tracking system 160 may instruct the path planning system 170 to initiate planning of a path. The controller 150 executes the path planning system 170. The path planning system 170 determines a path that enables the towing vehicle 100 to autonomously drive in the rearward direction R toward the trailer 200 and autonomously connect with the trailer 200.

As the towing vehicle 100 autonomously maneuvers along the planned path, the path planning system 170 continuously updates the path based on updated information continuously received from the coupler position estimation and tracking system 160 and the sensor system 140. In some examples, an object detection system identifies one or more objects along the planned path and sends data related to the position of the one or more objects to the path planning system 170. In this case, the path planning system 170 recalculates the planned path to avoid the one or more objects while also executing the predetermined maneuvers to follow the path. In some examples, the path planning system 170 determines a probability of collision and, if the probability of collision exceeds a predetermined threshold, the path planning system 170 adjusts the path.

Once the planned path is determined by the path planning system 170, the vehicle controller 150 executes a driver assistance system 180, which in turn includes path following behaviors 182. The path following behaviors 182 receive the planned path and execute one or more behaviors 182a-c that send commands 190 to the drive system 110, causing the towing vehicle 100 to autonomously drive along the planned path and thereby autonomously connect to the trailer 200.

The path following behaviors 182 include a braking behavior 182a, a speed behavior 182b, and a steering behavior 182c. In some examples, the path following behaviors 182 also include a hitch connect behavior and a suspension adjustment behavior. Each behavior 182a-182c causes the towing vehicle 100 to take an action, such as driving backward, turning at a specific angle, braking, accelerating, or decelerating, among others. The vehicle controller 150 may maneuver the towing vehicle 100 in any direction across the road surface by controlling the drive system 110, more specifically by issuing commands 190 to the drive system 110.

The braking behavior 182a may be executed to stop the towing vehicle 100 or to slow down the towing vehicle 100 based on the planned path. The braking behavior 182a sends a signal or command 190 to the drive system 110 (e.g., the brake system 114) to stop the towing vehicle 100 or to reduce its speed.

The speed behavior 182b may be executed to change the speed of the towing vehicle 100 by accelerating or decelerating based on the planned path. The speed behavior 182b sends a signal or command 190 to the brake system 114 for deceleration or to the acceleration system 116 for acceleration.

The steering behavior 182c may be executed to change the direction of the towing vehicle 100 based on the planned path. Accordingly, the steering behavior 182c sends a signal or command 190 indicating a steering angle to the acceleration system 116, causing the drive system 110 to change direction.

As previously discussed, the coupler position estimation and tracking system 160 determines the position of the trailer coupler 212 and tracks the coupler 212 in real time. The determined position is expressed both in pixels within the received image 143 and in a global frame of reference. The coupler position estimation and tracking system 160 uses distances to find the coupler 212 and is configured to filter out cloud points 402 that are not between the camera movement plane 310 and the ground plane 320. This makes the coupler position estimation and tracking system 160 feasible for real-time implementation.

Because the coupler position estimation and tracking system 160 receives the images 143 from the rear camera 142a, it requires no a priori knowledge of the size of the hitch ball 122 or of the trailer coupler 212. Additionally, the coupler position estimation and tracking system 160 does not determine the position of the coupler 212 within the image directly; instead, it determines the ROI 300 and then determines the coupler position within the 3D point cloud 400. The coupler position estimation and tracking system 160 runs on a standard CPU, with or without a GPU or graphics accelerator.

FIG. 6 provides an exemplary arrangement of operations for a method 600 of detecting and locating the coupler 212 of a trailer hitch 210 associated with a trailer 200 positioned behind the towing vehicle 100, using the system described in FIGS. 1-5.

At block 602, the method 600 includes: receiving, at the data processing hardware 152, one or more images 143 from a camera 142a located on a rear portion of the towing vehicle 100 and in communication with the data processing hardware 152. At block 604, the method 600 includes: determining, by the data processing hardware 152, a region of interest (ROI) 300 within the one or more images 143. The ROI 300 includes a representation of the trailer coupler 212. At block 606, the method 600 includes: determining, by the data processing hardware 152, the camera plane 310 in which the camera moves, based on the received images 143. At block 608, the method 600 includes: determining, by the data processing hardware 152, the road plane 320 based on the received images 143. At block 610, the method 600 includes: determining, by the data processing hardware 152, a three-dimensional (3D) point cloud 400 representing objects inside the ROI 300 and within the camera plane 310 and the road plane 320. At block 612, the method 600 includes: receiving, at the data processing hardware 152, sensor data 145 from at least one of the wheel encoder 144, the acceleration and wheel angle sensors 146, and the inertial measurement unit 148 in communication with the data processing hardware 152. At block 614, the method 600 includes: determining, at the data processing hardware 152, the coupler position LTC of the trailer coupler 212 based on the 3D point cloud 400 and the sensor data 145. The coupler position LTC is in real-world coordinates. At block 616, the method 600 includes: sending instructions 190 from the data processing hardware 152 to the drive system 110 to autonomously drive the towing vehicle 100 along a path in the rearward direction R toward the coupler position LTC.

In some implementations, determining the ROI 300 within the image 143 includes: sending instructions from the data processing hardware 152 to the display 132 to display the received image 143; and receiving, at the data processing hardware 152, a user selection 134 of the ROI 300.

The method 600 may further include: projecting, by the data processing hardware 152, the points 402 associated with the 3D point cloud 400 onto the camera plane 310 or the road plane 320. The method 600 may further include: determining, by the data processing hardware 152, the distance between each point and the camera 142a. When the points 402 associated with the 3D point cloud 400 are projected onto the camera plane 310, the method 600 includes determining the distance between each point and the center of the camera 142a. When the points 402 associated with the 3D point cloud 400 are projected onto the road plane 320, the method 600 includes determining the distance between each point and the projection of the camera center onto the road plane. The method 600 may further include: determining, by the data processing hardware 152, the shortest distance based on the determined distances. The projection, on the received image 143, of the 3D point associated with the shortest distance from the camera center represents the coupler pixel location within the image 143. The coupler position LTC is based on the coupler pixel location. In some examples, the method 600 includes: determining, by the data processing hardware 152, the coupler height HTC based on the distance between the 3D point associated with the shortest distance and the road plane 320. The coupler position LTC includes the coupler height HTC.

In some implementations, the method 600 includes: determining, by the data processing hardware 152, a first distance DCJ between the trailer coupler 212 and the camera 142a based on the 3D point cloud 400. The method 600 further includes: determining, by the data processing hardware 152, a second distance DCC between the trailer coupler 212 and the vehicle hitch ball 122 based on the first distance DCJ minus the longitudinal distance DVCC between the camera 142a and the vehicle hitch ball 122. The path is based on the second distance DCC.

In some examples, determining the 3D point cloud 400 of the ROI 300 includes performing one of a Visual Odometry (VO) algorithm, a simultaneous localization and mapping (SLAM) algorithm, and a structure from motion (SfM) algorithm.

In some implementations, determining the camera plane 310 includes: determining at least three three-dimensional positions of the rear camera 142a from the received images 143; and determining the camera plane 310 based on the at least three positions. Determining the road plane 320 may include: determining the height of the camera from the road supporting the towing vehicle; and shifting the camera plane 310 toward the road 10 by the height of the camera 142a.

In some examples, determining the road plane includes: extracting from the image 143 at least three feature points associated with the road surface 10; associating a point in the 3D point cloud 400 with each feature point; and determining the road plane based on the at least three points in the 3D point cloud associated with the at least three feature points. In some examples, determining the camera plane includes: determining the height of the camera from the road; and shifting the road plane by the height of the camera.

Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor (which may be special or general purpose) coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.

These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.

Implementations of the functional operations and subject matter described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Furthermore, the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer storage medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The terms "data processing apparatus," "computing device," and "computing processor" encompass all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.
