Vehicle positioning device


Note: This technique, Vehicle positioning device, was designed and created by 龟冈翔太 and 篠本凛 on 2019-04-04. Its main content is as follows. The present invention relates to a vehicle positioning device that is connected to a 1st sensor that outputs satellite positioning data and a 2nd sensor that detects state quantities of the vehicle and outputs them as state quantity data, and is connected to at least one of a 3rd sensor that detects land objects and outputs data on the relative relationship between a land object and the vehicle and a 4th sensor that detects road alignment and outputs road alignment data. An observation value processing unit receives at least one of the satellite positioning data, the relative relationship data, and the road alignment data and combines these data to generate actual observation values; a sensor correction unit, connected to the 2nd sensor, corrects the sensor errors contained in the state quantity data; an inertial positioning unit performs inertial positioning using the corrected sensor values from the sensor correction unit and outputs the result as an inertial positioning result; an observation value prediction unit predicts observation values using at least the inertial positioning result and outputs them as predicted observation values; and a positioning operation is performed using the predicted observation values and the actual observation values and output as the positioning result.

1. A vehicle positioning device mounted on a vehicle to position the vehicle,

wherein the vehicle positioning device is connected to a 1st sensor that outputs satellite positioning data and a 2nd sensor that detects a state quantity of the vehicle and outputs it as state quantity data, and is connected to at least one of a 3rd sensor that detects a land object and outputs data on the relative relationship between the land object and the vehicle and a 4th sensor that detects road alignment and outputs road alignment data,

the vehicle positioning device is provided with:

an observation value processing unit that receives at least one of the satellite positioning data, the relative relationship data, and the road alignment data, and combines these data to generate an actual observation value;

a sensor correction unit that is connected to the 2nd sensor and corrects a sensor error included in the state quantity data;

an inertial positioning unit that performs inertial positioning using the corrected sensor values from the sensor correction unit and outputs the result as an inertial positioning result;

an observation value prediction unit that predicts observation values using at least the inertial positioning result and outputs the result as a predicted observation value; and

a filter that performs a positioning operation using the predicted observation value and the actual observation value and outputs the result as a positioning result, and that estimates the sensor error and outputs it as a sensor correction amount,

the filter feeds back the sensor correction amount to the sensor correction section,

the sensor correction section corrects the sensor error using the sensor correction amount.

2. The vehicle positioning device according to claim 1,

wherein the vehicle positioning device is connected to the 1st sensor, the 2nd sensor, and the 3rd sensor, and is connected to a land object information storage unit in which land object information is stored,

the 3rd sensor calculates the data of the relative relationship using the detection result of the detected land object and the land object information stored in the land object information storage unit,

the observation value processing unit generates the actual observation value by combining the satellite positioning data and the data of the relative relationship,

the observation value prediction unit calculates the predicted observation value using the inertial positioning result and the land object information stored in the land object information storage unit.

3. The vehicle positioning device according to claim 1,

wherein the vehicle positioning device is connected to the 1st sensor, the 2nd sensor, and the 4th sensor, and is connected to a road information storage unit in which road information is stored,

the vehicle positioning device includes a road alignment calculation unit that calculates the road alignment used for the calculation of the predicted observation value in the observation value prediction unit,

the road alignment calculation unit calculates the road alignment using the road information stored in the road information storage unit,

the observation value processing unit generates the actual observation value by combining the satellite positioning data and the road alignment data,

the observation value prediction unit calculates the predicted observation value using the inertial positioning result and the road alignment calculated by the road alignment calculation unit.

4. The vehicle positioning device according to claim 1,

wherein the vehicle positioning device is connected to the 1st sensor, the 2nd sensor, the 3rd sensor, and the 4th sensor, and is connected to a land object information storage unit in which land object information is stored and a road information storage unit in which road information is stored,

the vehicle positioning device includes a road alignment calculation unit that calculates the road alignment used for the calculation of the predicted observation value in the observation value prediction unit,

the 3rd sensor calculates the data of the relative relationship using the detection result of the detected land object and the land object information stored in the land object information storage unit,

the observation value processing unit generates the actual observation value by combining the satellite positioning data, the data of the relative relationship, and the road alignment data,

the observation value prediction unit calculates the predicted observation value using the inertial positioning result, the land object information stored in the land object information storage unit, and the road alignment calculated by the road alignment calculation unit.

5. The vehicle positioning device according to claim 1,

wherein the filter outputs the inertial positioning result output from the inertial positioning unit as the positioning result when no data is output from the 1st sensor, the 3rd sensor, or the 4th sensor.

6. The vehicle positioning device according to claim 2 or 4,

wherein the land object information storage unit holds, as the land object information, the absolute position and the land object type of land objects actually present in the vicinity of the road,

the 3rd sensor calculates the data of the relative relationship between the detected land object and the vehicle based on the absolute position of the land object, and outputs the data in association with the land object.

7. The vehicle positioning device according to claim 1,

wherein the 1st sensor includes, as observation values, a pseudorange observation value, a Doppler observation value, and a carrier phase observation value,

the observation value processing unit uses at least one of the pseudorange observation value, the Doppler observation value, and the carrier phase observation value for the generation of the actual observation value.

8. The vehicle positioning device according to claim 1,

wherein the 3rd sensor includes, as observation values, a relative distance, a relative speed, and a relative angle to the land object,

the observation value processing unit uses at least one of the relative distance, the relative speed, and the relative angle for the generation of the actual observation value.

9. The vehicle positioning device according to claim 1,

wherein the 4th sensor includes, as observation values, a lateral position deviation, a deviation angle, a curvature, and a curvature change,

the observation value processing unit uses at least one of the lateral position deviation, the deviation angle, the curvature, and the curvature change for the generation of the actual observation value.

10. The vehicle positioning device according to claim 3 or 4,

wherein the road information storage unit includes a lane link, a node on the lane link, and a lane area as the road information,

the road alignment calculation unit uses at least one of the lane link, the node on the lane link, and the lane area for the calculation of the road alignment.

11. The vehicle positioning device according to claim 7,

wherein the observation value prediction unit predicts at least one of the pseudorange observation value, the Doppler observation value, and the carrier phase observation value as the predicted observation value.

12. The vehicle positioning device according to claim 1,

wherein the filter performs the positioning operation using the predicted observation value and the actual observation value, obtains a probability distribution of the state quantity data output from the 2nd sensor, and estimates the sensor error from the most probable state quantity.

Technical Field

The present invention relates to a vehicle positioning device that is mounted on a vehicle and positions the vehicle, and more particularly to a vehicle positioning device that is less susceptible to the radio wave conditions of satellite signals.

Background

Conventionally, a vehicle positioning device has been proposed that corrects the offset error of an inertial sensor output by using a satellite positioning result based on a GNSS (Global Navigation Satellite System) together with the positioning result of inertial positioning, thereby improving the positioning accuracy of the vehicle. Such a technique can improve the accuracy of inertial positioning by estimating the offset error of the inertial sensor in time periods in which satellites can be captured, and can maintain positioning accuracy even when no satellite can be captured, for example when the vehicle is indoors or in a tunnel. For example, paragraph [0007] of Patent Document 1 discloses a positioning technique that aims at seamless, high-precision positioning both indoors and outdoors by using an indoor GPS (Global Positioning System) in addition to satellite positioning.

Documents of the prior art

Patent document

Patent document 1: Japanese Patent Laid-Open Publication No. 2016-170124

Disclosure of Invention

In the conventional vehicle positioning technology, positioning can be continued even indoors where no satellite can be captured; on the other hand, there is the problem that positioning accuracy cannot be maintained when the satellite positioning signals are significantly degraded even though satellites can be captured. Such cases include positioning in urban areas, where radio waves from satellites are reflected by structures such as buildings so that multipath, in which the radio waves reach the GNSS antenna through a plurality of paths, is likely to occur, and positioning in mountainous areas, where the intensity of the radio waves from the satellites is reduced by trees.

The present invention has been made to solve the above-described problems, and an object thereof is to provide a vehicle positioning device that can maintain positioning accuracy and expand the situations in which positioning can be continued.

The present invention provides a vehicle positioning device mounted on a vehicle to position the vehicle, wherein the vehicle positioning device is connected to a 1st sensor that outputs satellite positioning data and a 2nd sensor that detects a state quantity of the vehicle and outputs it as state quantity data, and is connected to at least one of a 3rd sensor that detects a land object and outputs data on the relative relationship between the land object and the vehicle and a 4th sensor that detects road alignment and outputs road alignment data, and the vehicle positioning device includes: an observation value processing unit that receives at least one of the satellite positioning data, the relative relationship data, and the road alignment data, and combines these data to generate an actual observation value; a sensor correction unit that is connected to the 2nd sensor and corrects a sensor error included in the state quantity data; an inertial positioning unit that performs inertial positioning using the corrected sensor values from the sensor correction unit and outputs the result as an inertial positioning result; an observation value prediction unit that predicts observation values using at least the inertial positioning result and outputs the result as a predicted observation value; and a filter that performs a positioning operation using the predicted observation value and the actual observation value and outputs the result as a positioning result, and that estimates the sensor error and outputs it as a sensor correction amount, wherein the filter feeds back the sensor correction amount to the sensor correction unit, and the sensor correction unit corrects the sensor error using the sensor correction amount.

According to the vehicle positioning device of the present invention, positioning accuracy can be maintained even in situations where multipath is likely to occur, and the situations in which positioning can be continued can be expanded.

Drawings

Fig. 1 is a functional block diagram showing the structure of a vehicle positioning device according to embodiment 1 of the present invention.

Fig. 2 is a diagram illustrating positioning in a multipath environment.

Fig. 3 is a diagram illustrating static information of a digital map.

Fig. 4 is a diagram illustrating a vehicle navigation coordinate system.

Fig. 5 is a flowchart illustrating the overall process flow of the vehicle positioning device according to embodiment 1 of the present invention.

Fig. 6 is a diagram illustrating a relative positional relationship with a land object.

Fig. 7 is a functional block diagram showing the structure of a vehicle positioning device according to embodiment 2 of the present invention.

Fig. 8 is a flowchart illustrating the overall process flow of the vehicle positioning device according to embodiment 2 of the present invention.

Fig. 9 is a diagram illustrating a road alignment.

Fig. 10 is a flowchart illustrating the calculation processing of the road alignment.

Fig. 11 is a conceptual diagram illustrating road alignment.

Fig. 12 is a functional block diagram showing the structure of a vehicle positioning device according to embodiment 3 of the present invention.

Fig. 13 is a diagram showing a hardware configuration of a vehicle positioning device that realizes embodiments 1 to 3 of the present invention.

Fig. 14 is a diagram showing a hardware configuration of a vehicle positioning device that realizes embodiments 1 to 3 of the present invention.

Detailed Description

< Introduction >

In the embodiments described below, land objects include road signs, traffic lights, telegraph poles, and the like. Road information includes node data, lane width, gradient data, and the like relating to the road. An autonomous sensor is a sensor that detects state quantities of the vehicle itself on which the vehicle positioning device of the embodiments is mounted, and includes a speedometer, an accelerometer, an angular velocity meter, and the like. An external sensor is a sensor that detects information about the environment in which the vehicle carrying the vehicle positioning device of the embodiments exists, and includes a laser range finder, a camera, and a radar. Road alignment is the shape of the road, and road alignment data includes information such as the combination of straight and curved segments and the magnitude of the gradient.

< embodiment 1>

< structure of apparatus: outline >

Fig. 1 is a functional block diagram showing the structure of the vehicle positioning device 20 according to embodiment 1 of the present invention. As shown in fig. 1, the GNSS sensor 1 (1st sensor) and the land object detection unit 3 (3rd sensor) are connected to the vehicle positioning device 20 as external sensors and detect external information around the vehicle. The land object information storage unit 4 and the autonomous sensor 6 (2nd sensor) are also connected to the vehicle positioning device 20.

The vehicle positioning device 20 includes an observation value processing unit 12, a filter 13, a sensor correction unit 14, an inertia positioning unit 15, and an observation value prediction unit 16.

The observation value processing unit 12 has the following function: it receives the satellite observation data output from the GNSS sensor 1 and the data on the relative relationship with the land object output from the land object detection unit 3, calculates the actual observation values necessary for the positioning operation and for estimating the correction amount of the state quantity data output from the autonomous sensor 6, and outputs them to the filter 13.

The sensor correction unit 14 has the following function: it corrects the scale factor error and the offset error of the autonomous sensor 6 and outputs the result to the inertial positioning unit 15 as corrected sensor values.

The inertial positioning unit 15 has the following function: using the corrected sensor values input from the sensor correction unit 14, it performs the inertial positioning calculation of the position, attitude, speed, and the like as the positioning result of the vehicle, and outputs the result to the observation value prediction unit 16 as the inertial positioning result.

The observation value prediction unit 16 has the following function: using the inertial positioning result input from the inertial positioning unit 15 and the land object information input from the land object information storage unit 4, it calculates the predicted observation values necessary for the positioning operation and for estimating the correction amount of the state quantity data output from the autonomous sensor, and outputs them to the filter 13.

The filter 13 has the following function: it performs the positioning operation and estimates the sensor errors of the autonomous sensor 6 using the actual observation values input from the observation value processing unit 12 and the predicted observation values predicted by the observation value prediction unit 16. The sensor errors estimated by the filter 13 are fed back to the sensor correction unit 14 as sensor correction amounts. The result of the positioning operation in the filter 13 is output from the vehicle positioning device 20 as the positioning result. In addition, when no data can be obtained from the external sensors, the filter 13 takes the inertial positioning result output from the inertial positioning unit 15 as the output of the vehicle positioning device 20.

In addition, the filter 13 estimates the sensor correction amount at the timing when data is obtained from the external sensors, and this value is held; therefore, sensor correction can be performed and positioning accuracy maintained even when no data can be obtained from the external sensors.

According to the vehicle positioning device 20 having the above-described configuration, when data is obtained from either the GNSS sensor 1 or the land object detection unit 3, the positioning operation and the calculation of the sensor correction amount can be performed, and positioning accuracy can be maintained. Furthermore, when data is obtained from both the GNSS sensor 1 and the land object detection unit 3, a more reliable positioning result can be output.

Further, since the actual observation values obtained from the data of the GNSS sensor 1 and the land object detection unit 3 and the predicted observation values obtained from the data of the autonomous sensor 6 are fused by the filter 13, no switching operation for selecting one of the actual observation value and the predicted observation value is required. Therefore, undesirable phenomena such as positioning jumps, in which the positioning result becomes discontinuous at the moment of switching, can be suppressed.

Further, by using a plurality of external sensors, when an abnormal value is output from any of the external sensors, it is possible to probabilistically determine the abnormal value and exclude it from the positioning operation. As a result, the reliability of the positioning operation can be maintained.

Further, since the update rate of the external sensor values is generally low, about 10 Hz, the requirement cannot be satisfied by the external sensors alone when positioning is performed at a high rate. The autonomous sensor, on the other hand, provides sensor values at a high rate, generally about 100 Hz, and inertial positioning can be performed at that rate. Therefore, the vehicle positioning device 20 also has the effect that positioning results can be acquired at a higher rate than when only the external sensor values are used.

Fig. 2 is a diagram schematically showing an example of a decrease in satellite positioning accuracy even when satellites can be captured. As shown in fig. 2, in places where many buildings BD stand, in urban areas or suburbs, a reflected radio wave WR, which is a radio wave from a satellite ST reflected by a structure such as a building BD before reaching the vehicle OV, and a direct radio wave WS, which reaches the vehicle OV directly, may be received simultaneously. When satellite positioning is performed under such conditions, positioning accuracy generally degrades significantly and the accuracy of satellite positioning cannot be maintained. The technique disclosed in Japanese Patent Laid-Open No. 2016-170124, described above as a conventional technique, has difficulty solving this problem.

On the other hand, according to the vehicle positioning device 20, even in the situation shown in fig. 2, positioning accuracy can be maintained by using the data of the relative relationship with land objects. That is, by combining not only the GNSS sensor 1 but also information observable from the vehicle OV, such as the demarcation line CL and the land object FE, the situations in which positioning can be performed can be expanded.

An example of a positioning device using land object information is disclosed in Japanese Patent Application Laid-Open No. 2005-265494. In the vehicle position estimation device proposed in that document, an on-vehicle camera detects land objects and white lines and performs positioning. However, since the position is estimated only by the speedometer in time periods in which no land object or white line is detected, curved road alignments cannot be handled; and since the sensor error is not estimated from the detection of land objects and white lines, any speedometer error accumulates, and as a result the positioning accuracy decreases.

On the other hand, in the vehicle positioning device 20, the sensor error is estimated in time periods in which external information from the external sensors can be observed, and the sensor error estimated in those periods is used in time periods in which external information cannot be observed, so the accuracy of the inertial positioning can be maintained.

In recent years, digital maps that distribute highly detailed static and dynamic information for use in automated driving of vehicles have been developed in various countries; in Japan they are referred to as dynamic maps. As a reference, "Dynamic map development for automated driving" (Systems, Control and Information, Vol. 60, No. 11, pp. 463-468, 2016) can be cited.

The above-mentioned reference explains what information is to be distributed in dynamic maps. Fig. 3 shows an example of land object information distributed as static information in the dynamic map described in that reference. In fig. 3, the static information incorporates absolute position information of actually existing land objects on and around the road, such as road junctions, road signs, guardrails, power poles, and traffic lights, as well as map information such as road center coordinates. Further, information such as lane links, nodes on the lane links, and lane areas is incorporated as virtual land objects created from the actually existing land objects.

In addition, the lane area means a land object whose boundaries are physically restrictive boundaries such as walls, guardrails, and curbs, or height-restricted structures such as tunnels and tunnel portals. The boundary line of a lane is delimited by the demarcation lines (white lines and the like) of the traffic lane.

In the vehicle positioning device 20, a dynamic map can be used as the database of the land object information storage unit 4.

Distribution of digital maps, including dynamic maps, is being studied in countries around the world and is being standardized by the International Organization for Standardization (ISO) and others, so they are expected to be used in a standardized form worldwide.

< structure of apparatus: details >

The respective functional blocks of the vehicle positioning device 20 are explained in detail.

< GNSS sensor >

The GNSS antenna 1a is connected to the GNSS sensor 1. The GNSS sensor 1 receives positioning signals from positioning satellites in orbit using the GNSS antenna 1a, and can acquire various kinds of observation data from the satellites by processing the received positioning signals.

The GNSS sensor 1 has the following function: in addition to the positioning calculation result obtained by performing the positioning calculation within the GNSS sensor 1, it outputs, according to the output settings, the GNSS observation data before the positioning calculation as positioning raw data. The positioning raw data includes pseudorange observation values, Doppler observation values, and carrier phase observation values, and these observation values can be obtained for each frequency band distributed from the satellites (for example, the L1, L2, and L5 bands).

Examples of the positioning satellites include GPS of the United States, GLONASS (Global Navigation Satellite System) of Russia, Galileo of Europe, QZSS (Quasi-Zenith Satellite System) of Japan, BeiDou of China, and NavIC (Navigation Indian Constellation) of India, and the vehicle positioning devices according to embodiments 1 to 3 of the present invention are applicable to all of them.

The positioning operation in the GNSS sensor 1 can use any of positioning modes such as point positioning, DGPS (Differential GPS) positioning, RTK (Real Time Kinematic) positioning, and network-type RTK positioning. In addition, the GNSS sensor 1 can generally output reliability information for the positioning calculation result.

Point positioning is a satellite positioning method that performs positioning using pseudorange observation values received from four or more positioning satellites.

DGPS positioning is a positioning method that can obtain a satellite positioning result with higher accuracy than point positioning by performing the positioning calculation using satellite positioning error augmentation data that can be generated from satellite-based augmentation systems (SBAS), electronic reference points, and private fixed stations.

RTK positioning is a positioning method in which the satellite raw data of an electronic reference point or a private fixed station is transferred to the mobile station, and causes of satellite positioning error in the vicinity of the base station are removed, enabling high-precision satellite positioning. In RTK positioning, when an integer unknown called the ambiguity is determined with high reliability, positioning with cm-order accuracy is possible. The positioning solution in that case is called a Fix solution; when the ambiguity cannot be determined, a Float solution is output.

Network-type RTK positioning is a positioning method that acquires, via a network, satellite positioning data equivalent to that of an installed base station, enabling high-precision positioning.

The GNSS sensor 1 can calculate, in addition to absolute position information such as latitude, longitude, and altitude, the absolute velocity of the GNSS antenna 1a along three earth-referenced axes, for example the north, east, and vertical directions, using the Doppler observation values and the satellite navigation data. The direction in which the GNSS antenna 1a moves, that is, the azimuth, can be detected using this absolute velocity information. Satellite positioning methods other than those described above are also applicable to the vehicle positioning devices according to embodiments 1 to 3 of the present invention.

< floor object detecting section >

The land object detection unit 3 has the following function: it detects land objects using a surrounding recognition camera mounted on the vehicle, LiDAR (Light Detection and Ranging; also called a laser range finder), a radar, or the like, and outputs the relative relationship between land objects existing around the vehicle and the vehicle, together with the display contents of the land objects, as the land object detection result.

The relative relationship between the land object and the vehicle means the relative distance, relative speed, and the like, expressed in the vehicle navigation coordinate system, from the navigation center of the vehicle (for example, the center of the rear axle) to the land object representative point (a point corresponding to the coordinate information registered in the map database).

As the vehicle navigation coordinate system, for example, as shown in fig. 4, the following coordinate system is used in many cases: the navigation center of the vehicle OV (the center of the rear axle) is the origin, the forward direction of the vehicle is the xb axis, the left direction is the yb axis, and the zb axis is taken so that the xb, yb, and zb axes form a right-handed system.

Further, the land object detection unit 3 can output land object information associated with the relative relationship between the land object and the vehicle by referring to the land object database of the land object information storage unit 4, which holds land object information, and combining it with the detection result of the land object detection unit 3.

The land object information related to a land object comprises the land object type (such as a power pole, road sign, or guardrail), the absolute position coordinates, the display contents, and the like. That is, the land object detection unit 3 can simultaneously output the distance and speed relative to the land object defined in the vehicle coordinate system, the land object type, the absolute position coordinates, the display contents, and the like.

Since the GNSS sensor 1 and the land object detection unit 3, as external sensors, have different detection targets, the situations in which each sensor can produce an output differ. For example, the GNSS sensor 1 cannot obtain observation values in situations where no satellite can be captured, such as in a tunnel. The land object detection unit 3 cannot output a detection result in sections where no land object exists.

< observation value processing section >

The observation value processing unit 12 has the following function: it combines and processes the observation values obtained from the GNSS sensor 1 and the relative relationship between the land object and the vehicle obtained from the land object detection unit 3, and passes them to the filter 13 as actual observation values.

< sensor correction part >

The sensor correction unit 14 has a function of correcting the sensor data obtained from the autonomous sensor 6 using the sensor correction amount estimated by the filter 13.

< autonomous sensor >

The autonomous sensor 6 includes a speedometer that measures the vehicle speed, an inertial measurement unit (IMU) that measures the acceleration and angular velocity of the vehicle, a steering angle meter that measures the steering angle of the vehicle, and the like, and is used to determine the position, speed, and attitude of the vehicle.

The speedometer is attached to a wheel of a vehicle, and has a function of converting an output from a pulse sensor for detecting a rotation speed of the wheel into a vehicle speed of the vehicle.

The IMU is installed on the roof or in the cabin of the vehicle and has a function of detecting acceleration and angular velocity in the vehicle coordinate system. Commercial IMUs incorporate devices such as MEMS (Micro Electro Mechanical Systems) and fiber optic gyroscopes (FOG).

< inertial positioning part >

The inertial positioning unit 15 has the following function: using the sensor values of the autonomous sensor 6 corrected by the sensor correction unit 14 and a motion model of the vehicle, it determines the position, speed, and attitude of the vehicle from the integrated values of the vehicle speed, acceleration, and angular velocity sensors.

The inertial positioning unit performs positioning by accumulating, moment by moment, the sensor values obtained from the autonomous sensor 6 on the basis of the motion model of the vehicle. In general, the sensor values of the autonomous sensor 6 contain scale factor errors, offset errors, and the like, which accumulate moment by moment, so the accuracy of inertial positioning deteriorates over time.

On the other hand, in the vehicle positioning devices according to embodiments 1 to 3 of the present invention, the sensor errors are estimated by the filter and fed back as sensor correction amounts to correct the sensor values of the autonomous sensor 6, so the accuracy of the inertial positioning can be improved.

< observation value predicting section >

The observation value prediction unit 16 has the following function: it calculates predicted observation values corresponding to the actual observation values processed by the observation value processing unit 12, using the inertial positioning result and the land object information around the vehicle, and outputs them to the filter 13.

< Filter >

The filter 13 has the following function: by estimating the probabilistically most likely state quantities using the actual observation values obtained from the observation value processing unit 12 and the predicted observation values obtained from the observation value prediction unit 16, it performs the positioning operation for the position, velocity, azimuth, and the like, and further estimates the sensor errors of the autonomous sensor 6, such as the scale factor error and offset error.

Here, the state quantities are the three-dimensional position, speed, and attitude of the vehicle, the sensor errors of the autonomous sensor 6, and the like. The positioning calculation result estimated by the filter 13 is output from the vehicle positioning device 20 as the positioning result, and the sensor errors are input to the sensor correction unit 14 as sensor correction amounts.

< action >

Next, the overall processing flow of the vehicle positioning device 20 will be described with reference to the flowchart shown in fig. 5. When the vehicle positioning device 20 starts positioning, first, the initial value of the inertial positioning and the current inertial positioning result used in the observation value prediction unit 16 are acquired (step S100).

In addition, when the current inertial positioning result cannot be obtained, such as immediately after the power supply of the vehicle positioning device 20 is turned on, the positioning result estimated by the GNSS sensor 1 may be acquired and used, or a predetermined value may be used as the initial value of the inertial positioning.

Next, the vehicle positioning device 20 determines whether data is obtained from the external sensors, that is, the GNSS sensor 1 and the land object detection unit 3 (step S101). Although the external sensors have different detection targets, the process proceeds to step S102 when at least one sensor has obtained a sensor value (Yes), and proceeds to step S131 when none has (No).

When the vehicle positioning device 20 cannot obtain data from the external sensors, the inertial positioning result obtained by the inertial positioning calculation in step S113, described later, is output as the positioning result of the vehicle positioning device 20 (step S131).

In step S102, the observation value processing unit 12 processes the sensor value obtained by the external sensor so as to be usable by the filter 13 in the next step S103, and outputs the processed sensor value as an actual observation value.

Here, the processing for the observation values obtained by the GNSS sensor 1 and the land object detection unit 3 in the observation value processing unit 12 will be described.

< processing for observed value in GNSS sensor >

In embodiment 1, the GNSS sensor 1 outputs the coordinate information of latitude, longitude, altitude, and azimuth at the phase center of the GNSS antenna 1a, together with their respective reliabilities. Normally, the sensor values are transmitted from the GNSS sensor 1 according to the protocol defined by the NMEA (National Marine Electronics Association), but since output specifications differ between manufacturers, the observation value processing unit 12 converts the sensor values obtained from the GNSS sensor 1 into a unit system such as deg or rad for latitude, longitude, and azimuth, and into a unit system such as m (meters) for altitude.
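As a concrete illustration of this unit conversion, the following sketch converts the ddmm.mmmm-style latitude/longitude fields commonly used in NMEA sentences into decimal degrees and radians; the field layout is an assumption for illustration, since actual output specifications differ by manufacturer.

```python
import math

def nmea_to_deg(value: float, hemisphere: str) -> float:
    """Convert an NMEA-style ddmm.mmmm (or dddmm.mmmm) field to decimal degrees.

    Minimal sketch of the unit conversion described above; the exact field
    layout varies by manufacturer, so this format is an assumption.
    """
    degrees = int(value // 100)          # leading digits: whole degrees
    minutes = value - degrees * 100      # remaining digits: minutes
    deg = degrees + minutes / 60.0
    return -deg if hemisphere in ("S", "W") else deg

# Example: 35 deg 39.1234 min North -> about 35.652 deg -> radians
lat_deg = nmea_to_deg(3539.1234, "N")
lat_rad = math.radians(lat_deg)
```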

< processing for observed value in ground object detection section >

In embodiment 1, the absolute position coordinates of the representative point of the land object (for example, latitude, longitude, and altitude information) and the relative positional relationship between the vehicle navigation center and the land object representative point are obtained from the land object detection unit 3. Various expressions of the relative positional relationship are possible; here, as shown in fig. 6, the relative distance and relative azimuth from the navigation center of the vehicle OV to the land object representative point are used.

XYZ in fig. 6 represents an absolute coordinate system, in this case the ECEF (Earth Centered Earth Fixed) coordinate system, fixed to the earth with its origin at the earth's center of gravity. The coordinates (Xi, Yi, Zi) and (Xb, Yb, Zb) in fig. 6 represent the absolute position coordinates of the land object FE and of the navigation center of the vehicle OV, respectively. (zi is not shown.) The n, e, and d axes in fig. 6 form a coordinate system whose axes point north, east, and vertically downward, with the navigation center of the vehicle OV as the origin; this is referred to as the NED coordinate system. ψ in the figure denotes the vehicle azimuth, positive clockwise from north.

When the output of the land object detection unit 3 is the coordinates (xi, yi, zi) of the land object FE, the relative distance ρ_{m,i} and the relative azimuth θ_{m,i} are obtained by the following equations (1) and (2), respectively.

ρ_{m,i} = √(xi² + yi² + zi²) …(1)

θ_{m,i} = atan(yi / xi) …(2)

These pieces of information are output together with the coordinates (Xi, Yi, Zi) of the land object representative point. Note that, although only the relative distance and relative azimuth have been described here, relative velocity information can also be output by the land object detection unit 3 and may therefore be used as an observation value. In addition, only the relative azimuth is used here, but a relative pitch angle, a relative roll angle, and the like can also be obtained from the coordinates (xi, yi, zi) and may be added as observation values. In that case, the vehicle attitude angles and the like can also be determined, enabling more advanced positioning. Here, the pitch angle and roll angle refer to the angles about the yb axis and the xb axis shown in fig. 4, respectively.
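The relative observation of equations (1) and (2) can be sketched as follows. The azimuth sign convention depends on the axis definitions of fig. 4 (yb positive to the left), so the sign used here is an assumption.

```python
import math

def relative_observation(x_i: float, y_i: float, z_i: float):
    """Relative distance rho_m_i and relative azimuth theta_m_i of a land object
    whose representative point is (x_i, y_i, z_i) in the vehicle navigation
    coordinate system (xb forward, yb left), per equations (1) and (2)."""
    rho_m_i = math.sqrt(x_i**2 + y_i**2 + z_i**2)  # equation (1)
    theta_m_i = math.atan2(y_i, x_i)               # equation (2); atan2 keeps the quadrant
    return rho_m_i, theta_m_i

# A land object 20 m ahead, 5 m to the left, 1.5 m above the navigation center:
rho, theta = relative_observation(20.0, 5.0, 1.5)
```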

Here, returning to the description of the flowchart of fig. 5, the flow of step S111 and thereafter will be described.

The vehicle positioning device 20 acquires the sensor value from the autonomous sensor 6 in parallel with the determination of step S101 (step S111). That is, acceleration and angular velocity are acquired from the IMU, and vehicle speed information and the like are acquired from a speedometer.

< correction of sensor value of autonomous sensor >

Next, the sensor correction unit 14 corrects the sensor values acquired from the autonomous sensor 6 (step S112). Hereinafter, a case is described in which a speedometer and a sensor for the angular velocity about the yaw axis of the vehicle (hereinafter, yaw rate) are used as the autonomous sensor 6, and the sensor error models represented by the following equations (3) and (4) are used for the correction.

V = (1 + s_v)V_t …(3)

V: sensor value of the vehicle speed
V_t: true value of the vehicle speed
s_v: scale factor of the vehicle speed

γ = (1 + s_γ)(γ_t + b_γ) …(4)

γ: yaw rate sensor value
γ_t: true value of the yaw rate
s_γ: scale factor of the yaw rate
b_γ: yaw rate sensor offset

Equation (3) is a model in which the true vehicle speed V_t is multiplied by 1 plus the vehicle speed scale factor s_v; equation (4) is a model in which the yaw rate sensor offset b_γ is superimposed on the true yaw rate γ_t and the result is multiplied by 1 plus the yaw rate scale factor s_γ.

In this example, the filter 13 estimates, as the sensor errors, the estimated values s_ve, s_γe, and b_γe of s_v, s_γ, and b_γ, respectively, as described below. The sensor correction unit 14 corrects the sensor values from the autonomous sensor 6 by the following equations (5) and (6) using these estimated sensor errors.

V_e = V / (1 + s_ve) …(5)

γ_e = γ / (1 + s_γe) − b_γe …(6)

In equations (5) and (6), V_e and γ_e are the corrected vehicle speed and corrected yaw rate, respectively. The sensor error model described above is an example; other sensor error models may also be used.
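A sketch of the sensor correction unit under the reconstructed equations (5) and (6); the inverse-model form is an assumption consistent with the error models (3) and (4).

```python
def correct_autonomous_sensor(V: float, gamma: float,
                              s_ve: float, s_ge: float, b_ge: float):
    """Correct the speedometer and yaw-rate sensor values with the estimated
    sensor errors, inverting error models (3)/(4) per equations (5) and (6)."""
    V_e = V / (1.0 + s_ve)                   # equation (5): remove scale-factor error
    gamma_e = gamma / (1.0 + s_ge) - b_ge    # equation (6): remove scale factor and offset
    return V_e, gamma_e

# Example: raw V = 10.0 m/s, gamma = 0.01 rad/s with estimated errors
V_e, gamma_e = correct_autonomous_sensor(10.0, 0.01, 0.002, 0.001, 1e-4)
```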

< processing in inertial positioning unit >

Here, returning to the flowchart of fig. 5, the process of step S113 is performed. That is, the inertial positioning unit 15 performs the inertial positioning calculation using the corrected sensor values and a motion model of the vehicle. As a concrete calculation method, the inertial positioning calculation here models the vehicle as moving substantially in a plane. In the following, expressions use the navigation coordinate system on the GRS80 (Geodetic Reference System 1980) ellipsoid. First, the state variables represented by the following equation (7) are defined.

y_d = [λ_d φ_d h_d ψ_d]^T …(7)

y_d in equation (7) is the state vector for inertial positioning, collecting the state variables related to inertial positioning: λ_d denotes the latitude obtained by the inertial positioning calculation, φ_d the longitude, h_d the ellipsoidal height, and ψ_d the azimuth.

The state variables are modeled by a motion model represented by the following equation (8).

ẏ_d = g(y_d, u):
λ̇_d = V cos ψ_d / (M + h_d)
φ̇_d = V sin ψ_d / ((N + h_d) cos λ_d)
ḣ_d = 0
ψ̇_d = γ …(8)

λ_d: latitude based on inertial positioning [rad]

φ_d: longitude based on inertial positioning [rad]

h_d: ellipsoidal height based on inertial positioning [m]

ψ_d: azimuth based on inertial positioning (clockwise with north as reference) [rad]

V: vehicle speed [m/sec]

γ: yaw rate [rad/sec]

a: equatorial radius (6378137.0 [m])

F_e: flattening of the earth (1/298.257223563)

ẏ_d in equation (8) denotes the time derivative of the state vector for inertial positioning. g(y_d, u) is a nonlinear function of y_d and the input vector u = [V γ]^T; the zero height rate reflects the planar motion assumption.

In addition, N in equation (8) is the prime vertical radius and M is the meridian radius, defined by the following equations (9) and (10), respectively.

N = a / √(1 − e² sin² λ_d) …(9)

e: eccentricity of the earth, with e² = F_e(2 − F_e)

M = a(1 − e²) / (1 − e² sin² λ_d)^(3/2) …(10)

The inertial positioning result can be obtained by substituting the corrected sensor values into equation (8) and integrating moment by moment. As the integration method, the Runge-Kutta method or the like is often used. The latitude, longitude, altitude, and other coordinates of the inertial positioning are the coordinates of the navigation center of the vehicle.
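As an illustration, the following sketch integrates the reconstructed motion model of equation (8) with a fourth-order Runge-Kutta step (Python/NumPy); the planar, zero-height-rate form follows the reconstruction above and should be treated as an assumption.

```python
import numpy as np

A = 6378137.0                # equatorial radius a [m]
FE = 1.0 / 298.257223563     # flattening Fe of the earth
E2 = FE * (2.0 - FE)         # squared eccentricity e^2

def g(y_d, u):
    """Motion model g(y_d, u) of equation (8) as reconstructed above.
    y_d = [latitude, longitude, ellipsoidal height, azimuth], u = [V, gamma]."""
    lat, lon, h, psi = y_d
    V, gamma = u
    s2 = np.sin(lat) ** 2
    N = A / np.sqrt(1.0 - E2 * s2)                  # prime vertical radius, eq. (9)
    M = A * (1.0 - E2) / (1.0 - E2 * s2) ** 1.5     # meridian radius, eq. (10)
    return np.array([
        V * np.cos(psi) / (M + h),                  # latitude rate
        V * np.sin(psi) / ((N + h) * np.cos(lat)),  # longitude rate
        0.0,                                        # planar motion: no height change
        gamma,                                      # azimuth rate = yaw rate
    ])

def rk4_step(y_d, u, dt):
    """One fourth-order Runge-Kutta integration step of the inertial positioning."""
    k1 = g(y_d, u)
    k2 = g(y_d + 0.5 * dt * k1, u)
    k3 = g(y_d + 0.5 * dt * k2, u)
    k4 = g(y_d + dt * k3, u)
    return y_d + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: propagate one 10 ms step at V = 10 m/s, yaw rate 0.01 rad/s
y_next = rk4_step(np.array([0.62, 2.44, 40.0, 0.5]), np.array([10.0, 0.01]), 0.01)
```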

Here, returning to the description of the flowchart of fig. 5, the process of step S114 is performed. That is, the observation value prediction unit 16 calculates a prediction observation value using information obtained by inertial positioning. The observation value prediction by the GNSS sensor 1 and the observation value prediction by the land object detection unit 3 will be described below.

< processing in observed value prediction section >

< prediction of observations for GNSS sensors >

The observation values obtained by the GNSS sensor 1 are coordinate information such as the latitude, longitude, and altitude of the GNSS antenna 1a. Hereinafter, the observation values of the GNSS sensor 1 are written (λ_m, φ_m, h_m, ψ_m). The same coordinate information can be obtained from the inertial positioning result, but since the inertial positioning result gives the coordinates of the navigation center of the vehicle, the observation values of the GNSS sensor 1 are predicted using the offset from the vehicle navigation center to the position of the GNSS antenna 1a. That is, when the offset from the vehicle navigation center to the GNSS antenna 1a expressed in the vehicle navigation coordinate system is v = (Δx, Δy, Δz), the predicted observation values (λ_p, φ_p, h_p, ψ_p) of the GNSS sensor 1 can be obtained from the inertial positioning value y_d = (λ_d, φ_d, h_d, ψ_d) and the offset v by a coordinate transformation function c(y_d, v), as in the following equation (11).

[λ_p φ_p h_p ψ_p]^T = c(y_d, v) …(11)
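One possible concrete form of the transformation c(y_d, v) is sketched below, assuming zero roll and pitch (an assumption not stated in the source): the body-frame offset is rotated into NED by the azimuth and converted to latitude/longitude/height increments.

```python
import numpy as np

A = 6378137.0
FE = 1.0 / 298.257223563
E2 = FE * (2.0 - FE)

def predict_gnss_observation(y_d, v):
    """Sketch of c(y_d, v) in equation (11): predicted GNSS antenna observation
    from the inertial positioning value y_d = (lat, lon, h, psi) and the body-
    frame lever arm v = (dx forward, dy left, dz up). Zero roll/pitch assumed."""
    lat, lon, h, psi = y_d
    dx, dy, dz = v
    # body -> NED with azimuth psi (clockwise from north); the yb axis points left
    dn = dx * np.cos(psi) + dy * np.sin(psi)
    de = dx * np.sin(psi) - dy * np.cos(psi)
    s2 = np.sin(lat) ** 2
    N = A / np.sqrt(1.0 - E2 * s2)
    M = A * (1.0 - E2) / (1.0 - E2 * s2) ** 1.5
    lat_p = lat + dn / (M + h)                    # small-offset approximation
    lon_p = lon + de / ((N + h) * np.cos(lat))
    h_p = h + dz
    psi_p = psi                                   # azimuth unchanged by the lever arm
    return np.array([lat_p, lon_p, h_p, psi_p])
```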

< prediction of observed value for land object detection section >

The observation values obtained by the land object detection unit 3 are the distance and azimuth angle between the vehicle and the land object representative point. These observation values can be predicted using the inertial positioning result and the absolute position coordinates of the land object representative point. That is, letting ρ_{p,i} and θ_{p,i} be the predicted relative distance and relative angle between the vehicle and the representative point (X_i, Y_i, Z_i) of land object FE, they are obtained by the following equations (12) and (13), respectively.

ρ_{p,i} = √((X_i − X_d)² + (Y_i − Y_d)² + (Z_i − Z_d)²) …(12)

θ_{p,i} = atan(e_i / n_i) − ψ_d …(13)

Here, (X_d, Y_d, Z_d) in equation (12) are the inertial positioning results λ_d, φ_d, h_d expressed in the ECEF coordinate system, and can be obtained by the following equation (14).

X_d = (N + h_d) cos λ_d cos φ_d
Y_d = (N + h_d) cos λ_d sin φ_d
Z_d = (N(1 − e²) + h_d) sin λ_d …(14)

In addition, e_i and n_i in equation (13) are the east and north coordinates of the land object representative point expressed in the NED coordinate system, and can be obtained by the following equation (15), where C(λ_d, φ_d) is the standard ECEF-to-NED rotation matrix at the vehicle position.

[n_i e_i d_i]^T = C(λ_d, φ_d) [X_i − X_d, Y_i − Y_d, Z_i − Z_d]^T …(15)
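A sketch combining equations (12) to (15) follows (constants as in the earlier sketches; the ECEF-to-NED rotation is the standard one assumed in the reconstruction above).

```python
import numpy as np

A = 6378137.0
FE = 1.0 / 298.257223563
E2 = FE * (2.0 - FE)

def geodetic_to_ecef(lat, lon, h):
    """Equation (14) as reconstructed: geodetic coordinates to ECEF."""
    N = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
    return np.array([(N + h) * np.cos(lat) * np.cos(lon),
                     (N + h) * np.cos(lat) * np.sin(lon),
                     (N * (1.0 - E2) + h) * np.sin(lat)])

def predict_land_object(y_d, p_i):
    """Predicted relative distance and azimuth to a land object representative
    point p_i = (X_i, Y_i, Z_i), per equations (12), (13), and (15)."""
    lat, lon, h, psi = y_d
    d_ecef = p_i - geodetic_to_ecef(lat, lon, h)     # ECEF difference vector
    rho_p = np.linalg.norm(d_ecef)                   # equation (12)
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    C = np.array([[-sl * co, -sl * so,  cl],         # ECEF -> NED rotation,
                  [-so,       co,      0.0],         # equation (15)
                  [-cl * co, -cl * so, -sl]])
    n_i, e_i, d_i = C @ d_ecef
    theta_p = np.arctan2(e_i, n_i) - psi             # equation (13)
    return rho_p, theta_p
```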

< processing in Filter >

Here, returning to the flowchart of fig. 5, the processing of step S103 will be described. In the filter 13, the filter positioning calculation and the estimation of the errors of the autonomous sensor 6 are performed using the actual observation values obtained in step S102 and the predicted observation values obtained in step S114 (step S103).

First, the state vector expressed by the following expression (16) is defined with the variables to be estimated as latitude, longitude, altitude, azimuth, vehicle speed scale factor, yaw rate scale factor, and yaw rate offset.

x = [λ φ h ψ s_v s_γ b_γ]^T …(16)

Assuming that the vehicle speed scale factor s_v and the yaw rate scale factor s_γ are small, the true vehicle speed V_t and the true yaw rate γ_t from equations (5) and (6) can be approximated by the following equations (17) and (18), respectively.

V_t = (1 − s_v)V …(17)

γ_t = (1 − s_γ)γ − b_γ …(18)

The dynamics of the vehicle speed scale factor s_v, the yaw rate scale factor s_γ, and the yaw rate sensor offset b_γ are modeled by the following equations (19), (20), and (21); that is, each is driven by a first-order Markov process in which the next state is predicted from the current state.

ṡ_v = (−s_v + w_sv)/τ_sv …(19)

w_sv: process noise of the vehicle speed scale factor [-]
τ_sv: vehicle speed scale factor model parameter [sec]

ṡ_γ = (−s_γ + w_sγ)/τ_sγ …(20)

w_sγ: process noise of the yaw rate scale factor [-]
τ_sγ: yaw rate scale factor model parameter [sec]

ḃ_γ = (−b_γ + w_bγ)/τ_bγ …(21)

w_bγ: process noise of the yaw rate offset [rad/sec]
τ_bγ: yaw rate offset model parameter [sec]

In equations (19) to (21), ṡ_v, ṡ_γ, and ḃ_γ are the time derivatives of s_v, s_γ, and b_γ, respectively. The process noise w_sv is the noise associated with the time transition of the vehicle speed scale factor, w_sγ is that of the yaw rate scale factor, and w_bγ is that of the yaw rate offset.
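Discretized with the sampling time of the autonomous sensor, each first-order Markov model can be simulated as below; the parameter and noise values are illustrative assumptions, not from the source.

```python
import numpy as np

dt = 0.01          # sampling time of the autonomous sensor [s] (about 100 Hz)
tau_sv = 100.0     # vehicle speed scale factor model parameter [s] (assumed)
rng = np.random.default_rng(0)

s_v = 0.0          # vehicle speed scale factor state
for k in range(1000):
    w_sv = rng.normal(0.0, 1e-3)          # process noise sample (assumed std)
    # Euler discretization of equation (19): ds_v/dt = (-s_v + w_sv) / tau_sv
    s_v += dt * (-s_v + w_sv) / tau_sv
```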

When equations (19) to (21) are combined with the motion model of equation (8), the state equation can be expressed as the following equation (22).

ẋ = f(x, u) …(22)

In equation (22), ẋ denotes the time derivative of the state vector x, and f combines the motion model of equation (8) with the error dynamics of equations (19) to (21). u is the input vector expressed by the following equation (23).

u = [V γ]^T …(23)

By using equation (22) as the state equation, equation (11) as the observation equation for the GNSS sensor 1, and equations (12) and (13) as the observation equations for land object detection, the state vector x is estimated, and the positioning operation and the estimation of the errors of the autonomous sensor 6 can be performed.

Since the state equation (22) and the observation equations (11) to (13) are nonlinear in the state vector, nonlinear state estimation must be applied to estimate the errors of the autonomous sensor 6 and perform the positioning operation. As nonlinear state estimation methods, known methods such as the particle filter (also called the sequential Monte Carlo method) and the extended Kalman filter can be applied. These methods estimate the probabilistically most likely state and are often used in state estimation problems.

A method using the extended Kalman filter is described below. The Kalman filter estimates the state vector under the assumption that the noise acting on the system follows a Gaussian distribution; compared with the particle filter, its computational load is smaller and it requires a smaller computing circuit, which is advantageous for implementation.

< State estimation based on extended Kalman Filter >

When equation (22) is Taylor-expanded to first order around the a priori estimate x_b of the state vector, it can be expressed as the following equation (24).

δẋ = F_a δx + w …(24)

In equation (24), w is the process noise, and δx is the error state vector expressed by the following equation (25).

δx := x − x_b …(25)

In equation (24), F_a is the Jacobian of f evaluated at the a priori estimate, expressed as the following equation (26).

F_a = ∂f/∂x |_(x = x_b) …(26)

The observation vector z_GNSS from the GNSS sensor 1 is expressed as the following equation (27).

z_GNSS = [λ_p φ_p h_p ψ_p]^T …(27)

The observation vector z_{land,i} based on land object i is expressed as the following equation (28).

z_{land,i} = [ρ_{p,i} θ_{p,i}]^T …(28)

When N land objects can be observed simultaneously, the observation vector z_land obtained by stacking the land object observation values for i = 1 to N is expressed as the following equation (29).

z_land = [ρ_{p,1} θ_{p,1} ρ_{p,2} θ_{p,2} … ρ_{p,N} θ_{p,N}]^T …(29)

The observation values available at a given time are stacked into a vector referred to as the output vector z. The contents of the output vector z differ according to which observation values can be obtained. z in each case is described below.

< case where only GNSS sensor can observe >

When no land object is present near the vehicle and the observation values from the GNSS sensor 1 are good, the output vector z is expressed as the following equation (30).

z = z_GNSS …(30)

< case where only the land object detecting section can observe >

When the reliability of the observation values from the GNSS sensor 1 is low, such as in urban areas, and a land object is detected, the output vector z is expressed as the following equation (31).

z = z_land …(31)

< case where observation is possible with GNSS sensor and land object detection unit >

When the observation values of the GNSS sensor 1 are good and a land object is detected, the output vector z is expressed as the following equation (32).

z = [z_GNSS^T z_land^T]^T …(32)

The output vector z can be expressed as a function of the state vector x and the input u; in all of the above situations it can be written as the following equation (33).

z = h_0(x, u) …(33)

By fusing the observation equations as in equation (32), the positioning operation and the error estimation of the autonomous sensor can be performed with higher reliability. Furthermore, because the observation equations are fused rather than switched, as in equation (32), problems such as positioning jumps can be suppressed.

When equation (33) is Taylor-expanded to first order around the a priori estimate x_b of the state vector, it can be expressed by the following equations (34) and (35).

δz := z − h_0(x_b, u) …(34)

δz = H δx …(35)

In equation (34), the output vector z is the one given by the observation equation (33). In equation (35), H is the matrix obtained by Taylor-expanding the observation equation to first order with respect to the state vector x and substituting the a priori estimate x_b for x, expressed as the following equation (36).

H = ∂h_0/∂x |_(x = x_b) …(36)

The matrix H can be obtained analytically or calculated using numerical differentiation.

When the expressions (24) and (35) are discretized in accordance with the sampling time Δ t of the autonomous sensor 6 and the discrete time is k, the discretized time is expressed by the following expressions (37) and (38), respectively.

[ mathematical formula 37]

δxk=Fδxk-1+wk …(37)

[ mathematical formula 38]

δzk=Hδxk+vk …(38)

In equations (37) and (38), F is the state transition matrix for the error state vector δx_k at time k, with F = I + F_a·Δt, and w_k = w·Δt. v_k is the sensor noise corresponding to each observation. The process noise w and the sensor noise v_k are parameters of the Kalman filter and can be set in advance using measured values or the like.

By applying the processing algorithm of the Kalman filter using equations (37) and (38), the estimated value δx_e,k of the error state vector at the discrete time k can be obtained.

< time development processing >

The time development processing is performed at every sampling time of the autonomous sensor 6. Using the inertial positioning result y_d,k at time k and the autonomous sensor error e_sensor,k, the a priori estimate x_b,k of the state vector at time k is expressed by the following equation (39).

[ mathematical formula 39 ]

x_b,k = [y_d,k^T  e_sensor,k^T]^T …(39)
Let δx_b,k be the a priori estimate of the error state vector at time k, let P_k (an n × n matrix) be the error covariance matrix, and let P_b,k (an n × n matrix) be the a priori error covariance matrix. The time development processing computes the a priori estimate δx_b,k and the a priori error covariance matrix P_b,k as the following expressions (40) and (41), respectively.

[ mathematical formula 40]

δx_b,k = F·δx_b,k-1 …(40)

[ math figure 41]

P_b,k = F·P_k-1·F^T + Q …(41)

In expression (41), Q is the covariance matrix (n × n matrix) of the process noise w_k, with the noise variances as its diagonal components. An initial value of the error covariance matrix is required immediately after power-on; as this initial value, P_k-1 given by the following expression (42), using an arbitrary scalar value α of 0 or more and the n × n identity matrix I_n×n, is frequently used. As the initial value of δx_b,k, a vector whose elements are all 0 is used.

[ mathematical formula 42]

P_k-1 = α·I_n×n …(42)
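
A minimal numerical sketch of the time development processing, assuming the matrices F and Q have already been constructed as described above (all names are illustrative):

import numpy as np

def time_update(dx_prev, P_prev, F, Q):
    """Error-state time development, expressions (40) and (41)."""
    dx_b = F @ dx_prev                # (40): a priori error state estimate
    P_b = F @ P_prev @ F.T + Q        # (41): a priori error covariance
    return dx_b, P_b

# Initialization immediately after power-on, expression (42):
n = 7                                 # e.g. latitude, longitude, altitude, azimuth
                                      # plus the three autonomous sensor errors
alpha = 10.0                          # arbitrary scalar value of 0 or more
P_prev = alpha * np.eye(n)            # (42)
dx_prev = np.zeros(n)                 # all-zero initial error state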

< Observation update processing >

When an observation value is obtained from an external sensor, the observation update processing defined by the following expressions (43), (44), and (45) is performed.

[ math figure 43]

G_k = P_b,k·H^T·(H·P_b,k·H^T + R)^-1 …(43)

[ math figure 44]

δx_e,k = δx_b,k + G_k·(δz_k - H·δx_b,k) …(44)

[ mathematical formula 45]

P_k = (I_n×n - G_k·H)·P_b,k …(45)

In expressions (43) to (45), δx_e,k is the estimated value of the error state vector, R is the covariance matrix (p × p matrix) of the sensor noise, and G_k is the Kalman gain.

In addition, δz_k is the vector given by the following equation (46), where z_m,k is the actual observation value at time k and z_p,k is the predicted observation value.

[ mathematical formula 46]

δz_k = z_m,k - z_p,k …(46)

In this way, the estimated value δx_e,k of the error state vector at time k is obtained, so the estimated value x_e,k of the state vector x_k can be obtained as the following expression (47).

[ math figure 47]

x_e,k = x_b,k + δx_e,k …(47)
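
The observation update of expressions (43) to (47) can be sketched as follows (illustrative names; H and R are assumed to have been assembled to match the observation blocks actually present, as discussed for expressions (30) to (32)):

import numpy as np

def observation_update(x_b, dx_b, P_b, H, R, z_m, z_p):
    """Kalman observation update, expressions (43)-(47)."""
    dz = z_m - z_p                                    # (46): actual minus predicted
    G = P_b @ H.T @ np.linalg.inv(H @ P_b @ H.T + R)  # (43): Kalman gain
    dx_e = dx_b + G @ (dz - H @ dx_b)                 # (44): error state estimate
    n = P_b.shape[0]
    P = (np.eye(n) - G @ H) @ P_b                     # (45): updated covariance
    x_e = x_b + dx_e                                  # (47): corrected state vector
    return x_e, dx_e, P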

Here, P_k represents the distribution of the difference between the true value and the estimated value of the state vector, and this value can be used to detect abnormal values from the external sensors. For example, by extracting the latitude and longitude elements of P_k and performing eigenvalue analysis, an ellipse called an error ellipse can be obtained, and the following rejection mechanism can be constructed: if the sensor value of the GNSS sensor 1 falls within the range of the error ellipse, it is used as an observation value; otherwise, it is not used.
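
One possible form of such a rejection mechanism is sketched below. Instead of constructing the ellipse explicitly from the eigenvalues, this sketch uses the equivalent Mahalanobis distance test on the latitude/longitude sub-block of P_k; the gate value and element indices are assumptions for illustration:

import numpy as np

def gnss_within_error_ellipse(innovation, P_k, idx=(0, 1), gate=9.21):
    """Return True if a GNSS position innovation lies inside the error ellipse.

    innovation : 2-vector, GNSS position minus predicted position, expressed
                 in a common metric frame (e.g. north/east meters)
    P_k        : error covariance matrix of the filter, in matching units
    idx        : indices of the latitude/longitude elements within P_k
    gate       : chi-square threshold (9.21 corresponds to 99 %, 2 degrees
                 of freedom); scales the size of the accepted ellipse
    """
    P_sub = P_k[np.ix_(idx, idx)]                    # latitude/longitude block
    d2 = innovation @ np.linalg.solve(P_sub, innovation)
    return d2 <= gate        # inside the ellipse -> use as an observation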

Even when a particle filter is used, a similar rejection mechanism can be configured; by rejecting abnormal values, estimation with higher reliability becomes possible.

The elements of the covariance matrix R of the sensor noise corresponding to the GNSS sensor 1 preferably vary according to the positioning state of the GNSS sensor 1, such as standalone positioning, DGPS, Float solution, and Fix solution.

The elements of the covariance matrix R of the sensor noise corresponding to the land object observation value may differ for each land object type depending on the performance of the land object detection unit 3, and therefore preferably vary for each land object type.

Here, returning to the description of the flowchart in fig. 5, the processing in step S104 will be described. The estimated value x_e,k of the state vector obtained in step S103 is taken as the state vector x_e, which is defined by the following equation (48).

[ mathematical formula 48 ]

x_e = [λ_e  φ_e  h_e  ψ_e  s_ve  s_γe  b_γe]^T …(48)
In expression (48), λ_e, φ_e, h_e, and ψ_e are the estimated values of the latitude, longitude, altitude, and azimuth, respectively, and s_ve, s_γe, and b_γe are the estimated values of the vehicle speed scale factor, the yaw rate scale factor, and the yaw rate offset.

When y_e := [λ_e  φ_e  h_e  ψ_e]^T is set, the positioning result y_out output from the vehicle positioning device 20 is expressed by the following equation (49).

[ math figure 49]

y_out = y_e …(49)

In addition, the autonomous sensor error e_sensor, expressed by the following equation (50), is input to the sensor correction unit 14.

[ mathematical formula 50]

e_sensor = [s_ve  s_γe  b_γe]^T …(50)

By mounting the vehicle positioning device 20 having the above-described configuration on the vehicle, positioning accuracy can be maintained even in urban and suburban areas where multipath is likely to occur, and the range of situations in which positioning can be continued can be expanded.

In the present embodiment, the positioning calculation results of the GNSS sensor 1, such as latitude, longitude, altitude, and azimuth, are used as its observation values. However, the GNSS sensor 1 may also output raw data such as pseudorange observation values, Doppler observation values, and carrier phase observation values, and in that case any one, several, or all of them may be used as observation values. The drift of the receiver clock of the GNSS sensor 1 must then be added to the state variables and its error estimated, but an observation value from the GNSS sensor 1 can be generated even when the number of visible satellites is small (for example, only 1 satellite). Such a technique is called tight coupling, and by combining it with the observation values of the land object detection unit 3 as in the present embodiment, positioning with higher accuracy can be performed.

In the present embodiment, a model that moves substantially in a plane is used as the inertial positioning model; however, when 3-axis accelerations and angular velocities are used as the autonomous sensors 6, an inertial positioning model in which the roll angle and pitch angle of the vehicle vary can be constructed, enabling more accurate positioning and attitude estimation.

In the present embodiment, the relative distance and the relative angle are used as the land object observation values, but depending on the land object detection processing, a relative speed with respect to the land object may also be output. In that case, the relative speed is added to the land object observation values, so positioning with higher accuracy can be performed.

In addition, although the present embodiment shows an estimation method based on the extended Kalman filter, when the noise in the system is not Gaussian, a particle filter can be applied, and positioning and sensor error estimation can still be performed with high accuracy.

Further, although only the positioning result is used here as the output of the vehicle positioning device 20, the corrected sensor values may also be output.

< embodiment 2>

In embodiment 1 described above, positioning is performed using the land object observation values; however, in urban areas and suburbs where multipath is likely to occur, the positioning calculation and the estimation of the sensor correction amount may instead be performed using road alignment information.

Fig. 7 is a functional block diagram showing the configuration of a vehicle positioning device 20A according to embodiment 2 of the present invention. In fig. 7, the same components as those of the vehicle positioning device 20 described with reference to fig. 1 are denoted by the same reference numerals, and redundant description thereof is omitted.

As shown in fig. 7, the vehicle positioning device 20A is configured to receive road alignment data from the road alignment detecting unit 2 (the 4 th sensor) instead of the land object detecting unit 3 of the vehicle positioning device 20 shown in fig. 1.

The road alignment detection unit 2 has a function of detecting the road alignment using a camera or the like attached to the vehicle and outputting the detection result. In general, the camera is attached to the front of the vehicle, the left and right lane lines ahead of the vehicle are detected by image processing, and the lane line data is input to the observation value processing unit 12 as road alignment data.

The road alignment calculating unit 11 has the following functions: the road alignment in the vicinity of the vehicle is calculated using the road information obtained from the road information storage unit 5, and is output to the observation value prediction unit 16 as road alignment information.

< action >

Next, the overall processing flow of the vehicle positioning device 20A will be described with reference to the flowchart shown in fig. 8. In fig. 8, the same processing as in the flowchart described with reference to fig. 5 is denoted by the same reference numerals, except that steps S121 and S122 are added, and redundant description is omitted.

When the vehicle positioning device 20A starts positioning, first, an initial value of inertial positioning and a current inertial positioning result used in the observation value prediction unit 16 are acquired (step S100).

Next, the vehicle positioning device 20A determines whether or not data is obtained from an external sensor such as the GNSS sensor 1 or the road alignment detection unit 2 (step S101). Although the external sensors have different detection targets, the process proceeds to step S102 when there is at least one sensor from which a sensor value is obtained (yes), and proceeds to step S131 when there is none (no).

When the data from the external sensor cannot be obtained, the vehicle positioning device 20A outputs the inertial positioning result obtained in the inertial positioning calculation of step S113 as the positioning result of the vehicle positioning device 20A in step S131.

In step S102, the observation value processing unit 12 processes the sensor value obtained by the external sensor so as to be usable by the filter 13 in the next step S103, and outputs the processed sensor value as an actual observation value.

Here, the processing for the observation value obtained by the road alignment detection unit 2 in the observation value processing unit 12 will be described. Note that the processing for the observation value obtained by the GNSS sensor 1 is the same as that in embodiment 1, and therefore, the description thereof is omitted.

< processing for observed value in road alignment detection section >

The road alignment detection unit 2 detects the lane lines to the left and right of the vehicle using a camera mounted at the front of the vehicle. As the detection results for the left and right lane lines, the road alignment detection unit 2 outputs the coefficients of cubic functions, which are expressed in the vehicle coordinate system by the following expressions (51) and (52), respectively.

[ mathematical formula 51]

y_b = C_3r·x_b^3 + C_2r·x_b^2 + C_1r·x_b + C_0r …(51)

[ math 52]

y_b = C_3l·x_b^3 + C_2l·x_b^2 + C_1l·x_b + C_0l …(52)

In the present embodiment, the road alignment at the center of the lane is used as the representative value. Therefore, the observation value processing unit 12 performs the processing expressed by the following equation (53) on the detection results.

[ mathematical formula 53]

y_b = C_3c·x_b^3 + C_2c·x_b^2 + C_1c·x_b + C_0c …(53)

In expression (53), C_3c = (C_3r + C_3l)/2, C_2c = (C_2r + C_2l)/2, C_1c = (C_1r + C_1l)/2, and C_0c = (C_0r + C_0l)/2.

In the present embodiment, the lateral position deviation y_m, the deviation angle θ_m, and the curvature κ_m at the point where x_b = 0 are used as the observation values detected by the road alignment detection unit 2. They are obtained from the road alignment at the center of the lane by the following expressions (54), (55), and (56), respectively.

[ math formula 54]

y_m = C_0c …(54)

[ math figure 55]

θ_m = atan(C_1c) …(55)

[ math figure 56]

κ_m = 2·C_2c …(56)

FIG. 9 is a diagram illustrating the road alignment, showing the lateral position deviation y_m, the deviation angle θ_m, and the curvature κ_m with respect to the position of the vehicle OV, represented in the vehicle navigation coordinate system and the NED coordinate system. In addition, the right lane line represented by expression (51) and the left lane line represented by expression (52) are schematically shown.
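
As a concrete illustration of expressions (53) to (56), the following sketch (a hypothetical helper, not part of the embodiment) averages the left and right lane-line coefficients and evaluates the lane-center observation values at the point x_b = 0:

import math

def lane_center_observation(c_right, c_left):
    """Compute (y_m, theta_m, kappa_m) from the cubic lane-line coefficients.

    c_right, c_left : coefficients [C3, C2, C1, C0] of expressions (51), (52)
    """
    c3, c2, c1, c0 = ((r + l) / 2.0 for r, l in zip(c_right, c_left))  # (53)
    y_m = c0                     # (54): lateral position deviation at x_b = 0
    theta_m = math.atan(c1)      # (55): deviation angle
    kappa_m = 2.0 * c2           # (56): curvature
    return y_m, theta_m, kappa_m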

< processing in road alignment calculating section >

Here, returning to the description of the flowchart of fig. 8, step S121 will be described. In parallel with steps S101 and S111, the vehicle positioning device 20A acquires road information around the vehicle from the road information storage unit 5 (step S121). As shown in fig. 3, the road information storage unit 5 stores, for each section into which the map data is divided, information on land objects actually present on the road and on virtual land objects, and stores, as virtual land object information, lane links, nodes on the lane links, hatched regions, and the like. In addition, information such as the lane width, the longitudinal gradient, and the lateral gradient of the lane may be included.

After acquiring the road information in step S121, the process proceeds to step S122. In step S122, the road alignment calculating unit 11 calculates the road alignment information around the vehicle, which is subsequently used by the observation value predicting unit 16. In the present embodiment, a method is described in which the absolute position coordinates of the center of the lane are held as map nodes in the road information storage unit 5 and the road alignment is calculated using this information.

Fig. 10 is a flowchart illustrating the processing in step S122. The processing of step S122 will be described below with reference to fig. 10.

First, in step S201, map node data is acquired from the road information storage unit 5. Here, as the map node data, the absolute position coordinates of the center of the lane where the vehicle is located are obtained as latitude, longitude, and altitude. Hereinafter, the map node data of a plurality of points in the vicinity of the vehicle is referred to as a map point group.

Next, in step S202, function approximation is performed on the map point group obtained in step S201. Specifically, the map point group is first expressed as a curve passing through each node from the node at the start position to the node at the end position, and the road alignment is obtained from this curve. That is, the map point group is expressed as a road alignment using an arc length parameter s such that s = 0 at the start node and s = ℓ at the end node. Since s at each node position can be obtained from the latitude, longitude, and altitude of the map node data, the absolute position coordinates (latitude, longitude, and altitude) at each node of the map point group can be expressed as a curve in s by interpolation processing such as function approximation. Examples of function approximation methods include Bezier curves and spline curves. In this embodiment, approximation by a 3rd-order polynomial is described.

Fig. 11 is a conceptual diagram showing a map point group and the function-approximated road alignment. When the absolute position coordinates of the road center given by the function-approximated map point group are expressed in ECEF coordinates (X, Y, Z) as the functions l_X(s), l_Y(s), and l_Z(s), they can be written as 3rd-order polynomials as in the following expressions (57), (58), and (59).

[ math formula 57]

l_X(s): X = C_3X·s^3 + C_2X·s^2 + C_1X·s + C_0X …(57)

[ math figure 58]

l_Y(s): Y = C_3Y·s^3 + C_2Y·s^2 + C_1Y·s + C_0Y …(58)

[ mathematical formula 59]

l_Z(s): Z = C_3Z·s^3 + C_2Z·s^2 + C_1Z·s + C_0Z …(59)
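
A minimal sketch of this approximation step (the conversion of the map nodes' latitude, longitude, and altitude into ECEF coordinates is assumed to have been done already; the chord length between nodes is used as an approximation of the arc length parameter s):

import numpy as np

def fit_road_alignment(ecef_points):
    """Fit the 3rd-order polynomials of expressions (57)-(59) to a map point group.

    ecef_points : (N, 3) array of lane-center node positions in ECEF (X, Y, Z)
    Returns one coefficient array (highest order first) per axis, i.e. the
    approximations of l_X(s), l_Y(s), and l_Z(s).
    """
    pts = np.asarray(ecef_points, dtype=float)
    # Arc length parameter: s = 0 at the start node, cumulative chord length after.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))
    return [np.polyfit(s, pts[:, axis], deg=3) for axis in range(3)]

Each returned coefficient array can then be evaluated with np.polyval(c, s) to reproduce l_X(s), l_Y(s), or l_Z(s).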

As another method of calculating the road alignment, a method based on the lane links and node information, for example the technique disclosed in Japanese Patent Application Laid-Open No. 2005-313803, can be used.

Next, in step S203, the 3rd-order approximation coefficients of the road alignment obtained in step S202 are output as the calculation result.

The road alignment information calculated in step S122 is supplied to the observation value prediction unit 16, and the processing of step S114 is performed. That is, in step S114, the predicted observation value is calculated using the inertial positioning result obtained by the inertial positioning unit 15 and the road alignment information obtained by the road alignment calculation unit 11. Since the prediction of the observation value of the GNSS sensor 1 is the same as in embodiment 1, the prediction of the observation value of the road alignment detection unit 2 is described below.

< processing in observed value prediction section >

< prediction of observed value for road alignment detection section >

The observation values obtained by the road alignment detection unit 2 are the lateral position deviation, the deviation angle, and the curvature. These can be predicted from the inertial positioning result and the road alignment around the vehicle obtained by the road alignment calculation unit 11. Specifically, as shown in fig. 9, coordinate conversion is used to transform the road alignment (l_X(s), l_Y(s), l_Z(s)) obtained by the road alignment calculation unit 11 into a road alignment (l_nX(s), l_nY(s), l_nZ(s)) expressed in the vehicle navigation coordinate system, i.e., x_b = l_nX(s), y_b = l_nY(s), z_b = l_nZ(s). Further, the value of s at which x_b = 0 is calculated analytically or numerically. Denoting this value of s by s_c, the predicted lateral position deviation, deviation angle, and curvature are obtained as the following expressions (60), (61), and (62), respectively. These can be obtained analytically or by numerical calculation.

[ mathematical formula 60 ]

y_p = l_nY|s=s_c …(60)

[ mathematical formula 61 ]

θ_p = atan(dy_b/dx_b)|s=s_c …(61)

[ mathematical formula 62 ]

κ_p = (d²y_b/dx_b²)|s=s_c …(62)
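
A purely numerical sketch of this prediction is shown below (illustrative names; c_x and c_y are assumed to be the polynomial coefficients of x_b(s) and y_b(s) after conversion into the vehicle navigation coordinate system, and the derivatives with respect to x_b are obtained from the s-derivatives by the chain rule):

import math
import numpy as np

def predict_road_observation(c_x, c_y):
    """Numerically predict (y_p, theta_p, kappa_p) at the point where x_b = 0.

    c_x, c_y : polynomial coefficients (highest order first) of the road
               alignment x_b(s), y_b(s) in the vehicle navigation frame
    """
    roots = np.roots(c_x)                        # solve x_b(s) = 0
    real = roots[np.abs(roots.imag) < 1e-9].real
    if real.size == 0:
        raise ValueError("x_b(s) = 0 has no real solution on this segment")
    s_c = real[np.argmin(np.abs(real))]          # root nearest the vehicle

    y_p = np.polyval(c_y, s_c)                   # (60): lateral position deviation
    dx = np.polyval(np.polyder(c_x), s_c)        # dx_b/ds at s_c
    dy = np.polyval(np.polyder(c_y), s_c)        # dy_b/ds at s_c
    ddx = np.polyval(np.polyder(c_x, 2), s_c)
    ddy = np.polyval(np.polyder(c_y, 2), s_c)
    theta_p = math.atan(dy / dx)                 # (61): dy_b/dx_b via chain rule
    kappa_p = (ddy * dx - ddx * dy) / dx**3      # (62): d2y_b/dx_b2 via chain rule
    return y_p, theta_p, kappa_p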

< processing in Filter >

Here, returning to the description of the flowchart of fig. 8, the processing of step S103 will be described. In the filter 13, the filter positioning calculation and the estimation of the error of the autonomous sensor 6 are performed using the actual observation value obtained in step S102 and the predicted observation value obtained in step S114. The point that differs from embodiment 1 is the observation equation.

The road alignment-based observation equation z_road is expressed by the following equation (63).

[ math figure 63]

z_road = [y_p  θ_p  κ_p]^T …(63)

< case where only GNSS sensor can observe >

When the road alignment cannot be detected and the observed value by the GNSS sensor 1 is good, the output vector z is expressed by the following expression (64).

[ mathematical formula 64 ]

z = z_GNSS …(64)
< case where only the road alignment detection unit can observe >

When the reliability of the observation value by the GNSS sensor 1 is low in an urban area or the like and when the road alignment is detected, the output vector z is expressed by the following expression (65).

[ mathematical formula 65 ]

z = z_road …(65)
< case where observation is possible with GNSS sensor and road alignment detection unit >

When the observation value of the GNSS sensor 1 is good and the road alignment is detected, the output vector z is expressed by the following expression (66).

[ mathematical formula 66 ]

z = [z_GNSS^T  z_road^T]^T …(66)
In each case, by performing the same processing as in embodiment 1, the positioning calculation and the estimation of the correction amount of the autonomous sensor can be performed.

In the above, the point where x_b = 0 is used for the road alignment observation values, but any point may be selected as x_b. Further, not only one point but a plurality of points may be used. For example, the point where x_b = 0 and the point where x_b = 10 may be used at the same time. In this case, the number of observation values regarding the road alignment increases, so more accurate positioning can be performed.

Further, although the lateral position deviation, the deviation angle, and the curvature are used here as the road alignment observation values, when the road alignment observation values are approximated by a higher-order curve, the curvature change rate or the like can be added as an observation value. In this case, the number of observation values increases, so positioning with higher accuracy can be performed.

As described above, by using the road alignment detection unit 2 and the road information storage unit 5, the positioning calculation and the estimation of the sensor correction amount can be performed. With such a configuration, as in the case of using land objects, the range of situations in which positioning can be continued can be expanded.

< embodiment 3>

The above-described embodiments 1 and 2 may be combined so that positioning is performed using both the land object observation values and the road alignment information.

< device Structure >

Fig. 12 is a functional block diagram showing the structure of a vehicle positioning device 20B according to embodiment 3 of the present invention. In fig. 12, the same components as those of the vehicle positioning devices 20 and 20A described with reference to fig. 1 and 7 are denoted by the same reference numerals, and redundant description thereof is omitted.

As shown in fig. 12, the vehicle positioning device 20B is configured to receive road alignment data from the road alignment detection unit 2 and to receive data on the relative relationship between the land object and the vehicle from the land object detection unit 3.

< action >

Since the overall process flow of the vehicle positioning device 20B is the same as that of fig. 8, only the differences from embodiments 1 and 2 will be described with reference to fig. 8.

When the vehicle positioning device 20B starts positioning, first, the initial value of inertial positioning and the current inertial positioning result used in the observation value prediction unit 16 are acquired (step S100).

Next, the vehicle positioning device 20B determines whether data is obtained from an external sensor such as the GNSS sensor 1, the road alignment detection unit 2, or the land object detection unit 3 (step S101). Although the external sensors have different detection targets, the process proceeds to step S102 when there is at least one sensor from which a sensor value is obtained (yes), and proceeds to step S131 when there is none (no).

When the data from the external sensor cannot be obtained, the vehicle positioning device 20B outputs the inertial positioning result obtained in the inertial positioning calculation of step S113 as the positioning result of the vehicle positioning device 20B in step S131.

In step S102, the observation value processing unit 12 processes the sensor value obtained by the external sensor so as to be usable by the filter 13 in the next step S103, and outputs the processed sensor value as an actual observation value.

Here, the processing of the observation values obtained by the GNSS sensor 1, the road alignment detector 2, and the land object detector 3 in the observation value processor 12 is the same as that in embodiments 1 and 2, and therefore, the description thereof is omitted.

< processing in Filter >

Here, returning to the description of the flowchart of fig. 8, the processing of step S103 will be described. In the filter 13, the filter positioning calculation and the estimation of the error of the autonomous sensor 6 are performed using the actual observation value obtained in step S102 and the predicted observation value obtained in step S114. The point that differs from embodiments 1 and 2 is the observation equation. The observation equation in the filter 13 can be expressed as follows according to each situation.

< case where only GNSS sensor can observe >

When no land object is present around the vehicle, the road alignment cannot be detected, and only the observation value from the GNSS sensor 1 is good, the output vector z is expressed by the following expression (67).

[ mathematical formula 67 ]

z = z_GNSS …(67)
< case where only the land object detecting section can observe >

When the reliability of the observation value from the GNSS sensor 1 is low, such as in an urban area, the road alignment cannot be detected, and only a land object can be detected, the output vector z is expressed by the following expression (68).

[ mathematical formula 68 ]

z = z_land …(68)
< case where only the road alignment detection unit can observe >

When no land object is present in the vicinity of the vehicle, the reliability of the observation value from the GNSS sensor 1 is low, such as in an urban area, and only the road alignment is detected, the output vector z is expressed by the following expression (69).

[ mathematical formula 69 ]

z = z_road …(69)
< case where observation is possible with GNSS sensor and land object detection unit >

When the road alignment cannot be detected, the observation value of the GNSS sensor 1 is good, and a land object is detected, the output vector z is expressed by the following expression (70).

[ mathematical formula 70 ]

z = [z_GNSS^T  z_land^T]^T …(70)
< case where observation is possible with GNSS sensor and road alignment detection unit >

When no land object is present in the vicinity of the vehicle, the observation value of the GNSS sensor 1 is good, and the road alignment is detected, the output vector z is expressed by the following expression (71).

[ mathematical formula 71 ]

z = [z_GNSS^T  z_road^T]^T …(71)
< case where observation is possible with the ground object detection unit and the road alignment detection unit >

When the reliability of the observation value from the GNSS sensor 1 is low, such as in an urban area, and both the road alignment and a land object are detected, the output vector z is expressed by the following expression (72).

[ mathematical formula 72 ]

z = [z_land^T  z_road^T]^T …(72)
< case where observation is possible with GNSS sensor, road alignment detection unit, and land object detection unit >

When the observation value of the GNSS sensor 1 is good and the road alignment and the land object are detected, the output vector z is expressed by the following expression (73).

[ mathematical formula 73 ]

z = [z_GNSS^T  z_land^T  z_road^T]^T …(73)
By using the observation equation corresponding to each situation described above, the filter 13 can perform filter positioning calculation and estimation of the error of the autonomous sensor 6.

As described above, by using the road alignment detection unit 2, the land object detection unit 3, the land object information storage unit 4, and the road information storage unit 5, the positioning calculation and the estimation of the sensor correction amount can be performed. With this configuration, a vehicle positioning device can be realized that expands the range of situations in which positioning can be continued and suppresses positioning jumps and the like.

< hardware Structure >

The respective components of the vehicle positioning devices 20, 20A, and 20B according to embodiments 1 to 3 described above can be configured using a computer and are realized by executing a program on the computer. In other words, the vehicle positioning devices 20 to 20B are realized by, for example, the processing circuit 50 shown in fig. 13. The processing circuit 50 is a processor such as a CPU or a DSP (Digital Signal Processor), and the functions of each unit are realized by executing a program stored in a storage device.

In addition, dedicated hardware may be applied as the processing circuit 50. When the processing Circuit 50 is dedicated hardware, the processing Circuit 50 may be a single Circuit, a composite Circuit, a programmed processor, a parallel programmed processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or a Circuit in which these are combined, for example.

The functions of the constituent elements of the vehicle positioning devices 20 to 20B may each be realized by an individual processing circuit, or may be realized collectively by a single processing circuit.

Fig. 14 shows a hardware configuration in the case where the processing circuit 50 is configured using a processor. In this case, the functions of the respective parts of the vehicle positioning devices 20 to 20B are realized by a combination of the processor with software or the like (software, firmware, or both). The software and the like are described as programs and stored in the memory 52. The processor 51 functioning as the processing circuit 50 reads and executes the programs stored in the memory 52 (storage device), thereby realizing the functions of each part. That is, the programs can be said to cause the computer to execute the procedures and methods of operation of the components of the vehicle positioning devices 20 to 20B.

Here, the memory 52 may be a nonvolatile or volatile semiconductor memory such as a RAM, a ROM, a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory), an HDD (Hard Disk Drive), a magnetic disk, a flexible disk, an optical disk, a compact disc, a MiniDisc, a DVD (Digital Versatile Disc), a drive device thereof, or any storage medium to be used in the future.

The above description has been given of the configuration in which the functions of the respective constituent elements of the vehicle positioning devices 20 to 20B are realized by either hardware, software, or the like. However, the present invention is not limited to this, and some of the components of the vehicle positioning devices 20 to 20B may be realized by dedicated hardware, and the other components may be realized by software or the like. For example, some of the components may be realized by the processing circuit 50 as dedicated hardware, and some of the components may be realized by the processing circuit 50 as the processor 51 reading and executing a program stored in the memory 52.

As described above, the vehicle positioning devices 20 to 20B can realize the above-described functions by hardware, software, or the like, or a combination of these.

The present invention has been described in detail, but the above description is illustrative in all aspects, and the present invention is not limited thereto. It is understood that numerous modifications, not illustrated, can be devised without departing from the scope of the invention.

In addition, within the scope of the invention, the embodiments can be freely combined, and each embodiment can be appropriately modified or omitted.
