Indoor autonomous navigation AGV (automated guided vehicle) motion trajectory fusion method based on an LSTM (long short-term memory) model

Document No.: 1671162    Publication date: 2019-12-31

Reading note: this technology, "Indoor autonomous navigation AGV motion trajectory fusion method based on an LSTM model", was designed and created by 全燕鸣, 黄令苇, 马磊, 何一明 and 全思博 on 2019-10-28. Its main content is as follows: the invention discloses an indoor autonomous navigation AGV motion trajectory fusion method based on an LSTM model. The method first collects, during one movement of the AGV, the data of the odometer, of the inertial measurement unit, and of the Kalman-filter fusion of the odometer and inertial measurement unit; after processing, AGV angle data are obtained. The angle data are grouped into a training set and a test set, the constructed LSTM network is trained with the training set, and the training result is tested with the test set. After the trained model is deployed online, the angle data obtained from the odometer and the inertial measurement unit are fed into the model in real time to obtain the real-time AGV angle, and the motion trajectory of the AGV is calculated by combining this angle with the odometer. On the basis of guaranteeing accuracy, the invention does not require the statistical characteristics of the noise to be determined, is not easily affected by the external environment, saves time cost, and does not require the sensor data to be synchronized when the sensor data are input.

1. An indoor autonomous navigation AGV motion trajectory fusion method based on an LSTM model, characterized by comprising the following steps:

s1, collecting data:

firstly, setting a reference coordinate system: taking the AGV starting position as the origin, the heading direction at the start as the positive x axis, the leftward direction of the head at the start as the positive y axis, the direction vertically upward from the ground as the positive z axis, and counterclockwise rotation of the AGV as positive; controlling the AGV to move freely within an open area at a certain linear and angular velocity, and ensuring that by the end of the movement the AGV has rotated at least 360 degrees about the z axis; collecting the AGV yaw angle measured by the odometer odom, namely the angle of rotation of the AGV about the z axis, hereinafter referred to as yaw, and calling the yaw measured by odom odom_yaw; collecting the yaw measured by the inertial measurement unit imu, and calling the yaw measured by imu imu_yaw; simultaneously fusing odom and imu by a Kalman filtering fusion method to obtain a fused odometer, hereinafter referred to as ekf_combined, collecting the yaw measured by ekf_combined as the expected value of the fusion result, and calling the yaw measured by ekf_combined ekf_combined_yaw;

s2, making a data set and dividing a training set and a testing set:

respectively storing odom_yaw, imu_yaw and ekf_combined_yaw into 3 arrays to obtain 3 groups of data, wherein group 1 is odom_yaw, group 2 is imu_yaw, and group 3 is ekf_combined_yaw; in time order, taking the data at the same position in groups 1 and 2 together with the data at the next position in group 3 as 1 batch, taking the first 70-85% of all batches as the training set, and the remainder as the test set;

s3, establishing an LSTM model:

the adopted LSTM model comprises 1 input layer, 2 hidden layers, 1 Dropout layer and 1 fully-connected layer; the training data pass sequentially through the input layer, the 1st hidden layer, the 2nd hidden layer, the Dropout layer and the fully-connected layer, and finally the fused data are output to obtain lstm_combined_yaw;

s4, training and saving the LSTM model:

the activation function adopted by each hidden layer is ReLU, the optimizer adopts the Adam method, and the loss function adopts the mean squared error (MSE); the initial learning rate is 0.001; the training data are input to train the LSTM model, and the LSTM model is saved to a specified path after training is finished;

s5, deploying the model online and fusing data in real time:

respectively establishing a server and a client using the gRPC framework; the server deploys the trained model online, and the client receives the odom_yaw and imu_yaw to be fused as input and outputs lstm_combined_yaw;

and S6, calculating the motion trajectory of the AGV according to the AGV displacement measured in real time by odom and the lstm_combined_yaw output in real time by the LSTM model.

2. The indoor autonomous navigation AGV motion trajectory fusion method based on an LSTM model according to claim 1, wherein step S1 is implemented as follows:

acquiring odom_yaw, imu_yaw and ekf_combined_yaw at the same moment; since the data directly output by odom, imu and ekf_combined are quaternions, the quaternions must be converted into yaw; a quaternion is a way of representing the rotation of an object in three-dimensional space and comprises four parameters: x, y, z, w, where x represents rotation of the object about the x axis, y represents rotation of the object about the y axis, and z represents rotation of the object about the z axis, subject to the following constraint:

$x^2 + y^2 + z^2 + w^2 = 1$

the quaternion is converted to yaw by the following calculation:

$\mathrm{yaw} = \arctan\left\{ 2(w \times z + x \times y) \div \left[ 1 - 2(y^2 + z^2) \right] \right\}$

wherein x, y, z, w are 4 parameters of the quaternion measured by the sensors, i.e., x represents the rotation of the AGV about the x-axis, y represents the rotation of the AGV about the y-axis, and z represents the rotation of the AGV about the z-axis.

3. The indoor autonomous navigation AGV motion trajectory fusion method based on an LSTM model according to claim 1, wherein the specific composition of one batch in step S2 is as follows:

a batch of data contains 1 odom _ yaw value, 1 imu _ yaw value, and 1 ekf _ combined _ yaw value, expressed as:

$\mathrm{ODOM\_YAW}_i = \{\mathrm{odom\_yaw}_t\}$

$\mathrm{IMU\_YAW}_i = \{\mathrm{imu\_yaw}_t\}$

$\mathrm{EKF\_COMBINED\_YAW}_i = \{\mathrm{ekf\_combined\_yaw}_{t+\Delta t}\}$

where $t$ is the time at which the first data in this group of data are collected, $\Delta t$ is the collection interval, $\mathrm{ODOM\_YAW}_i$ is the odom_yaw in the i-th batch, $\mathrm{IMU\_YAW}_i$ is the imu_yaw in the i-th batch, and $\mathrm{EKF\_COMBINED\_YAW}_i$ is the ekf_combined_yaw in the i-th batch.

4. The method of claim 1, wherein the input layer of the LSTM model established in step S3 comprises 2 neurons, which respectively receive the odom_yaw and imu_yaw of the previous moment; the output layer comprises only 1 neuron and outputs lstm_combined_yaw; each hidden layer comprises 128 LSTM units; in addition to the neuron, each LSTM unit comprises a forget gate, an input gate and an output gate, wherein the forget gate determines the amount of information discarded from the neuron, the input gate determines the amount of information stored into the neuron, and the output gate determines the amount of information output from the neuron;

the key formula for the LSTM cell is as follows:

$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$

$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$

$C_{t0} = \tanh(W_c \cdot [h_{t-1}, x_t] + b_c)$

$C_t = f_t \times C_{t-1} + i_t \times C_{t0}$

$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$

$h_t = o_t \times \tanh(C_t)$

where $f_t, i_t, C_{t0}, C_t, o_t, h_t, h_{t-1}$ respectively denote the forget gate value, the input gate value, the candidate neuron state, the current neuron state, the output gate value, the output of the current neuron and the output of the neuron at the previous moment; $W_f, W_i, W_c, W_o$ respectively denote the weight matrices of the forget gate, the input gate, the neuron and the output gate; $b_f, b_i, b_c, b_o$ respectively denote the bias vectors of the forget gate, the input gate, the neuron and the output gate; $\tanh$ is an activation function; $\sigma$ is the sigmoid function.

5. The indoor autonomous navigation AGV motion trajectory fusion method based on an LSTM model according to claim 1, wherein in step S4 the hidden layer activation function ReLU is:

Relu(x)=max(0,x)

the loss function is the mean square error, which is calculated as:

$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - y'_i\right)^2$

where $y_i$ is the input training value, $y'_i$ is the predicted value output by the LSTM, and $n$ is the total number of data batches; 100 rounds of training are performed, each round comprising 72 batches; the Google open-source Keras deep learning framework is adopted for training, and the model is saved to the specified path after training is completed.

6. The indoor autonomous navigation AGV motion trajectory fusion method based on an LSTM model according to claim 1, wherein step S5 is implemented as follows:

the server deploys the saved LSTM model online and waits for the client to send requests to it; the AGV is controlled to travel one loop while the data measured by odom and imu are processed in real time; once odom_yaw and imu_yaw are obtained, they are sent to the server through the client; after receiving the client request, the server responds and returns the output result lstm_combined_yaw.

7. The method for fusing an indoor autonomous navigation AGV movement track based on an LSTM model according to claim 1, wherein the process of calculating the AGV movement track in step S6 is as follows:

firstly, the displacement measured by the odom is calculated, and the calculation formula is as follows:

$\Delta x = \mathrm{odom}_{X1} - \mathrm{odom}_{X0}$

$\Delta y = \mathrm{odom}_{Y1} - \mathrm{odom}_{Y0}$

$\Delta s = \sqrt{\Delta x^2 + \Delta y^2}$

where $\mathrm{odom}_{X1}$, $\mathrm{odom}_{Y1}$ denote the AGV position measured by odom at the current moment, $\mathrm{odom}_{X0}$, $\mathrm{odom}_{Y0}$ denote the AGV position measured by odom at the previous moment, and $\Delta x$, $\Delta y$, $\Delta s$ respectively denote the displacement of the AGV in the x direction, the displacement of the AGV in the y direction, and the straight-line distance between the two moments as measured by odom;

and combining the lstm _ combined _ yaw to obtain the position of the AGV in the coordinate system after fusion, wherein the calculation method comprises the following steps:

ΔX=Δs×cos(lstm_combined_yaw)

ΔY=Δs×sin(lstm_combined_yaw)

lstm_combined_x=lstm_combined_x+ΔX

lstm_combined_y=lstm_combined_y+ΔY

ΔX and ΔY respectively denote the displacements of the AGV in the x and y directions when it moves at the angle lstm_combined_yaw; lstm_combined_x and lstm_combined_y respectively denote the x and y coordinates of the AGV in the coordinate system after fusion, and both are initialized to 0 when the AGV is started.

Technical Field

The invention relates to the technical field of indoor AGV information perception and data fusion, in particular to an indoor autonomous navigation AGV motion track fusion method based on an LSTM model.

Background

For an automated guided vehicle capable of indoor autonomous navigation, hereinafter referred to as an indoor autonomous navigation AGV (Automated Guided Vehicle), it is important to ensure that it can maintain a normal working state over long periods, and above all that it can accurately localize itself. However, it is difficult to accurately estimate the AGV state using only its own internal odometer: the odometer estimates the AGV position from the number of wheel revolutions, but exhibits a certain deviation due to wheel slip and accumulated error. Therefore, multiple sensors need to cooperate to achieve accurate estimation of the AGV's own position, and a very commonly used sensor for this purpose is the inertial measurement unit.

The main data fusion methods include the weighted average method, the Kalman filtering method, the multi-Bayesian estimation method, the D-S evidence reasoning method, the production rule method, the fuzzy logic reasoning method, the artificial neural network method, and so on. The Kalman filtering method can be applied to any dynamic system containing uncertain information to make a well-founded prediction of the system's next state, so at present people often use Kalman filtering to fuse the information acquired from sensors and estimate the attitude and position of the AGV.

Although the fusion result of the Kalman filtering method is relatively accurate, the method still has certain limitations. When it is used, the statistical characteristics of the noise must be determined in advance, and in a complex and changeable environment these statistical characteristics are easily altered by the external environment. Therefore, when equipment is replaced or the external environment changes, the statistical characteristics of the noise often have to be re-determined experimentally, which consumes a large amount of time and is disadvantageous for scientific research or the practical use of products.

Most traditional artificial neural network fusion methods are based on shallow network models such as the BP neural network and SOFM. These models are prone to overfitting, to training getting stuck in local minima, to overly slow convergence and to other problems, which reduce algorithm efficiency, weaken feature extraction and classification capability, and prevent effective processing of high-dimensional data. However, with the rapid development of artificial intelligence algorithms in recent years, artificial neural network methods have come back into view: with such a method the statistical characteristics of the noise need not be determined in advance, the implementation process is simple, and the adaptability of the system to different environments is improved.

Disclosure of Invention

The invention aims to reduce the time cost and improve the adaptability of a system to the environment, and provides an indoor autonomous navigation AGV motion track fusion method based on an LSTM model.

The purpose of the invention can be achieved by adopting the following technical scheme:

an indoor autonomous navigation AGV movement track fusion method based on an LSTM model comprises the following steps:

s1, collecting data:

firstly, a reference coordinate system is set: the AGV starting position is taken as the origin, the heading direction at the start is the positive x axis, the leftward direction of the head at the start is the positive y axis, the direction vertically upward from the ground is the positive z axis, and counterclockwise rotation of the AGV is positive. The AGV is controlled to move freely within an open area at a certain linear and angular velocity, and it is ensured that by the end of the movement the AGV has rotated at least 360 degrees about the z axis. The AGV yaw angle (i.e. the angle of rotation of the AGV about the z axis, hereinafter referred to as yaw) measured by the odometer (hereinafter referred to as odom) is collected, and the yaw measured by odom is called odom_yaw; the yaw measured by the inertial measurement unit (hereinafter referred to as imu) is collected, and the yaw measured by imu is called imu_yaw; meanwhile, odom and imu are fused by a Kalman filtering fusion method to obtain a fused odometer (hereinafter referred to as ekf_combined), the yaw measured by ekf_combined is collected as the expected value of the fusion result, and the yaw measured by ekf_combined is called ekf_combined_yaw;
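The following is a minimal, illustrative sketch of the data collection in step S1, assuming a ROS-based AGV in which odom publishes nav_msgs/Odometry on /odom, imu publishes sensor_msgs/Imu on /imu/data, and the Kalman fusion node publishes geometry_msgs/PoseWithCovarianceStamped on /robot_pose_ekf/odom_combined; these topic names, the 10 Hz collection rate and the CSV output path are assumptions, not details given in the text.

```python
# Hypothetical ROS 1 (rospy) sketch for collecting odom_yaw, imu_yaw and ekf_combined_yaw.
# Topic names, message types and the output path are assumptions; adapt to the actual platform.
import csv
import math
import rospy
from nav_msgs.msg import Odometry
from sensor_msgs.msg import Imu
from geometry_msgs.msg import PoseWithCovarianceStamped

def quat_to_yaw(q):
    # atan2 form of the conversion in the text: yaw = arctan{2(wz + xy) / [1 - 2(y^2 + z^2)]}
    return math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                      1.0 - 2.0 * (q.y * q.y + q.z * q.z))

latest = {"odom_yaw": None, "imu_yaw": None, "ekf_combined_yaw": None}

def on_odom(msg):
    latest["odom_yaw"] = quat_to_yaw(msg.pose.pose.orientation)

def on_imu(msg):
    latest["imu_yaw"] = quat_to_yaw(msg.orientation)

def on_ekf(msg):
    latest["ekf_combined_yaw"] = quat_to_yaw(msg.pose.pose.orientation)

if __name__ == "__main__":
    rospy.init_node("yaw_collector")
    rospy.Subscriber("/odom", Odometry, on_odom)
    rospy.Subscriber("/imu/data", Imu, on_imu)
    rospy.Subscriber("/robot_pose_ekf/odom_combined", PoseWithCovarianceStamped, on_ekf)
    rate = rospy.Rate(10)                      # assumed collection interval Δt = 0.1 s
    with open("yaw_dataset.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["odom_yaw", "imu_yaw", "ekf_combined_yaw"])
        while not rospy.is_shutdown():
            if None not in latest.values():
                writer.writerow([latest["odom_yaw"], latest["imu_yaw"],
                                 latest["ekf_combined_yaw"]])
            rate.sleep()
```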

s2, making a data set and dividing a training set and a testing set:

respectively storing odom_yaw, imu_yaw and ekf_combined_yaw into 3 arrays to obtain 3 groups of data, wherein group 1 is odom_yaw, group 2 is imu_yaw, and group 3 is ekf_combined_yaw. In time order, the data at the same position in groups 1 and 2 together with the data at the next position in group 3 are taken as 1 batch; the first 70-85% of all batches are taken as the training set, and the remainder as the test set;

s3, establishing an LSTM model:

the adopted LSTM model comprises 1 input layer, 2 hidden layers, 1 Dropout layer and 1 fully-connected layer; the training data pass sequentially through the input layer, the 1st hidden layer, the 2nd hidden layer, the Dropout layer and the fully-connected layer, and finally the fused data are output to obtain lstm_combined_yaw;

s4, training and saving the LSTM model:

the activation function adopted by each hidden layer is ReLU, the optimizer adopts the Adam method, and the loss function adopts the mean squared error (MSE); the initial learning rate is 0.001; the training data are input to train the LSTM model, and the LSTM model is saved to a specified path after training is finished;

s5, deploying the model online and performing real-time data fusion:

respectively establishing a server and a client using the gRPC framework; the server deploys the trained model online, and the client receives the odom_yaw and imu_yaw to be fused as input and outputs lstm_combined_yaw;

and S6, calculating the motion trajectory of the AGV according to the AGV displacement measured in real time by odom and the lstm_combined_yaw output in real time by the LSTM model.

Further, the implementation process of step S1 is as follows:

in order to ensure the quality of the data set, odom_yaw, imu_yaw and ekf_combined_yaw at the same moment are acquired; since the data directly output by odom, imu and ekf_combined are quaternions, the quaternions must be converted into yaw. A quaternion is a way of representing the rotation of an object in three-dimensional space and comprises four parameters: x, y, z, w; x represents rotation of the object about the x axis, y represents rotation of the object about the y axis, and z represents rotation of the object about the z axis, subject to the following constraint:

$x^2 + y^2 + z^2 + w^2 = 1$

the quaternion is converted to yaw by the following calculation:

$\mathrm{yaw} = \arctan\left\{ 2(w \times z + x \times y) \div \left[ 1 - 2(y^2 + z^2) \right] \right\}$

wherein x, y, z, w are 4 parameters of the quaternion measured by the sensors, i.e., x represents the rotation of the AGV about the x-axis, y represents the rotation of the AGV about the y-axis, and z represents the rotation of the AGV about the z-axis.
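As an illustrative numeric check (not from the original text): a rotation of 45° about the z axis corresponds to the quaternion $(x, y, z, w) = (0, 0, \sin 22.5°, \cos 22.5°) \approx (0, 0, 0.3827, 0.9239)$; substituting into the formula gives $\mathrm{yaw} = \arctan\{2(0.9239 \times 0.3827 + 0) \div [1 - 2(0.3827^2)]\} = \arctan(0.7071 \div 0.7071) = \pi/4$, i.e. 45°, as expected.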

Further, the specific composition of one batch in step S2 is described as follows:

a batch of data contains 1 odom _ yaw value, 1 imu _ yaw value, and 1 ekf _ combined _ yaw value, expressed as:

$\mathrm{ODOM\_YAW}_i = \{\mathrm{odom\_yaw}_t\}$

$\mathrm{IMU\_YAW}_i = \{\mathrm{imu\_yaw}_t\}$

$\mathrm{EKF\_COMBINED\_YAW}_i = \{\mathrm{ekf\_combined\_yaw}_{t+\Delta t}\}$

where $t$ is the time at which the first data in this group of data are collected, $\Delta t$ is the collection interval, $\mathrm{ODOM\_YAW}_i$ is the odom_yaw in the i-th batch, $\mathrm{IMU\_YAW}_i$ is the imu_yaw in the i-th batch, and $\mathrm{EKF\_COMBINED\_YAW}_i$ is the ekf_combined_yaw in the i-th batch.
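A minimal sketch of the batch construction and train/test split described above, assuming the three yaw sequences have already been loaded as equal-length arrays; the 80% split ratio and all names are illustrative choices (the text only specifies a 70-85% range):

```python
import numpy as np

def make_dataset(odom_yaw, imu_yaw, ekf_combined_yaw, train_ratio=0.8):
    """Pair (odom_yaw_t, imu_yaw_t) with ekf_combined_yaw_{t+Δt} and split by time order."""
    odom_yaw = np.asarray(odom_yaw, dtype=np.float32)
    imu_yaw = np.asarray(imu_yaw, dtype=np.float32)
    ekf_yaw = np.asarray(ekf_combined_yaw, dtype=np.float32)

    # Inputs at position t, expected output at the next position t + Δt.
    x = np.stack([odom_yaw[:-1], imu_yaw[:-1]], axis=-1)   # shape (N-1, 2)
    y = ekf_yaw[1:]                                         # shape (N-1,)

    # Keras LSTM layers expect (samples, timesteps, features); one timestep per sample here.
    x = x.reshape(-1, 1, 2)

    split = int(len(x) * train_ratio)       # the first 70-85% of batches form the training set
    return (x[:split], y[:split]), (x[split:], y[split:])
```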

Further, in the LSTM model established in step S3, the input layer comprises 2 neurons, which respectively receive the odom_yaw and imu_yaw of the previous moment; the output layer comprises only 1 neuron and outputs lstm_combined_yaw; each hidden layer comprises 128 LSTM units; in addition to the neuron, each LSTM unit comprises a forget gate, an input gate and an output gate, wherein the forget gate determines the amount of information discarded from the neuron, the input gate determines the amount of information stored into the neuron, and the output gate determines the amount of information output from the neuron;

the key formula for the LSTM cell is as follows:

$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$

$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$

$C_{t0} = \tanh(W_c \cdot [h_{t-1}, x_t] + b_c)$

$C_t = f_t \times C_{t-1} + i_t \times C_{t0}$

$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$

$h_t = o_t \times \tanh(C_t)$

where $f_t, i_t, C_{t0}, C_t, o_t, h_t, h_{t-1}$ respectively denote the forget gate value, the input gate value, the candidate neuron state, the current neuron state, the output gate value, the output of the current neuron and the output of the neuron at the previous moment; $W_f, W_i, W_c, W_o$ respectively denote the weight matrices of the forget gate, the input gate, the neuron and the output gate; $b_f, b_i, b_c, b_o$ respectively denote the bias vectors of the forget gate, the input gate, the neuron and the output gate; $\tanh$ is an activation function; $\sigma$ is the sigmoid function.
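A minimal Keras sketch of the network described in steps S3 and S4 (an input of 2 features, 2 hidden LSTM layers of 128 units with ReLU activation, a Dropout layer, and a fully-connected output layer, compiled with Adam at learning rate 0.001 and MSE loss). The dropout rate of 0.2 is an assumed value, since the text does not specify it:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_lstm_fusion_model():
    model = keras.Sequential([
        layers.Input(shape=(1, 2)),                    # one timestep; features: odom_yaw, imu_yaw
        layers.LSTM(128, activation="relu",
                    return_sequences=True),            # 1st hidden layer
        layers.LSTM(128, activation="relu"),           # 2nd hidden layer
        layers.Dropout(0.2),                           # rate 0.2 is an assumed value
        layers.Dense(1),                               # fully-connected layer -> lstm_combined_yaw
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss="mse")                          # mean squared error, as in step S4
    return model
```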

Further, in step S4, the hidden layer activation function Relu formula is:

Relu(x)=max(0,x)

the loss function is the mean square error, which is calculated as:

$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - y'_i\right)^2$

where $y_i$ is the input training value, $y'_i$ is the predicted value output by the LSTM, and $n$ is the total number of data batches; 100 rounds of training are performed, each round comprising 72 batches; the Google open-source Keras deep learning framework is adopted for training, and the model is saved to the specified path after training is completed.
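Continuing the two previous sketches (make_dataset and build_lstm_fusion_model are defined there), a hedged sketch of the training in step S4; deriving the batch size so that each of the 100 epochs contains 72 batches is one possible reading of the text, and the save path is an assumption:

```python
# Assumes the yaw arrays collected in step S1 are available as odom_yaw, imu_yaw, ekf_combined_yaw,
# and that make_dataset() and build_lstm_fusion_model() from the earlier sketches are in scope.
(x_train, y_train), (x_test, y_test) = make_dataset(odom_yaw, imu_yaw, ekf_combined_yaw)

model = build_lstm_fusion_model()
batch_size = max(1, len(x_train) // 72)        # so that one epoch contains roughly 72 batches

model.fit(x_train, y_train,
          epochs=100,
          batch_size=batch_size,
          validation_data=(x_test, y_test))    # the test set is used to check the training result

model.save("lstm_fusion_model.h5")             # "specified path" assumed to be this file
```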

Further, step S5 is implemented as follows:

the server deploys the saved LSTM model online and waits for the client to send requests to it; the AGV is controlled to travel one loop while the data measured by odom and imu are processed in real time; once odom_yaw and imu_yaw are obtained, they are sent to the server through the client; after receiving the client request, the server responds and returns the output result lstm_combined_yaw.
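A minimal sketch of the gRPC online deployment in step S5. The service definition, message fields, port, stub module names and model path are all assumptions for illustration; the actual .proto used is not given in the text:

```python
# Hypothetical gRPC server for step S5; assumes yaw_fusion.proto compiled with grpcio-tools:
#   service YawFusion { rpc Fuse (YawRequest) returns (YawReply); }
#   message YawRequest { double odom_yaw = 1; double imu_yaw = 2; }
#   message YawReply   { double lstm_combined_yaw = 1; }
from concurrent import futures
import grpc
import numpy as np
from tensorflow import keras
import yaw_fusion_pb2, yaw_fusion_pb2_grpc   # generated stubs (hypothetical names)

class LstmFusionServicer(yaw_fusion_pb2_grpc.YawFusionServicer):
    def __init__(self, model_path="lstm_fusion_model.h5"):   # assumed save path
        self.model = keras.models.load_model(model_path)

    def Fuse(self, request, context):
        x = np.array([[[request.odom_yaw, request.imu_yaw]]], dtype=np.float32)  # shape (1, 1, 2)
        yaw = float(self.model.predict(x, verbose=0)[0, 0])
        return yaw_fusion_pb2.YawReply(lstm_combined_yaw=yaw)

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    yaw_fusion_pb2_grpc.add_YawFusionServicer_to_server(LstmFusionServicer(), server)
    server.add_insecure_port("[::]:50051")   # assumed port
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()
```

On the client side, under the same assumed proto, the real-time odom_yaw and imu_yaw would be sent with `stub = yaw_fusion_pb2_grpc.YawFusionStub(grpc.insecure_channel("localhost:50051"))` followed by `stub.Fuse(yaw_fusion_pb2.YawRequest(odom_yaw=..., imu_yaw=...))`.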

Further, the process of calculating the AGV motion trajectory in step S6 is as follows:

firstly, the displacement measured by the odom is calculated, and the calculation formula is as follows:

$\Delta x = \mathrm{odom}_{X1} - \mathrm{odom}_{X0}$

$\Delta y = \mathrm{odom}_{Y1} - \mathrm{odom}_{Y0}$

$\Delta s = \sqrt{\Delta x^2 + \Delta y^2}$

where $\mathrm{odom}_{X1}$, $\mathrm{odom}_{Y1}$ denote the AGV position measured by odom at the current moment, $\mathrm{odom}_{X0}$, $\mathrm{odom}_{Y0}$ denote the AGV position measured by odom at the previous moment, and $\Delta x$, $\Delta y$, $\Delta s$ respectively denote the displacement of the AGV in the x direction, the displacement of the AGV in the y direction, and the straight-line distance between the two moments as measured by odom;

and combining the lstm _ combined _ yaw to obtain the position of the AGV in the coordinate system after fusion, wherein the calculation method comprises the following steps:

ΔX=Δs×cos(lstm_combined_yaw)

ΔY=Δs×sin(lstm_combined_yaw)

lstm_combined_x=lstm_combined_x+ΔX

lstm_combined_y=lstm_combined_y+ΔY

ΔX and ΔY respectively denote the displacements of the AGV in the x and y directions when it moves at the angle lstm_combined_yaw; lstm_combined_x and lstm_combined_y respectively denote the x and y coordinates of the AGV in the coordinate system after fusion, and both are initialized to 0 when the AGV is started.
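A minimal sketch of the trajectory update in step S6, assuming the odom position and the fused yaw returned by the server are available at each time step; the class and variable names are illustrative:

```python
import math

class TrajectoryIntegrator:
    """Accumulates the fused AGV trajectory as described in step S6."""

    def __init__(self):
        # lstm_combined_x / lstm_combined_y are initialized to 0 when the AGV starts.
        self.lstm_combined_x = 0.0
        self.lstm_combined_y = 0.0
        self.prev_odom_x = None
        self.prev_odom_y = None

    def update(self, odom_x, odom_y, lstm_combined_yaw):
        if self.prev_odom_x is None:
            self.prev_odom_x, self.prev_odom_y = odom_x, odom_y
            return self.lstm_combined_x, self.lstm_combined_y

        # Displacement measured by odom between the previous and current moments.
        dx = odom_x - self.prev_odom_x
        dy = odom_y - self.prev_odom_y
        ds = math.hypot(dx, dy)                     # straight-line distance Δs

        # Project Δs along the fused heading to update the fused position.
        self.lstm_combined_x += ds * math.cos(lstm_combined_yaw)
        self.lstm_combined_y += ds * math.sin(lstm_combined_yaw)

        self.prev_odom_x, self.prev_odom_y = odom_x, odom_y
        return self.lstm_combined_x, self.lstm_combined_y
```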

Compared with the prior art, the invention has the following advantages and effects:

(1) compared with a Kalman filtering fusion method, the method does not need to determine the statistical characteristic of noise in advance, so that the time cost for data fusion is saved;

(2) compared with a Kalman filtering fusion method, the method does not require time synchronization on the input data of the sensor when processing the sensor data, thereby simplifying the calculation steps;

(3) compared with a Kalman filtering fusion method, the method is not influenced by the external environment and has better robustness;

(4) compared with traditional neural network data fusion methods, the method uses an LSTM model, which helps exploit the temporal ordering of the sensor data and improves data fusion accuracy.

drawings

FIG. 1 is a flowchart of an indoor autonomous navigation AGV motion trajectory fusion method based on an LSTM model disclosed in the present invention;

FIG. 2 is a schematic diagram of the structure of the LSTM model;

FIG. 3 is a schematic diagram illustrating an AGV walking trajectory during training set acquisition according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of the structure of an LSTM cell;

FIG. 5 is a comparison of AGV trajectories before and after the fusion algorithm used in embodiments of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
