Underwater robot positioning method under a beacon-approaching trajectory

Document No.: 1427884 | Publication date: 2020-03-17

Reading note: This technology, "Underwater robot positioning method under a beacon-approaching trajectory" (一种接近信标轨迹下的水下机器人定位方法), was designed and created by 冀大雄 (Ji Daxiong) and 方文巍 (Fang Wenwei) on 2019-11-04. Its main content is as follows: The invention relates to underwater robot positioning technology and aims to provide a method for positioning an underwater robot under a trajectory approaching a beacon. The method comprises: while the underwater robot keeps moving at a uniform speed whose magnitude is known, calculating, at a set measurement period, the included angle between the current displacement direction and the current observation-distance direction; then obtaining a heading-angle adjustment strategy through reinforcement learning and training, and using it to adjust the heading angle so that the underwater robot moves in a direction approaching the beacon; and, during the approach to the beacon, calculating the position of the underwater robot with the extended Kalman filter position-estimation equations to realize positioning. When the initial estimation error is large, the method converges faster than a circular trajectory. Reinforcement learning is used to adjust the heading angle so that the underwater robot moves toward the beacon, which avoids formulating complicated angle-adjustment rules. The method does not require an accurate initial position estimate, involves little computation, and provides stable and reliable positioning.

1. A method for positioning an underwater robot under a trajectory approaching a beacon, characterized by comprising the following steps:

(1) with the underwater robot kept moving at a uniform speed whose magnitude is known, measuring, at a set measurement period, the distance between the underwater robot and a single acoustic beacon, measuring the heading angle with the underwater robot's compass, and calculating the included angle between the current displacement direction and the current observation-distance direction from the displacement distance at the previous moment, the observation distance at the previous moment and the observation distance at the current moment, using the law of cosines for a triangle with three known side lengths; then obtaining a heading-angle adjustment strategy through reinforcement learning and training;

(2) adjusting the heading angle using the strategy obtained from reinforcement-learning training, so that the underwater robot moves in a direction approaching the beacon;

(3) in the process of approaching the beacon, calculating the position of the underwater robot with the extended Kalman filter position-estimation equations to realize positioning.

2. The method according to claim 1, characterized in that said step (1) comprises in particular:

firstly, an R table with 8 rows and 16 columns is established; each entry corresponds to the average reward value in one case and represents how favorable the corresponding action is for approaching the beacon; wherein:

the row names correspond to the following eight cases: case 1: the observation distance increases, the change is greater than half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction increases; case 2: the observation distance increases, the change is at most half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction increases; case 3: the observation distance decreases, the change is less than half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction increases; case 4: the observation distance decreases, the change is at least half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction increases; case 5: the observation distance increases, the change is greater than half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction decreases; case 6: the observation distance increases, the change is at most half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction decreases; case 7: the observation distance decreases, the change is less than half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction decreases; case 8: the observation distance decreases, the change is at least half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction decreases;

the column names include the following sixteen adjustment actions: clockwise rotation of 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°, and counterclockwise rotation of 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°;

secondly, taking the R table subjected to reinforcement learning training as an adjustment strategy of the heading angle, wherein the specific training process comprises the following steps:

(1.1) initializing an R table, and determining the training times;

(1.2) after the initial position of the underwater robot is set, keeping the underwater robot moving at a uniform speed; in the first moment, the underwater robot moves towards any direction;

(1.3) according to the set time interval, measuring the observation distance r(t) at the current moment, the observation distance r(t-1) at the previous moment and the displacement distance d(t-1) over the previous interval; calculating the included angle α between the displacement direction and the current observation-distance direction by the law of cosines with three known side lengths; if r(t) < r(t-1) and the included angle α is less than 45°, keeping straight; otherwise rotating the running direction of the underwater robot counterclockwise by 5° and then going straight; then measuring the observation-distance, displacement and included-angle data at the next moment;

(1.4) from the observation-distance and displacement-distance information of the last three moments, calculating the difference r(t+1) - r(t) between the last two observation distances and the difference α2 - α1 between the two included angles, and classifying the situation accordingly;

(1.5) selecting one of the sixteen designed actions according to an ε-greedy exploration strategy; after the action is executed, obtaining the corresponding reward value Re according to the change of the observation distance: Re = +1 when the observation distance decreases, and Re = -1 when it is unchanged or increases; and updating the R table; the update formula is:

R(s,a)=(R(s,a)×(N(s,a)-1)+Re)/N(s,a)

s is one of the cases s1 to s8 and a is one of the actions a1 to a16, specifically the case s classified before the reward value Re was obtained and the action a that was performed;

N(s,a) is an entry of the N(S,A) table maintained during training, representing the number of times action a has been performed in case s; N(S,A) is an 8×16 table; its update rule is N(s,a) = N(s,a) + 1;

R(s,a) is an entry of the R(S,A) table maintained during training, representing the average of the reward values obtained by performing action a in case s; R(S,A) is an 8×16 table;

(1.6) when the observation distance is less than D, the current training round ends and the procedure returns to step (1.3); when the set number of training rounds is reached, the training ends and the resulting R table is kept.

3. The method of claim 1, wherein step (2) comprises:

(2.1) measuring the observation distance r(t+1) at the current moment, the observation distance r(t) at the previous moment and the displacement distance d(t) over the previous interval; calculating the included angle α2 between the displacement direction and the current observation-distance direction by the law of cosines with three known side lengths; if r(t+1) < r(t) and the included angle α2 is less than 45°, keeping straight; otherwise rotating the running direction of the underwater robot counterclockwise by 5° and then going straight; then measuring the observation-distance, displacement and included-angle data at the next moment;

(2.2) after the heading angle is adjusted, continuing to go straight, and measuring and calculating again; repeating the above process until the distance from the underwater robot to the beacon is less than the set distance value D; then moving randomly in the vicinity of the beacon for a set time T, after which positioning is finished.

4. The method of claim 1, wherein step (3) comprises:

(1) setting an extended Kalman filter position estimation equation as follows:

X̂(k+1|k) = X̂(k|k) + U(k)·Δt, where Δt is the measurement period

P(k+1|k)=P(k|k)+Q

K(k+1) = P(k+1|k)·H^T(k+1)·[H(k+1)·P(k+1|k)·H^T(k+1) + R]^(-1)

X̂(k+1|k+1) = X̂(k+1|k) + K(k+1)·[Z(k+1) - h(X̂(k+1|k))]

P(k+1|k+1) = [I - K(k+1)·H(k+1)]·P(k+1|k)

in the above formulas, k denotes time k and X(k) = [x_k y_k]^T, where the superscript "T" denotes the transpose of a vector or matrix; x_k and y_k are respectively the east and north coordinates of the underwater robot at time k, with the beacon as the origin; a quantity topped with the symbol "^" denotes the predicted or estimated value of the state;

in the equations, X̂(k|k) is the state estimate at time k, X̂(k+1|k) is the predicted state at time k+1, and X̂(k+1|k+1) is the state estimate at time k+1 corrected according to the observation;

U(k) = [v·sinθ_k v·cosθ_k]^T, where θ_k is the heading angle of the underwater robot at time k and v is the speed of the underwater robot;

I is the identity matrix; P(k|k) is the error covariance matrix at time k; P(k+1|k) is the predicted error covariance matrix at time k+1; P(k+1|k+1) is the covariance matrix after the measurement update;

V(k+1) is the process noise at time k+1 and W(k+1) is the observation noise at time k+1, both satisfying zero-mean Gaussian distributions; Q is the variance of the process noise and R is the variance of the observation noise;

Z(k+1) = h(X(k+1)) + W(k+1) is the noisy distance between the underwater robot and the beacon observed at time k+1;

h(X(k+1)) = √(x_{k+1}² + y_{k+1}²), namely the distance from the underwater robot to the beacon at time k+1;

H(k+1) is the Jacobian matrix of h(·), obtained by Taylor expansion and retention of the first-order term:

H(k+1) = [x̂(k+1|k)/√(x̂(k+1|k)² + ŷ(k+1|k)²)  ŷ(k+1|k)/√(x̂(k+1|k)² + ŷ(k+1|k)²)]

Technical Field

The invention relates to underwater robot positioning technology, and in particular to a method for positioning an underwater robot under a trajectory approaching a beacon.

Background

After an underwater robot has performed a task underwater for a period of time, its accumulated positioning error grows with time. It then generally needs to surface and receive GPS signals for relocation. For underwater robots working in deep water, surfacing consumes a large amount of energy and compromises concealment.

Single-beacon ranging and positioning is simple to install and low in cost, and has become a new direction for underwater positioning technology in recent years. However, the position of the underwater robot cannot be fully determined from a single range measurement, so a suitable trajectory and a corresponding algorithm must be designed to realize positioning. The circular trajectory and the extended Kalman filter are the commonly used maneuvering trajectory and filtering algorithm. The extended Kalman filter is a nonlinear filter with high requirements on the initial estimate: when the error of the initial estimate is large, convergence is slow and divergence often occurs. With a circular trajectory, convergence is likewise slow when the initial estimation error is large.

Disclosure of Invention

The invention aims to overcome the defects of the prior art and provides a method for positioning an underwater robot under a trajectory approaching a beacon. Under a single-beacon ranging system, the underwater robot moves towards the beacon by adjusting its heading angle, while extended Kalman filtering is used along the trajectory for position estimation to realize positioning.

In order to solve the technical problem, the solution of the invention is as follows:

the underwater robot positioning method under a beacon-approaching trajectory comprises the following steps:

(1) with the underwater robot kept moving at a uniform speed whose magnitude is known, measuring, at a set measurement period, the distance between the underwater robot and a single acoustic beacon, measuring the heading angle with the underwater robot's compass, and calculating the included angle between the current displacement direction and the current observation-distance direction from the displacement distance at the previous moment, the observation distance at the previous moment and the observation distance at the current moment, using the law of cosines for a triangle with three known side lengths; then obtaining a heading-angle adjustment strategy through reinforcement learning and training;

(Uniform motion in the physical definition means constant speed and constant direction; uniform speed in this invention means the speed magnitude is constant while the direction is changed by adjusting the heading angle.)

(The observation distance is the measured distance between the underwater robot and the beacon; because underwater acoustic ranging is noisy, there is an error between the observation distance and the actual distance.)
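As an illustration of the angle computation in step (1), the following sketch obtains the included angle from the three known side lengths with the law of cosines. It follows one reading of the geometry, consistent with the triangle description in the Detailed Description, in which the angle of interest lies between the displacement side d(t-1) and the observation-distance side r(t-1); the function name and the example numbers are illustrative assumptions, not part of the original text.

import math

def included_angle(r_prev, r_curr, d):
    """Included angle (degrees) between the displacement side d(t-1) and the
    observation-distance side r(t-1) of the triangle with side lengths
    r_prev = r(t-1), r_curr = r(t), d = d(t-1), via the law of cosines."""
    cos_a = (d**2 + r_prev**2 - r_curr**2) / (2.0 * d * r_prev)
    cos_a = max(-1.0, min(1.0, cos_a))          # guard against rounding noise
    return math.degrees(math.acos(cos_a))

# Example decision: keep going straight when the range shrinks and the angle is small.
r_prev, r_curr, d = 105.0, 100.0, 6.0           # illustrative numbers
alpha = included_angle(r_prev, r_curr, d)        # about 32.7 degrees here
go_straight = (r_curr < r_prev) and (alpha < 45.0)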

(2) Adjusting the heading angle using the strategy obtained from reinforcement-learning training, so that the underwater robot moves in a direction approaching the beacon;

(3) in the process of approaching the beacon, calculating the position of the underwater robot with the extended Kalman filter position-estimation equations to realize positioning.

In the present invention, the step (1) specifically includes:

firstly, an R table with 8 rows and 16 columns is established; each entry corresponds to the average reward value in one case and represents how favorable the corresponding action is for approaching the beacon; wherein:

the row names correspond to the following eight cases: case 1: the observation distance increases, the change is greater than half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction increases; case 2: the observation distance increases, the change is at most half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction increases; case 3: the observation distance decreases, the change is less than half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction increases; case 4: the observation distance decreases, the change is at least half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction increases; case 5: the observation distance increases, the change is greater than half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction decreases; case 6: the observation distance increases, the change is at most half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction decreases; case 7: the observation distance decreases, the change is less than half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction decreases; case 8: the observation distance decreases, the change is at least half of the displacement distance, and the included angle between the displacement direction and the observation-distance direction decreases;

the column names include the following sixteen adjustment actions: clockwise rotation of 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°, and counterclockwise rotation of 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°;

secondly, taking the R table subjected to reinforcement learning training as an adjustment strategy of the heading angle, wherein the specific training process comprises the following steps:

(1.1) initializing an R table, and determining the training times;

(1.2) after the initial position of the underwater robot is set, keeping the underwater robot moving at a uniform speed; in the first moment, the underwater robot moves towards any direction;

(1.3) according to the set time interval, measuring the observation distance r(t) at the current moment, the observation distance r(t-1) at the previous moment and the displacement distance d(t-1) over the previous interval; calculating the included angle α between the displacement direction and the current observation-distance direction by the law of cosines with three known side lengths; if r(t) < r(t-1) and the included angle α is less than 45°, keeping straight; otherwise rotating the running direction of the underwater robot counterclockwise by 5° and then going straight; then measuring the observation-distance, displacement and included-angle data at the next moment;

(1.4) from the observation-distance and displacement-distance information of the last three moments, calculating the difference r(t+1) - r(t) between the last two observation distances and the difference α2 - α1 between the two included angles, and classifying the situation accordingly;

(1.5) selecting one of the sixteen designed actions according to an ε-greedy exploration strategy; after the action is executed, obtaining the corresponding reward value Re according to the change of the observation distance: Re = +1 when the observation distance decreases, and Re = -1 when it is unchanged or increases; and updating the R table; the update formula is:

R(s,a)=(R(s,a)×(N(s,a)-1)+Re)/N(s,a)

s is one of the cases s1 to s8 and a is one of the actions a1 to a16, specifically the case s classified before the reward value Re was obtained and the action a that was performed;

N(s,a) is an entry of the N(S,A) table maintained during training, representing the number of times action a has been performed in case s; N(S,A) is an 8×16 table; its update rule is N(s,a) = N(s,a) + 1;

R(s,a) is an entry of the R(S,A) table maintained during training, representing the average of the reward values obtained by performing action a in case s; R(S,A) is an 8×16 table;

(1.6) when the observation distance is less than D, the current training round ends and the procedure returns to step (1.3); when the set number of training rounds is reached, the training ends and the resulting R table is kept.

(The ε-greedy exploration strategy is prior art and is commonly used in reinforcement learning: with probability ε an action is selected at random (possibly including the action that maximizes the R value), and with probability 1-ε the action with the maximum R value is selected, so that compared with a purely random strategy the action with the maximum R value is selected more often and is trained more fully.)
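A minimal sketch of this ε-greedy selection and of the incremental-average update of the R and N tables is given below. The ε = 0.9 value follows the Detailed Description; the 0-based indexing, function names and the single-interaction example are illustrative assumptions.

import random

N_CASES, N_ACTIONS = 8, 16
R = [[0.0] * N_ACTIONS for _ in range(N_CASES)]   # R(S, A): average reward per case/action
N = [[0] * N_ACTIONS for _ in range(N_CASES)]     # N(S, A): visit counts per case/action
EPSILON = 0.9                                     # exploration probability during training

def select_action(case):
    """epsilon-greedy: a random action with probability EPSILON,
    otherwise the action with the largest R value for this case."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    row = R[case]
    return row.index(max(row))

def update_tables(case, action, reward):
    """R(s,a) = (R(s,a) * (N(s,a) - 1) + Re) / N(s,a), with N(s,a) incremented first."""
    N[case][action] += 1
    n = N[case][action]
    R[case][action] = (R[case][action] * (n - 1) + reward) / n

# One training interaction: classify the situation, act, observe, reward, update.
case = 0                       # e.g. situation s1 (index 0)
action = select_action(case)
reward = +1                    # +1 if the observation distance decreased, otherwise -1
update_tables(case, action, reward)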

In the present invention, the step (2) includes:

(2.1) measuring the observation distance r(t+1) at the current moment, the observation distance r(t) at the previous moment and the displacement distance d(t) over the previous interval; calculating the included angle α2 between the displacement direction and the current observation-distance direction by the law of cosines with three known side lengths; if r(t+1) < r(t) and the included angle α2 is less than 45°, keeping the underwater robot going straight; otherwise rotating its heading counterclockwise by 5° and then going straight; then measuring the observation-distance, displacement and included-angle data at the next moment;

(2.2) after the heading angle is adjusted, continuing to go straight, and measuring and calculating again; repeating the above process until the distance from the underwater robot to the beacon is less than the set distance value D; then moving randomly in the vicinity of the beacon for a set time T, after which positioning is finished.

In the present invention, the step (3) includes:

(1) setting an extended Kalman filter position estimation equation as follows:

X̂(k+1|k) = X̂(k|k) + U(k)·Δt, where Δt is the measurement period

P(k+1|k)=P(k|k)+Q

K(k+1) = P(k+1|k)·H^T(k+1)·[H(k+1)·P(k+1|k)·H^T(k+1) + R]^(-1)

X̂(k+1|k+1) = X̂(k+1|k) + K(k+1)·[Z(k+1) - h(X̂(k+1|k))]

P(k+1|k+1)=[I-K(k+1)·H(k+1)]·P(k+1|k)

in the above formulas, k denotes time k and X(k) = [x_k y_k]^T, where the superscript "T" denotes the transpose of a vector or matrix; x_k and y_k are respectively the east and north coordinates of the underwater robot at time k, with the beacon as the origin; a quantity topped with the symbol "^" denotes the predicted or estimated value of the state;

in the equations, X̂(k|k) is the state estimate at time k, X̂(k+1|k) is the predicted state at time k+1, and X̂(k+1|k+1) is the state estimate at time k+1 corrected according to the observation;

U(k) = [v·sinθ_k v·cosθ_k]^T, where θ_k is the heading angle of the underwater robot at time k and v is the speed of the underwater robot;

I is the identity matrix; P(k|k) is the error covariance matrix at time k; P(k+1|k) is the predicted error covariance matrix at time k+1; P(k+1|k+1) is the covariance matrix after the measurement update;

V(k+1) is the process noise at time k+1 and W(k+1) is the observation noise at time k+1, both satisfying zero-mean Gaussian distributions; Q is the variance of the process noise and R is the variance of the observation noise;

Z(k+1) = h(X(k+1)) + W(k+1) is the noisy distance between the underwater robot and the beacon observed at time k+1;

h(X(k+1)) = √(x_{k+1}² + y_{k+1}²), namely the distance from the underwater robot to the beacon at time k+1;

h(X̂(k+1|k)) = √(x̂(k+1|k)² + ŷ(k+1|k)²), namely the corresponding distance at time k+1 calculated from the state prediction;

H(k+1) is the Jacobian matrix of h(·), obtained by linearization through Taylor expansion with retention of the first-order term:

H(k+1) = [x̂(k+1|k)/√(x̂(k+1|k)² + ŷ(k+1|k)²)  ŷ(k+1|k)/√(x̂(k+1|k)² + ŷ(k+1|k)²)]
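For concreteness, the position-estimation equations above can be written as a single update routine. The sketch below assumes NumPy, treats Q as a 2×2 process-noise covariance matrix and Δt as the measurement period, and uses an illustrative function name; it is a sketch of the equations as reconstructed here, not a definitive implementation.

import numpy as np

def ekf_range_update(x_est, P, theta_k, v, dt, z, Q, R_var):
    """One prediction/correction cycle of the range-only EKF described above.
    x_est : state estimate [x_k, y_k] (beacon at the origin, east/north coordinates)
    P     : 2x2 error covariance; theta_k, v, dt : heading, speed, measurement period
    z     : observed (noisy) range Z(k+1); Q : 2x2 process-noise covariance
    R_var : variance of the range-observation noise."""
    # Prediction: X(k+1|k) = X(k|k) + U(k)*dt with U(k) = [v*sin(theta_k), v*cos(theta_k)]^T
    u = np.array([v * np.sin(theta_k), v * np.cos(theta_k)])
    x_pred = x_est + u * dt
    P_pred = P + Q

    # Predicted range h(X(k+1|k)) and Jacobian H(k+1) = [x/r, y/r] at the prediction
    r_pred = np.linalg.norm(x_pred)
    H = (x_pred / r_pred).reshape(1, 2)

    # Gain, state correction, covariance update
    S = H @ P_pred @ H.T + R_var
    K = P_pred @ H.T / S
    x_new = x_pred + (K * (z - r_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new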

compared with the prior art, the invention has the beneficial effects that:

1. When the initial estimation error is large, the method converges faster than a circular trajectory.

2. The invention adjusts the heading angle by a reinforcement-learning method, so that the underwater robot moves towards the beacon; this avoids formulating complicated angle-adjustment rules.

3. The method does not need an accurate initial position estimate, requires little computation, and provides stable and reliable positioning.

Drawings

Fig. 1 is a schematic diagram of beacon ranging and angle calculation in the present invention.

Fig. 2 is a schematic diagram of measurements for eight case classification.

FIG. 3 is a flow chart of R-table training.

Detailed Description

The invention uses single-beacon ranging navigation: with the initial position of the underwater robot unknown, positioning is realized by making the underwater robot move towards the beacon and using extended Kalman filtering for position estimation. Under a beacon-approaching trajectory, the extended Kalman filter converges quickly and is insensitive to the initial estimation error. When the initial position is unknown, a heading-angle adjustment strategy is needed to produce the beacon-approaching motion; the invention obtains this heading-angle adjustment strategy by training with a reinforcement-learning method.

The control strategy of the invention is stored in the form of an R table; the more suitable adjustment angle for a given situation is selected by querying the table.

An example of the structure of the R table is as follows:

R(S,A) a1 a2 a3 ... a16
s1 R(s1,a1) R(s1,a2) R(s1,a3) ... R(s1,a16)
s2 R(s2,a1) R(s2,a2) R(s2,a3) ... R(s2,a16)
s3 R(s3,a1) R(s3,a2) R(s3,a3) ... R(s3,a16)
s4 R(s4,a1) R(s4,a2) R(s4,a3) ... R(s4,a16)
s5 R(s5,a1) R(s5,a2) R(s5,a3) ... R(s5,a16)
s6 R(s6,a1) R(s6,a2) R(s6,a3) ... R(s6,a16)
s7 R(s7,a1) R(s7,a2) R(s7,a3) ... R(s7,a16)
s8 R(s8,a1) R(s8,a2) R(s8,a3) ... R(s8,a16)

In the table, S is the case set S = {s1, s2, s3, s4, ..., s8}, and A is the action set A = {a1, a2, a3, ..., a16}.

An example of the training process for the R-table is as follows:

[Example of a partially filled R-value table obtained from training; the numerical entries are not reproduced here.]

For example, assume that a particular R-value table obtained from training is as shown above (only part of the table is shown for clarity), with the largest value in each row marked.

After training is completed, actions are selected according to this table: the action corresponding to the largest value in the relevant row is chosen.

When the case is s1, the action selected is a8; when the case is s2, the action is a1.

During training, when the ε-greedy random exploration strategy is used, the action corresponding to the marked cell is selected with a higher probability than the other actions. The N(S,A) table (only part of which is shown, for clarity) is also needed to update the R(S,A) table.

[Example of a partially filled N(S,A) table; the numerical entries are not reproduced here.]

For example, in case s1, if the observation distance decreases after performing action a1, then Re = +1.

Updated N(s1,a1) = N(s1,a1) + 1 = 8

Updated R(s1,a1) = (R(s1,a1) × (N(s1,a1) - 1) + Re)/N(s1,a1) = (-0.22 × (8 - 1) + 1)/8 = -0.0675.
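The update arithmetic can be checked directly (a short sketch using the same illustrative numbers):

R_old, N_new, Re = -0.22, 8, +1
R_new = (R_old * (N_new - 1) + Re) / N_new   # (-1.54 + 1) / 8
print(round(R_new, 4))                       # -0.0675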

As shown in Fig. 2, only 7 quantities (3 + 2 + 2) are measured: the observation distance values (without direction) at three moments, the two displacement distances between those moments, and the two displacement directions between those moments. The situations are divided into 4 × 2 = 8 cases: the change of distance gives 4 cases (increase or decrease, with the magnitude of the change greater than or less than half of the displacement), and the change of the included angle between the displacement and the distance gives 2 cases (increase or decrease). The included angle α is obtained from the three known side lengths by the law of cosines.

In Fig. 2, the difference between the last two observation distances (feature 1) and the difference between the two included angles (feature 2) are the two feature quantities used for situation classification. α1 is the included angle between the displacement distance d(t-1) at the previous moment and the observation-distance direction, obtained by the law of cosines; α2 is the included angle between the displacement distance d(t) after the 5° rotation and the observation-distance direction. d is the displacement distance value; because the underwater robot moves at a uniform speed, d(t) = d(t-1). Here feature 1 = r(t+1) - r(t) and feature 2 = α2 - α1. The situation is classified according to whether feature 1 is greater than d/2, between 0 and d/2, between -d/2 and 0, or not greater than -d/2, and whether feature 2 is non-negative or negative, giving the eight cases in the table below (a code sketch of this classification follows the table).

Situation classification table

Case set S | Classification basis
s1 | r(t+1) > r(t) + d(t)/2, α2 ≥ α1
s2 | r(t) < r(t+1) ≤ r(t) + d(t)/2, α2 ≥ α1
s3 | r(t) - d(t)/2 < r(t+1) ≤ r(t), α2 ≥ α1
s4 | r(t+1) ≤ r(t) - d(t)/2, α2 ≥ α1
s5 | r(t+1) > r(t) + d(t)/2, α2 < α1
s6 | r(t) < r(t+1) ≤ r(t) + d(t)/2, α2 < α1
s7 | r(t) - d(t)/2 < r(t+1) ≤ r(t), α2 < α1
s8 | r(t+1) ≤ r(t) - d(t)/2, α2 < α1
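A minimal sketch of this eight-way classification, following the table above; the function name and the 1-to-8 return convention are illustrative assumptions.

def classify_case(r_t, r_t1, d_t, alpha1, alpha2):
    """Return the case index 1..8 for observation distances r_t = r(t),
    r_t1 = r(t+1), displacement d_t = d(t), and included angles alpha1, alpha2."""
    feature1 = r_t1 - r_t              # change of the observation distance
    half_d = d_t / 2.0
    if feature1 > half_d:
        band = 1                       # grew by more than d/2        (s1 / s5)
    elif feature1 > 0:
        band = 2                       # grew by at most d/2          (s2 / s6)
    elif feature1 > -half_d:
        band = 3                       # shrank by less than d/2      (s3 / s7)
    else:
        band = 4                       # shrank by at least d/2       (s4 / s8)
    return band if alpha2 >= alpha1 else band + 4

# Example: distance shrank by more than half the displacement, angle decreased -> case 8
case = classify_case(r_t=100.0, r_t1=96.0, d_t=6.0, alpha1=40.0, alpha2=30.0)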

Actions a1, a2, ..., a8 in the action set A are clockwise rotations of 15°, 30°, ..., 120°, respectively; a9, a10, ..., a16 are counterclockwise rotations of 15°, 30°, ..., 120°, respectively.

The heading angle is adjusted at time t+1, the adjustment angle being one of a1, a2, ..., a16. During training, the action-selection strategy is the ε-greedy strategy. ε is a number between 0 and 1; the larger it is, the greater the probability that a random action is taken; here it is set to 0.9 so that each action can be fully explored during training. Before each adjustment, a random number between 0 and 1 is generated; if it is less than ε, one action in set A is selected at random. Otherwise, the row corresponding to the classified case is found in the table, the cell with the largest value in that row is located, and the action corresponding to that cell's column is selected.

After the action is selected, the angle adjustment is executed and the robot goes straight. After the straight segment, the distance r(t+2) from the underwater robot to the beacon is measured, and a reward value is used as the signal for judging whether the action was good or bad in that situation. The aim is to make the underwater robot move closer to the beacon, so the reward value is +1 when the distance becomes smaller and -1 when it becomes larger: when r(t+2) < r(t+1), the reward value Re = +1; when r(t+2) ≥ r(t+1), the reward value Re = -1. A distance that shrinks from one moment to the next is good for approaching the beacon and therefore earns a positive reward. The reward value is used to update the R table, which in turn changes the heading-angle adjustment strategy. The update formula is R(s,a) = (R(s,a) × (N(s,a) - 1) + Re)/N(s,a), where s is one of the cases s1 to s8 and a is one of the actions a1 to a16, specifically the case s classified before the reward value Re was obtained and the action a that was performed.

N(s,a) is an entry in the N(S,A) table that represents the number of times action a has been performed in case s during training. N(S,A) is an 8×16 table. The update rule is N(s,a) = N(s,a) + 1.

R(s,a) is an entry in the R(S,A) table that represents the average of the reward values obtained by performing action a in case s. R(S,A) is an 8×16 table.

The invention adopts an ε-greedy random strategy for action selection, so each action is selected sufficiently often during training and the value of R(s,a) tends to its mathematical expectation.

The training process is illustrated in the flow chart of fig. 3.

Step 1: initialize the R table as an empty 8×16 table, and set the number of training rounds.

Step 2: set the initial position of the underwater robot.

At each time t, measure the observation distance r(t), combine it with the displacement d(t-1) and the observation distance r(t-1) at the previous moment, and judge whether to adjust the angle, as shown in Fig. 1: if r(t) < r(t-1) and the included angle α is less than 45°, keep going straight.

Step 3: otherwise, rotate 5° counterclockwise and go straight, obtaining the next observation distance r(t+1) and the displacement at the next moment.

Step 4: as shown in Fig. 2, features 1 and 2 are calculated from the observation distances at three moments and the two displacements, and the situation is classified into one of the eight cases by these features. For that case, an action is selected according to the ε-greedy strategy. After the action is selected, it is executed and the robot goes straight, obtaining the distance r(t+2) at the next moment.

Step 5: if the distance becomes smaller, the reward value Re is +1; if it becomes larger, Re is -1. The reward value Re is obtained and the R table is updated.

Step 6: when the observation distance is less than D, the current training round ends and the procedure returns to Step 2. When the set number of training rounds is reached, training ends and the resulting R table is kept.

Extended Kalman filter position estimation equations:

X̂(k+1|k) = X̂(k|k) + U(k)·Δt, where Δt is the measurement period

P(k+1|k)=P(k|k)+Q

K(k+1) = P(k+1|k)·H^T(k+1)·[H(k+1)·P(k+1|k)·H^T(k+1) + R]^(-1)

X̂(k+1|k+1) = X̂(k+1|k) + K(k+1)·[Z(k+1) - h(X̂(k+1|k))]

P(k+1|k+1)=[I-K(k+1)·H(k+1)]·P(k+1|k)

In the above formulas, k denotes time k and X(k) = [x_k y_k]^T, where the superscript "T" denotes the transpose of a vector or matrix; x_k and y_k are respectively the east and north coordinates of the underwater robot at time k, with the beacon as the origin; a quantity topped with the symbol "^" denotes the predicted or estimated value of the state. In the equations, X̂(k|k) is the state estimate at time k, X̂(k+1|k) is the predicted state at time k+1, and X̂(k+1|k+1) is the state estimate at time k+1 corrected according to the observation;

U(k) = [v·sinθ_k v·cosθ_k]^T, where θ_k is the heading angle of the underwater robot at time k and v is the speed of the underwater robot;

I is the identity matrix; P(k|k) is the error covariance matrix at time k; P(k+1|k) is the predicted error covariance matrix at time k+1; P(k+1|k+1) is the covariance matrix after the measurement update.

V(k+1) is the process noise at time k+1 and W(k+1) is the observation noise at time k+1, both satisfying zero-mean Gaussian distributions; Q is the variance of the process noise and R is the variance of the observation noise;

Z(k+1) = h(X(k+1)) + W(k+1) is the noisy distance between the underwater robot and the beacon observed at time k+1;

h(X(k+1)) = √(x_{k+1}² + y_{k+1}²), namely the distance from the underwater robot to the beacon at time k+1;

h(X̂(k+1|k)) = √(x̂(k+1|k)² + ŷ(k+1|k)²), namely the corresponding distance at time k+1 calculated from the state prediction.

H(k+1) is the Jacobian matrix of h(·), obtained by linearization through Taylor expansion with retention of the first-order term:

H(k+1) = [x̂(k+1|k)/√(x̂(k+1|k)² + ŷ(k+1|k)²)  ŷ(k+1|k)/√(x̂(k+1|k)² + ŷ(k+1|k)²)]

According to the R table, for case si (i = 1, 2, 3, ..., 8), the values R(si,a1), R(si,a2), R(si,a3), ..., R(si,a16) are compared, and the action corresponding to the maximum R value is selected as the angle to be adjusted, so that the underwater robot moves towards the beacon. While the underwater robot moves towards the beacon, position estimation is performed by extended Kalman filtering.

The underwater robot moves at a uniform speed, and the observation distance to the single beacon is measured at each moment. From the speed information, the displacement between two moments can be obtained. At the start time, the robot moves in an arbitrary direction. The observation distances r(t-1) and r(t) to the beacon measured at two moments and the displacement distance d(t-1) between them form a triangle. A judgment is then made: if r(t) < r(t-1) and the included angle between side d(t-1) and side r(t-1) of the triangle is less than 45°, the robot keeps going straight. Otherwise it rotates 5° counterclockwise at the current moment and goes straight; at the next moment, the observation distance r(t+1) to the single beacon and the displacement distance d(t) between the next moment and the current moment are measured.

Two triangles can be formed from the distances to the beacon at three moments and the two displacement distances between those moments. According to the change of distance and the change of included angle, the two triangles are classified into one of eight cases. For each case, an adjustment angle is selected according to the R table, which is the result of reinforcement-learning training. After the angle is adjusted, the robot goes straight and obtains the distance r(t+2) to the beacon at the next moment.

The above steps are repeated until the distance from the underwater robot to the beacon is less than D. The robot then moves in the vicinity of the beacon for a period of time T, and positioning is finished.
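The online procedure just described can be sketched as a closed loop. The toy simulation below reuses the included_angle() and classify_case() helpers sketched earlier together with a trained 8×16 R table; the beacon is placed at the origin, range measurements are taken noise-free for brevity, and the sign convention for the sixteen actions (a1 to a8 clockwise, a9 to a16 counterclockwise) as well as all function names are illustrative assumptions rather than part of the original text.

import math
import random

def approach_beacon(R_table, x0, y0, v=1.0, dt=1.0, D=5.0, max_steps=2000):
    """Toy closed-loop run of the approach phase using only range measurements."""
    actions_deg = [-15 * k for k in range(1, 9)] + [15 * k for k in range(1, 9)]
    x, y = x0, y0
    heading = random.uniform(0.0, 2 * math.pi)      # first interval: any direction
    d = v * dt                                      # displacement per interval

    def advance():                                  # go straight for one interval
        nonlocal x, y
        x += d * math.sin(heading)
        y += d * math.cos(heading)

    def measured_range():                           # stand-in for the acoustic range
        return math.hypot(x, y)

    r_prev = measured_range(); advance(); r_curr = measured_range()
    for _ in range(max_steps):
        if r_curr < D:                              # within D: approach phase ends
            break
        alpha1 = included_angle(r_prev, r_curr, d)
        if not (r_curr < r_prev and alpha1 < 45.0):
            heading += math.radians(5.0)            # rotate 5 degrees counterclockwise
        advance(); r_next = measured_range()
        alpha2 = included_angle(r_curr, r_next, d)
        case = classify_case(r_curr, r_next, d, alpha1, alpha2)
        best = max(range(16), key=lambda a: R_table[case - 1][a])
        heading += math.radians(actions_deg[best])  # heading adjustment from the R table
        advance()
        r_prev, r_curr = r_next, measured_range()
    return x, y, r_curr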
