Pan-tilt control method based on machine vision and applied to anti-low-slow small target

Document No.: 1903958  Publication date: 2021-11-30

Reading note: this technique, "Pan-tilt control method based on machine vision and applied to anti-low-slow small targets", was designed and created by 林德福, 李帆, 王辉, 宋韬 and 吴则良 on 2020-11-16. Its main content is as follows: the invention discloses a machine-vision-based pan-tilt control method applied to anti-low-slow-small targets. By automating the control of the unmanned aerial vehicle (UAV) and its pan-tilt, the UAV can cruise with the pan-tilt within a preset range and discover intruding low-slow-small targets in time; after a target is found, the pan-tilt rotates at a specially scheduled angular velocity, which reduces the chance that the target leaves the field of view and keeps it as close as possible to the center of the field, enhancing the tracking effect.

1. A machine-vision-based pan-tilt control method applied to anti-low-slow-small targets, characterized by comprising the following steps:

step 1, an unmanned aerial vehicle (UAV) carrying a pan-tilt hovers on reaching a preset position, controls the pan-tilt and the camera on it to search for a target, and enters a search mode;

step 2, in the search mode, the camera reads the captured image in real time and judges whether the image contains a target; when it does, it is judged whether to enter a tracking mode;

step 3, after entering the tracking mode, a control command is generated in real time to adjust the rotational angular velocity of the pan-tilt according to the target's pixel deviation in the image,

in the tracking mode, if the target is lost only briefly, the tracking mode is maintained; if the target is lost for a long time, the tracking mode is terminated, the pan-tilt is controlled back to its initial angle state, and the UAV is controlled back to the preset position.

2. The machine vision-based pan-tilt control method applied to anti-low-slow small targets according to claim 1,

in step 2, the tracking mode is entered when the target's depth in the image is less than a set value and several consecutive frames all contain the target.

3. The machine vision-based pan-tilt control method applied to anti-low-slow small targets according to claim 2,

the set value is 30 meters, and the consecutive frames number 5 or more.

4. The machine vision-based pan-tilt control method applied to anti-low-slow small targets according to claim 1,

in step 3, after entering the tracking mode, the rotational angular velocity of the pan-tilt is first adjusted with small commands and, after a certain time, with large commands.

5. The machine vision-based pan-tilt control method applied to anti-low-slow small targets according to claim 4,

in step 3, after entering the tracking mode, the desired rotational angular velocity of the pan-tilt is obtained according to the following formula (one), and the pan-tilt is controlled to rotate at that desired angular velocity;

where ω denotes the desired rotational angular velocity of the pan-tilt, k_pmax and k_pmin denote the maximum and minimum values of the pixel-deviation term weight, t denotes time, err denotes the pixel deviation, k_dmax and k_dmin denote the maximum and minimum values of the pixel-deviation-rate weight, and err′ denotes the rate of change of the pixel deviation.

6. The machine vision-based pan-tilt control method applied to anti-low-slow small targets according to claim 1,

in step 3, in the tracking mode, after the target is lost, the pan-tilt continues to be controlled by the control command corresponding to the last frame before the target was lost,

and if the target is not recaptured within 200 ms of being lost, the tracking mode is terminated and the pan-tilt is controlled back to its initial angle state.

7. The machine vision-based pan-tilt control method applied to anti-low-slow small targets according to claim 1,

and after the target has been lost for 1 s, the UAV is controlled to return to the preset position.

8. The machine vision-based pan-tilt control method applied to the anti-low-slow small target according to claim 1, characterized in that:

when the UAV enters the tracking mode, a state estimate of the target is computed in real time, and the UAV is controlled to track or pursue the target according to that estimate.

Technical Field

The invention relates to a method for controlling a pan-tilt on an unmanned aerial vehicle, and in particular to a machine-vision-based pan-tilt control method applied to anti-low-slow-small targets.

Background

"Low-slow small" target (LSST) refers to small aircraft and airborne objects that have all or part of the characteristics of low altitude, ultra-low altitude flight (flight height below 1000 m), flight speed less than 200km/h, and are not easily found by radar. The low-slow small aircraft has the flying height of below 1000 meters, the flying speed per hour of less than 200 kilometers and the radar reflection area of less than 2 square meters, and the low-slow small target is widely used and rapidly developed due to the wide range of the low-slow small target (including small and medium sized airplanes, helicopters, gliders, hot air balloons, unmanned aerial vehicles and other general aviation equipment and aviation sports equipment) and the development of science and technology.

The development of low-slow-small targets has contributed to national economic growth, but it is a double-edged sword: incidents involving such targets have risen markedly in recent years, the threat they pose to important targets, key areas and major activities is increasingly prominent, and if misused by ill-intentioned actors they could cause unimaginable consequences. With the opening of low-altitude airspace in China, supervising and defending against low-slow-small targets has become an urgent problem, and accurately detecting, intercepting, tracking and striking such targets is both important and pressing.

Low-slow-small targets are hard to detect and hard to defend against, and existing interception approaches fall into soft kill and hard kill. Soft kill weakens the target's combat capability by jamming its communication link, its navigation and positioning system, or its detection equipment. Hard kill intervenes by dispatching helicopters or UAVs to strike the target, or by destroying its ground station. Because UAVs offer strong battlefield awareness, high flexibility, low cost and other advantages, striking low-slow-small targets with UAVs is a highly cost-effective measure.

The precondition for any handling of a low-slow-small target is identifying and tracking it in real time; in existing schemes the tracked target is identified with a camera carried on a pan-tilt.

For the above reasons, the present inventors have intensively studied existing pan-tilt control methods, with the aim of designing a machine-vision-based pan-tilt control method, applied to anti-low-slow-small targets, that solves the above problems.

Disclosure of Invention

In order to overcome these problems, the inventors carried out intensive research and designed a machine-vision-based pan-tilt control method applied to anti-low-slow-small targets. In this method, automatic control of the UAV and the pan-tilt lets the UAV cruise with the pan-tilt within a preset range and discover intruding low-slow-small targets in time; once a target is found, the pan-tilt rotates at a specially scheduled angular velocity, which reduces the chance that the target leaves the field of view and keeps it as close as possible to the center of the field, enhancing the tracking effect. This completed the invention.

Specifically, the invention aims to provide a machine-vision-based pan-tilt control method applied to anti-low-slow-small targets, comprising the following steps:

step 1, an unmanned aerial vehicle carrying a pan-tilt hovers on reaching a preset position, controls the pan-tilt and the camera on it to search for a target, and enters a search mode;

step 2, in the search mode, the camera reads the captured image in real time and judges whether the image contains a target; when it does, it is judged whether to enter a tracking mode;

step 3, after entering the tracking mode, a control command is generated in real time to adjust the rotational angular velocity of the pan-tilt according to the target's pixel deviation in the image,

in the tracking mode, if the target is lost only briefly, the tracking mode is maintained; if the target is lost for a long time, the tracking mode is terminated, the pan-tilt is controlled back to its initial angle state, and the UAV is controlled back to the preset position.

In step 2, the tracking mode is entered when the target's depth in the image is less than a set value and several consecutive frames all contain the target.

Wherein the set value is 30 meters, and the consecutive frames number 5 or more.

In step 3, after entering the tracking mode, the rotational angular velocity of the pan-tilt is first adjusted with small commands and then, after a certain time, with large commands.

In step 3, after entering the tracking mode, the desired rotational angular velocity of the pan-tilt is calculated according to the following formula (one), and the pan-tilt is controlled to rotate at that desired angular velocity;

where ω denotes the desired rotational angular velocity of the pan-tilt, k_pmax and k_pmin denote the maximum and minimum values of the pixel-deviation term weight, t denotes time, err denotes the pixel deviation, k_dmax and k_dmin denote the maximum and minimum values of the pixel-deviation-rate weight, and err′ denotes the rate of change of the pixel deviation.

Wherein, in step 3, after the target is lost in the tracking mode, the pan-tilt continues to be controlled by the control command corresponding to the last frame before the target was lost,

and if the target is not recaptured within 200 ms of being lost, the tracking mode is terminated and the pan-tilt is controlled back to its initial angle state.

And after the target has been lost for 1 s, the UAV is controlled to return to the preset position.

When the unmanned aerial vehicle enters the tracking mode, the state estimation of the target is obtained through real-time calculation, and the unmanned aerial vehicle is controlled to track or chase the target according to the state estimation.

The invention has the advantages that:

(1) with the machine-vision-based pan-tilt control method applied to anti-low-slow-small targets provided by the invention, the pan-tilt switches smoothly from a static state to a fast-tracking state without losing the target through motion blur;

(2) with the method provided by the invention, the navigation information required during a mission is easy to obtain;

(3) the control method provided by the invention is highly robust, so the pan-tilt remains in a controllable state even when unexpected situations occur.

Drawings

FIG. 1 is a logic diagram of the machine-vision-based pan-tilt control method applied to anti-low-slow-small targets according to a preferred embodiment of the invention;

FIG. 2 shows the normalized pixel deviations obtained in the example;

FIG. 3 shows a partial enlarged view of FIG. 2;

fig. 4 shows a comparison of an actual trajectory with an observed trajectory.

Detailed Description

The invention is explained in more detail below with reference to the figures and examples. The features and advantages of the present invention will become more apparent from the description.

The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.

The machine-vision-based pan-tilt control method applied to anti-low-slow-small targets provided by the invention, as shown in FIG. 1, comprises the following steps:

Step 1: an unmanned aerial vehicle carrying a pan-tilt hovers on reaching a preset position, controls the pan-tilt and the camera on it to search for a target, and enters a search mode. The preset position can be a coordinate point loaded into the UAV before takeoff, or a coordinate point transmitted to the UAV in real time by the ground control station.

Step 2: in the search mode, the camera reads the captured image in real time and judges whether the image contains a target; when it does, it is judged whether to enter a tracking mode. In the search mode the camera takes pictures in real time at a preset frequency, for example 25 Hz, and searches each frame for targets by image recognition; the possible shapes of targets are loaded into the image-recognition system through machine learning.
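A minimal sketch of this search loop, assuming a hypothetical `detect` callback in place of the machine-learned recognizer, which the text does not specify:

```python
FRAME_RATE_HZ = 25                     # capture frequency named in the text
FRAME_PERIOD_S = 1.0 / FRAME_RATE_HZ   # time between frames, 0.04 s

def search_frames(frames, detect):
    """Scan frames in capture order and return the index of the first
    frame in which the detector reports a target, or None if no frame
    does. detect(frame) must return a bounding box or None."""
    for i, frame in enumerate(frames):
        if detect(frame) is not None:
            return i
    return None
```

Here `frames` would stand for the 25 Hz camera stream and `detect` for the recognizer trained on the possible target shapes.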

Step 3: after entering the tracking mode, a control command is generated in real time to adjust the rotational angular velocity of the pan-tilt according to the target's pixel deviation in the image.

In the tracking mode, if the target is lost only briefly, the tracking mode is maintained; if the target is lost for a long time, the tracking mode is terminated, the pan-tilt is controlled back to its initial angle state, and the UAV is controlled back to the preset position. Just after the target is lost, it is considered likely to be re-identified within a short time, so the pan-tilt keeps executing the control command issued just before the loss; this keeps the pan-tilt's motion direction consistent with the target's to the greatest extent and creates the conditions for re-identifying the target. When the target has been lost for a long time and cannot be re-identified, it is treated as completely lost, in case the UAV is in unknown danger, and preparations are made for the next mission.
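This loss-handling policy can be sketched as a small decision rule; the 200 ms and 1 s thresholds come from the text, while the command and action names are illustrative placeholders:

```python
SHORT_LOSS_MS = 200    # within this, keep replaying the last command
LONG_LOSS_MS = 1000    # beyond this, the target counts as completely lost

def loss_action(ms_since_loss, last_cmd):
    """Return (gimbal_command, uav_action) for a given loss duration."""
    if ms_since_loss < SHORT_LOSS_MS:
        # brief loss: keep the pan-tilt moving with the target's last
        # known motion so the target can be re-identified
        return last_cmd, "hold"
    if ms_since_loss < LONG_LOSS_MS:
        # tracking mode terminated, pan-tilt back to its initial angle
        return "reset_to_initial_angle", "hold"
    # complete loss: also send the UAV back to the preset position
    return "reset_to_initial_angle", "return_to_preset_position"
```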

In a preferred embodiment, in step 2 the tracking mode is entered when the depth of the target in the image is less than a set value and several consecutive frames all contain the target.

Preferably, the set value is 30 meters, and the consecutive frames number 5 or more. These entry conditions ensure that, once the tracking mode is entered, the target is neither blurred by excessive distance nor frequently lost by drifting from the center of the field of view.
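A sketch of this entry gate, under one plausible reading in which each frame of the streak must both contain the target and put its estimated depth under the threshold:

```python
DEPTH_THRESHOLD_M = 30.0   # the "set value" from the text
MIN_STREAK = 5             # consecutive qualifying frames required

class TrackingGate:
    """Decides when to switch from the search mode to the tracking mode."""
    def __init__(self):
        self.streak = 0

    def update(self, depth_m):
        """depth_m is the target's estimated depth in this frame, or None
        if the frame contains no target. Returns True once the tracking
        mode may be entered."""
        if depth_m is None or depth_m >= DEPTH_THRESHOLD_M:
            self.streak = 0          # streak broken: no target or too far
        else:
            self.streak += 1
        return self.streak >= MIN_STREAK
```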

In a preferred embodiment, in step 3, after entering the tracking mode the rotational angular velocity of the pan-tilt is first adjusted with small commands, and after a certain time with large commands. When the pan-tilt starts moving, the target is likely to be near the edge of the field of view; issuing a large control command at this moment would make the pan-tilt move too fast, blur the image with motion, and leave the target unrecognizable even though it is inside the field of view, so the command issued in the first stage should not be too large. Once tracking has continued beyond a certain time, keeping a small command would leave the pan-tilt without enough maneuverability when the target moves sharply in front of it, and the target would be lost; at this stage the pan-tilt's maneuverability must therefore be increased by enlarging the control command.

Preferably, after entering the tracking mode, the expected rotational angular velocity of the pan-tilt is calculated by the following formula (one), and the pan-tilt is controlled to rotate at the calculated angular velocity;

where ω denotes the desired rotational angular velocity of the pan-tilt, k_pmax and k_pmin denote the maximum and minimum values of the pixel-deviation term weight, t denotes time, specifically the time elapsed since entering the tracking mode (timing starts on entry), err denotes the pixel deviation, k_dmax and k_dmin denote the maximum and minimum values of the pixel-deviation-rate weight, and err′ denotes the rate of change of the pixel deviation, i.e. the derivative of err.
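The image of formula (one) does not survive in this text, but the variable list, together with the small-then-large behavior described above, suggests a gain-scheduled PD law. The sketch below is one plausible realization, not the patent's exact formula; the exponential ramp, the time constant and all gain values are assumptions:

```python
import math

def desired_rate(err, err_dot, t, kp_min=0.5, kp_max=2.0,
                 kd_min=0.05, kd_max=0.2, tau=1.0):
    """Gain-scheduled PD command for the pan-tilt: the weights ramp from
    their minimum to their maximum as tracking time t grows, so the
    gimbal starts gently (avoiding motion blur) and becomes more
    aggressive later (keeping up with large target motion)."""
    ramp = 1.0 - math.exp(-t / tau)          # 0 at t = 0, tends to 1
    kp = kp_min + (kp_max - kp_min) * ramp   # pixel-deviation weight
    kd = kd_min + (kd_max - kd_min) * ramp   # deviation-rate weight
    return kp * err + kd * err_dot
```

At t = 0 the command equals kp_min·err, and for large t it approaches kp_max·err + kd_max·err_dot, matching the two-stage behavior.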

In a preferred embodiment, in step 3, in the tracking mode, after the target is lost the pan-tilt continues to be controlled by the control command corresponding to the last frame before the loss; in this way the target can still be recovered within 200 ms of being lost;

and if the target is not recaptured within 200 ms of being lost, the tracking mode is terminated and the pan-tilt is controlled back to its initial angle state; at that point the target is considered unrecoverable.

Setting this time condition maximizes the patrol efficiency of the UAV and its camera, and reduces the chance that a low-slow-small target infiltrates the patrolled area.

Preferably, after the target has been lost for 1 s, the UAV is controlled to return to the preset position, from which the next cruise operation can begin.

Preferably, when the UAV enters the tracking mode, a state estimate of the target is computed in real time, and the UAV is controlled to track or pursue the target according to that estimate. The UAV may carry equipment to strike or capture targets, so a target may also disappear from the field of view after being shot down or captured.

In a preferred embodiment, on entering the tracking mode the UAV tracks or pursues a discovered low-slow-small target with the pan-tilt and camera; the specific operation, such as tracking, striking or capturing, can be selected in advance by a preset instruction. The UAV's image-recognition system extracts at least 4 feature points from the target in each frame image and computes a state estimate of the target from them; the estimate comprises the target's position, attitude and velocity. A feature point is a distinctive, easily identified point on the target, chosen according to the target's type and shape, for example the four motor positions of a quadrotor UAV. The method specifically comprises the following steps:

step A, obtaining a rotation matrix through the pixel coordinates of the target characteristic points,

step B, obtaining the attitude of the target through the rotation matrix,

step C, obtaining the acceleration of the target through the attitude of the target,

and D, acquiring the actual position and speed of the target through the acceleration of the target.

Preferably, in step A, the rotation parameters of the target are obtained by the following formula (two):

where R denotes the rotation matrix, i.e. the 3×3 matrix that transforms from the orthogonal coordinate system O_aX_aY_aZ_a to the camera coordinate system O_cX_cY_cZ_c; the 9 entries of the rotation matrix are also called rotation parameters;

R′ denotes an arbitrary rotation matrix whose third column [r7 r8 r9]^T equals the rotation axis Z_a and which satisfies the orthogonality constraint of a rotation matrix;

the rotation axis Z_a is the unit vector from point P_i0 to point P_j0, i.e. the vector P_i0P_j0 divided by its modulus |P_i0P_j0|;

by extracting the pixel coordinates of 4 feature points of the target in each frame image, the two points P_i0 and P_j0 can be solved for, which determines the rotation axis Z_a in formula (two), i.e. [r7 r8 r9]^T;

rot(Z, α) denotes a rotation of the target about the Z axis by angle α;

c = cos α, s = sin α;

r1 to r9 denote the entries of the arbitrary 3×3 rotation matrix R′, whose third column [r7 r8 r9]^T equals the rotation axis Z_a.
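Formula (two), as described, factors R into an arbitrary rotation R′ whose third column is the axis Z_a, composed with rot(Z, α). A minimal numerical sketch, assuming the composition order R = R′·rot(Z, α) and a free Gram-Schmidt choice of R′'s first two columns (both assumptions, since the formula image is not reproduced in the text):

```python
import numpy as np

def rot_z(alpha):
    """rot(Z, alpha): rotation by alpha about the Z axis."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rotation_from_axis(z_axis, alpha):
    """Build R = R' @ rot(Z, alpha), where R' is an orthonormal matrix
    whose third column equals the (unit) rotation axis Z_a."""
    z = np.asarray(z_axis, dtype=float)
    z = z / np.linalg.norm(z)
    helper = np.array([1.0, 0.0, 0.0])
    if abs(z @ helper) > 0.9:              # avoid a near-parallel helper
        helper = np.array([0.0, 1.0, 0.0])
    x = np.cross(helper, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    r_prime = np.column_stack([x, y, z])   # orthonormal, det = +1
    return r_prime @ rot_z(alpha)
```

Because rot(Z, α) leaves the third basis vector fixed, the resulting R also has the axis Z_a as its third column, consistent with the constraint on R′.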

In step B, the attitude of the target is obtained by the following formula (three):

where θ1 denotes the pitch angle of the target, in the range [−90°, 90°];

θ2 also denotes the pitch angle of the target, used when the pitch angle is greater than 90° or less than −90°;

ψ1 denotes the yaw angle of the target solved for pitch angle θ1;

ψ2 denotes the yaw angle of the target solved for pitch angle θ2;

φ1 denotes the roll angle of the target solved for pitch angle θ1;

φ2 denotes the roll angle of the target solved for pitch angle θ2;

R31, R32 and R33 are the three elements of the third row of the rotation matrix R solved in formula (two);

R21 is the first element of the second row of the rotation matrix R solved in formula (two);

R11 is the first element of the first row of the rotation matrix R solved in formula (two);

asin denotes the arcsine function, and atan2 the four-quadrant arctangent function.

The target attitude comprises the three angles between the target's body coordinate system and the inertial coordinate system, namely the roll, pitch and yaw angles, all obtainable through formula (three).
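The quantities named in formula (three) match the standard extraction of ZYX (yaw-pitch-roll) Euler angles from R: pitch from asin on R31, yaw from atan2(R21, R11), roll from atan2(R32, R33), with a mirrored second solution θ2 = 180° − θ1. A sketch under that assumed convention (the gimbal-lock case cos θ = 0 is not handled):

```python
import math

def euler_from_rotation(R):
    """Extract the two (pitch, yaw, roll) solutions from a ZYX rotation
    matrix given as a 3x3 nested list; R[2][0] is the element R31."""
    theta1 = math.asin(-R[2][0])            # pitch, in [-pi/2, pi/2]
    theta2 = math.pi - theta1               # the mirrored solution
    c1, c2 = math.cos(theta1), math.cos(theta2)
    psi1 = math.atan2(R[1][0] / c1, R[0][0] / c1)   # yaw for theta1
    psi2 = math.atan2(R[1][0] / c2, R[0][0] / c2)   # yaw for theta2
    phi1 = math.atan2(R[2][1] / c1, R[2][2] / c1)   # roll for theta1
    phi2 = math.atan2(R[2][1] / c2, R[2][2] / c2)   # roll for theta2
    return (theta1, psi1, phi1), (theta2, psi2, phi2)
```

Dividing by cos θ before calling atan2 gives the sign flips that distinguish the two solution branches.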

In step C, the acceleration of the target is obtained by the following formula (four):

a = [a_x, a_y, a_z]^T    (four)

where a denotes the acceleration of the target;

a_x denotes the acceleration component along the X axis of the inertial coordinate system;

a_y denotes the acceleration component along the Y axis of the inertial coordinate system;

a_z denotes the acceleration component in the vertical direction, with a_z = 0;

g denotes the gravitational acceleration;

θ denotes the pitch angle of the target UAV solved in formula (three);

φ denotes the roll angle of the target UAV solved in formula (three);

ψ denotes the yaw angle of the target UAV solved in formula (three).
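The image of formula (four) is likewise absent; the variable list (a_z = 0, g, and the attitude angles from formula (three)) matches the common multirotor level-flight model in which thrust along the body z-axis balances gravity. The sketch below uses that model as an assumption, not as the patent's exact formula:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def level_flight_acceleration(theta, phi, psi):
    """Horizontal acceleration of a multirotor whose vertical thrust
    component cancels gravity (so a_z = 0), from pitch theta, roll phi
    and yaw psi (radians, ZYX convention assumed). Derived from
    a = (T/m) * R[:,2] - g*e3 with (T/m) * cos(theta)*cos(phi) = g."""
    ax = G * (math.cos(psi) * math.tan(theta)
              + math.sin(psi) * math.tan(phi) / math.cos(theta))
    ay = G * (math.sin(psi) * math.tan(theta)
              - math.cos(psi) * math.tan(phi) / math.cos(theta))
    return [ax, ay, 0.0]
```

A level attitude gives zero acceleration, and a forward pitch at zero yaw accelerates the target along the inertial X axis, as the model predicts.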

Preferably, when tracking the target, the UAV controls itself to stay in a horizontal plane parallel to the target's, and the target is assumed to fly stably in a horizontal plane.

In said step D, the actual position and velocity of the target are obtained by the following formula (five),

where K_k denotes the Kalman gain; γ_k denotes a binary random variable modeling intermittent measurements: γ_k = 1 if the target is detected in the k-th frame image, and γ_k = 0 if it is not;

w_k denotes the process noise corresponding to the k-th frame image, and w_{k−1} that corresponding to the (k−1)-th frame image;

X_{k|k−1} denotes the state quantity for the k-th frame image estimated from the (k−1)-th frame image;

X_{k−1} denotes the optimal estimated state quantity for the (k−1)-th frame image;

X_k denotes the optimal estimated state quantity for the k-th frame image;

Z_k denotes the measurement corresponding to the k-th frame image;

A denotes the process matrix and H the observation matrix;

p denotes the position of the target, v its velocity and a its acceleration; h denotes the sampling period of the image (the sampling frequency is preferably 25 Hz, i.e. h = 0.04 s); I_3 denotes the 3×3 identity matrix.
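Formula (five) describes a Kalman filter with intermittent measurements gated by γ_k. A one-axis sketch with a constant-acceleration state [p, v, a]; the noise covariances Q and R below are illustrative assumptions:

```python
import numpy as np

H_PERIOD = 1.0 / 25.0  # image sampling period h for a 25 Hz stream

# Constant-acceleration process model for one axis; state X = [p, v, a].
A = np.array([[1.0, H_PERIOD, 0.5 * H_PERIOD ** 2],
              [0.0, 1.0,      H_PERIOD],
              [0.0, 0.0,      1.0]])
H = np.array([[1.0, 0.0, 0.0]])   # only position is observed
Q = np.eye(3) * 1e-3              # process-noise covariance (assumed)
R = np.array([[1e-2]])            # measurement-noise covariance (assumed)

def kf_step(x, P, z, gamma):
    """One predict/update cycle. gamma = 0 (target not detected in this
    frame) skips the correction, exactly the role of the binary
    variable gamma_k in formula (five)."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    if gamma == 0:
        return x_pred, P_pred                   # prediction only
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain K_k
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```

Running one step per frame yields the per-frame state estimate used to regulate the UAV's distance to the target; the other two axes are handled identically.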

By this method, a state estimate of the target is obtained with every frame image, so the velocity of the UAV can be controlled accordingly to keep its distance to the target within a certain range, for example within 30 meters, gradually closing on the target or holding the distance constant, which helps the pan-tilt and camera capture the target more clearly.

Examples

A low-slow-small target was selected to move in a plane at a speed of 12 m/s; its motion trajectory is shown as the solid line in FIG. 4. A UAV carrying a pan-tilt and camera tracked this target. After entering the search mode, the UAV found the target; 5 consecutive frames contained the target and its depth was judged to be 23 m, so the tracking mode was entered, with time 0 defined as the moment of entry. The tracking mode lasted more than 6 seconds, during which the pan-tilt was controlled to rotate by the following formula (one); at one point the target was lost for less than 200 ms, during which the pan-tilt was controlled by the control command corresponding to the last frame before the loss;

where ω denotes the desired rotational angular velocity of the pan-tilt, k_pmax and k_pmin denote the maximum and minimum values of the pixel-deviation term weight, t denotes the time after entering the tracking mode, err denotes the pixel deviation, k_dmax and k_dmin denote the maximum and minimum values of the pixel-deviation-rate weight, and err′ denotes the rate of change of the pixel deviation.

In the tracking mode, the UAV controlled itself to follow the target and maintain a fixed distance, namely the distance between the UAV and the target on entering the tracking mode, 23 meters. Specifically, the UAV obtained the target's position and velocity information in real time through the following steps:

step A, extracting 4 characteristic points from the target in each frame of image, and obtaining a rotation matrix according to the pixel coordinates of the characteristic points,

step B, obtaining the attitude of the target through the rotation matrix,

step C, obtaining the acceleration of the target through the attitude of the target,

and D, acquiring the actual position and speed of the target through the acceleration of the target.

Wherein the rotation parameters of the target are obtained by the following formula (two):

where R denotes the rotation matrix, i.e. the 3×3 matrix that transforms from the orthogonal coordinate system O_aX_aY_aZ_a to the camera coordinate system O_cX_cY_cZ_c; the 9 entries of the rotation matrix are also called rotation parameters;

R′ denotes an arbitrary rotation matrix whose third column [r7 r8 r9]^T equals the rotation axis Z_a and which satisfies the orthogonality constraint of a rotation matrix;

the rotation axis Z_a is the unit vector from point P_i0 to point P_j0, i.e. the vector P_i0P_j0 divided by its modulus |P_i0P_j0|;

by extracting the pixel coordinates of 4 feature points of the target in each frame image, the two points P_i0 and P_j0 can be solved for, which determines the rotation axis Z_a in formula (two), i.e. [r7 r8 r9]^T;

rot(Z, α) denotes a rotation of the target about the Z axis by angle α;

c = cos α, s = sin α;

r1 to r9 denote the entries of the arbitrary 3×3 rotation matrix R′, whose third column [r7 r8 r9]^T equals the rotation axis Z_a.

In step B, the attitude of the target is obtained by the following formula (three):

where θ1 denotes the pitch angle of the target, in the range [−90°, 90°];

θ2 also denotes the pitch angle of the target, used when the pitch angle is greater than 90° or less than −90°;

ψ1 denotes the yaw angle of the target solved for pitch angle θ1;

ψ2 denotes the yaw angle of the target solved for pitch angle θ2;

φ1 denotes the roll angle of the target solved for pitch angle θ1;

φ2 denotes the roll angle of the target solved for pitch angle θ2;

R31, R32 and R33 are the three elements of the third row of the rotation matrix R solved in formula (two);

R21 is the first element of the second row of the rotation matrix R solved in formula (two);

R11 is the first element of the first row of the rotation matrix R solved in formula (two);

asin denotes the arcsine function, and atan2 the four-quadrant arctangent function.

The target attitude comprises the three angles between the target's body coordinate system and the inertial coordinate system, namely the roll, pitch and yaw angles, all obtainable through formula (three).

In step C, the acceleration of the target is obtained by the following formula (four):

a = [a_x, a_y, a_z]^T    (four)

where a denotes the acceleration of the target;

a_x denotes the acceleration component along the X axis of the inertial coordinate system;

a_y denotes the acceleration component along the Y axis of the inertial coordinate system;

a_z denotes the acceleration component in the vertical direction, with a_z = 0;

g denotes the gravitational acceleration;

θ denotes the pitch angle of the target UAV solved in formula (three);

φ denotes the roll angle of the target UAV solved in formula (three);

ψ denotes the yaw angle of the target UAV solved in formula (three).

Preferably, when tracking the target, the UAV controls itself to stay in a horizontal plane parallel to the target's, and the target is assumed to fly stably in a horizontal plane.

In said step D, the actual position and velocity of the target are obtained by the following formula (five),

where K_k denotes the Kalman gain; γ_k denotes a binary random variable modeling intermittent measurements: γ_k = 1 if the target is detected in the k-th frame image, and γ_k = 0 if it is not;

w_k denotes the process noise corresponding to the k-th frame image, and w_{k−1} that corresponding to the (k−1)-th frame image;

X_{k|k−1} denotes the state quantity for the k-th frame image estimated from the (k−1)-th frame image;

X_{k−1} denotes the optimal estimated state quantity for the (k−1)-th frame image;

X_k denotes the optimal estimated state quantity for the k-th frame image;

Z_k denotes the measurement corresponding to the k-th frame image;

A denotes the process matrix and H the observation matrix;

p denotes the position of the target, v its velocity and a its acceleration; h denotes the sampling period of the image (the sampling frequency is preferably 25 Hz, i.e. h = 0.04 s); I_3 denotes the 3×3 identity matrix.

The target trajectory obtained by this method during the first 5 seconds of the tracking mode was selected and compared with the target's true trajectory; the resulting trajectory deviation is shown in FIG. 2 and FIG. 3, and the observed target position trajectory is shown as the dashed line in FIG. 4.

As can be seen from FIG. 2 and FIG. 3, from about 1 s after the pan-tilt and camera enter the tracking mode, the target stays essentially within an error range of 0.1 (against a total range of −1 to 1), and the target is never rendered unrecognizable by excessive motion blur (in which case the pixel deviation would read 0). In addition, when the target was lost mid-tracking, continuing to control the pan-tilt with the control command corresponding to the last frame before the loss allowed the target to be recaptured within 200 ms and tracking to continue.

As can be seen from fig. 4, the observed target trajectory substantially coincides with the real target trajectory, the deviation between the two trajectories is small, and the observed trajectory can be used to characterize the real trajectory.

The present invention has been described above with reference to preferred embodiments, which are merely exemplary and illustrative. Various substitutions and modifications may be made to the invention on this basis, and all such substitutions and modifications fall within its protection scope.
