Intelligent combat confrontation training system and method for team tactics

Document No.: 131993 · Published: 2021-10-22

Reading note: this invention, "Intelligent combat confrontation training system and method for team tactics" (一种分队战术智能对抗训练系统及方法), was designed and created by 韩云武, 林基灿, 宋岩 and 刘旭 on 2021-06-26. Abstract: the invention discloses a team-tactics intelligent confrontation training system and method. One side uses real persons with real guns, while the other uses target drones coordinated with laser emitters, so that live-fire exercises can be conducted. By scheduling how many drones stand and fall and when, together with the firing timing of the laser emitters, real combat scenes are simulated. Compared with the prior art, the system is more realistic, has greater operational guidance value, and is suitable for military instruction.

1. A team-tactics intelligent confrontation training system, characterized in that: the system comprises a video acquisition unit, a pan-tilt head, a laser emission unit, a central control system, a target drone unit, a target unit and an obstacle unit;

the video acquisition unit, the pan-tilt head, the laser emission unit, the central control system and the target drone unit are combined into one joint system that forms a simulated confrontation with the target unit; the target unit consists of real persons holding real guns; the target drone unit and the target unit are respectively provided with a plurality of target drones and targets;

the video acquisition unit and the laser emission unit are mounted on the pan-tilt head;

the video acquisition unit, the laser emission unit, the pan-tilt head and the target drone unit are all electrically connected to and controlled by the central control system.

2. The team-tactics intelligent confrontation training system of claim 1, wherein: an SSD target tracking and detection algorithm model is provided in the central control system, and an ROS publisher is added to the model.

3. The team-tactics intelligent confrontation training system of claim 1, wherein: the pan-tilt head rotates through 360 degrees, and the video acquisition unit and the laser emission unit can also rotate about their own axes.

4. A training method using the system of any one of claims 1-3, characterized in that: the method comprises target identification and tracking, intelligent striking, and defense processes; wherein:

in the target identification and tracking stage, an SSD target tracking and detection algorithm model is used to capture the target and calculate its coordinates; the pixel area occupied by the target, the target's moving speed, the target's moving acceleration and the target density are computed, and the comprehensive threat degree of the target is then obtained, expressed as follows:

the comprehensive threat degree is calculated as:

W = w1·s + w2·v + w3·a + w4·ρ

W represents the comprehensive threat degree of the target; s represents the pixel area occupied by the target; v represents the relative moving speed of the target; a represents the relative moving acceleration of the target; ρ represents the target density within a specific region, the specific region being the circular area centred on the target with a radius of one hundredth of the camera pixel width; and w1, w2, w3 and w4 represent the corresponding weighting coefficients;

the targets are sorted by their W values, and the target with the highest threat degree is struck first;

in the intelligent striking and defense stage, the pan-tilt head and laser emitter are directed to strike the target with the highest comprehensive threat degree first; if hit information from the target is received within a set time after the laser fires, the attack is deemed successful and the next target is struck in order; if no hit information is received within the set time, the target is deemed not hit and tracking and striking continue;

meanwhile, whether a target drone retains striking and defending capability is judged from the ring count of its hits and the total number of hits within a given time:

when the ring count of a hit on the target drone is greater than a set value, the drone is deemed to fall and simulate death, and after being re-erected it serves as another live fighter;

when the ring count of hits on the target drone is less than or equal to the set value and the total number of hits is less than a set value, the drone is deemed lightly damaged and still has fighting capacity;

when the ring count of hits on the target drone is less than or equal to the set value and the total number of hits is greater than or equal to the set value, the drone is deemed heavily damaged; although it falls, it still has a chance to fight;

when the average time that drones at a given position remain standing before being knocked down is less than a set value, the opposing side's firepower there is deemed strong; the number of drones erected at that position and the frequency with which knocked-down drones are re-erected are increased accordingly, and are reduced at other positions.

5. Training method according to claim 4, characterized in that:

the calculation method of the occupied area of the target pixel comprises the following steps:

S10the area of the pixel occupied by the front image of the male with the standard weight of 170cm and height outside the camera 10 m, SiThe pixel area occupied by the current target, k is an adjustment parameter, and k is 100S200/S10,S200The area of the pixel is occupied by the front image of a male with the camera 200 meters out, height 170cm and standard weight.

6. Training method according to claim 4, characterized in that:

the method for calculating the relative moving speed of the target comprises the following steps:

vmaxtaking world record speed, v, of hectometer athleteiThe moving speed of the current target;

the method for calculating the relative movement acceleration of the target comprises the following steps:

amaxtaking a world record of the acceleration of the hectometer athlete at the time of starting, aiIs the moving acceleration of the current target.

7. Training method according to claim 4, characterized in that:

the method for calculating the target density under the specific area comprises the following steps:

j is the total target number for a particular area.

8. Training method according to claim 4, characterized in that: whether the target drone has been hit is judged as follows:

the RGB values of all pixels in two image frames captured at successive moments are compared; if they are identical, no bullet has struck; if they differ and the region covered by the differing pixel values spans between 0.5 and 2 times the bullet diameter, a hit is registered.

9. Training method according to claim 4, characterized in that:

when the comprehensive threat degree of the targets is calculated, the number of targets captured in each image frame is counted, and the number n of drones to erect is determined from the target count m:

⌈·⌉ denotes the ceiling (round-up) operator.

10. Training method according to claim 4, characterized in that: a delay time t is set from the moment the target drone stands up until the laser emitter fires:

ti = ti-1 + (10 - b)(tj - ti-1)/10

ti is the delay before the target is struck after the drone stands (when i = 1, ti is an initialization delay); b is the highest ring number among the bullet holes on the current drone; tj is the time the previous drone remained standing before being knocked down (when j = 1, tj = 0).

Technical Field

The invention relates to the technical field of artificial intelligence for military training, and in particular to an intelligent confrontation system and method for team tactical exercise training, applied to military drill training.

Background

The army squad tactical confrontation training system is a simulation training system for tactical cooperation and joint confrontation; its main functions include battlefield simulation, battle strategy verification and combat effect evaluation. The tactical confrontation training system was invented to improve commanders' ability to organize and command teamwork and the team's ability to fight cooperatively.

A tactical confrontation training system involves combat with real guns and live ammunition. Shooting training for soldiers and police, especially in a battlefield environment, is a fundamental factor in building fighting capacity. Because live-fire shooting carries high safety risks, such training has long centred on shooting at static fixed targets, with live-fire, one-way field exercises conducted only when needed. In a live-ammunition exercise, for safety, one of the attacking and defending sides uses live rounds while the other is virtual, so training against a genuinely armed opponent cannot be obtained; despite these exercises, real battlefield experience remains limited. With the deep development of military technology, army operations are evolving toward selective strikes and coordinated action; combat capability can no longer be assessed simply from weapon performance or the mere accumulation of damage, and exercises must highlight combined, coordinated and integrated operations. Current exercises cannot reproduce a real battlefield environment: flexibility is low, the environment changes little, fixed patterns of thinking form easily, and the severity of problems is misjudged.

The operational command concept is a creative leading art applied to combat operations on a foundation of science and experience, covering strategic decision-making, force deployment, tactical application, exploitation of opportunities and other aspects. Combat experience is the product of combat activity: the perceptual cognition, grasp of laws and summary of methods formed by armies or commanders in long-term combat practice. It is therefore very meaningful to develop an intelligent confrontation system and method for team tactical training as a guide to practical exercises.

Disclosure of Invention

Therefore, the invention provides a system and method for team-tactics intelligent confrontation training. The system and method use the video acquisition unit to gather information on all opposing targets within the field of view, convert the image information into digital information through processing, and calculate and analyse it to obtain command information for the high-precision laser emission unit, thereby striking the targets. According to the distribution of the attacking side, the defending side's simulated forces are mobilized and strategic deployments are made, providing a live confrontation environment for military training and greatly improving operational skill. The system and method are of important guiding significance for improving the safety, flexibility and efficiency of live-fire training with real soldiers, improving the evaluation of combat effectiveness, and strengthening troops' ability to handle emergencies and fight in real combat.

The technical scheme adopted by the invention is as follows:

An intelligent confrontation training system for team tactics is provided with a video acquisition unit, a pan-tilt head, a laser emission unit, a central control system, a target drone unit, a target unit and an obstacle unit;

the video acquisition unit, the pan-tilt head, the laser emission unit, the central control system and the target drone unit are combined into one joint system that forms a simulated confrontation with the target unit; the target unit consists of real persons holding real guns; the target drone unit and the target unit are respectively provided with a plurality of target drones and targets;

the video acquisition unit and the laser emission unit are mounted on the pan-tilt head;

the video acquisition unit, the laser emission unit, the pan-tilt head and the target drone unit are all electrically connected to and controlled by the central control system.

Further, an SSD target tracking and detection algorithm model is provided in the central control system, and an ROS publisher is added to the model.

Further, the pan-tilt head rotates through 360 degrees, and the video acquisition unit and the laser emission unit can also rotate about their own axes.

An intelligent confrontation training method for team tactics comprises target identification and tracking, intelligent striking, and defense processes; wherein:

in the target identification and tracking stage, an SSD target tracking and detection algorithm model is used to capture the target and calculate its coordinates; the pixel area occupied by the target, the target's moving speed, the target's moving acceleration and the target density are computed, and the comprehensive threat degree of the target is then obtained, expressed as follows:

the comprehensive threat degree is calculated as:

W = w1·s + w2·v + w3·a + w4·ρ

W represents the comprehensive threat degree of the target; s represents the pixel area occupied by the target; v represents the relative moving speed of the target; a represents the relative moving acceleration of the target; ρ represents the target density within a specific region, the specific region being the circular area centred on the target with a radius of one hundredth of the camera pixel width; and w1, w2, w3 and w4 represent the corresponding weighting coefficients;

the targets are sorted by their W values, and the target with the highest threat degree is struck first;
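The scoring-and-sorting step above can be sketched in Python. The weight values here are illustrative assumptions; the text leaves w1..w4 unspecified:

```python
# Illustrative weighting coefficients; the patent leaves w1..w4 unspecified.
W1, W2, W3, W4 = 0.4, 0.3, 0.2, 0.1

def threat_score(s, v, a, rho, w=(W1, W2, W3, W4)):
    """Comprehensive threat degree W = w1*s + w2*v + w3*a + w4*rho."""
    w1, w2, w3, w4 = w
    return w1 * s + w2 * v + w3 * a + w4 * rho

def rank_targets(targets):
    """Sort targets by descending W so the most threatening is struck first."""
    return sorted(
        targets,
        key=lambda t: threat_score(t["s"], t["v"], t["a"], t["rho"]),
        reverse=True,
    )
```

Each input (s, v, a, ρ) is assumed to be normalized to a comparable scale before weighting, as the surrounding claims describe.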

in the intelligent striking and defense stage, the pan-tilt head and laser emitter are directed to strike the target with the highest comprehensive threat degree first; if hit information from the target is received within a set time after the laser fires, the attack is deemed successful and the next target is struck in order; if no hit information is received within the set time, the target is deemed not hit and tracking and striking continue;

meanwhile, whether a target drone retains striking and defending capability is judged from the ring count of its hits and the total number of hits within a given time:

when the ring count of a hit on the target drone is greater than a set value, the drone is deemed to fall and simulate death, and after being re-erected it serves as another live fighter;

when the ring count of hits on the target drone is less than or equal to the set value and the total number of hits is less than a set value, the drone is deemed lightly damaged and still has fighting capacity;

when the ring count of hits on the target drone is less than or equal to the set value and the total number of hits is greater than or equal to the set value, the drone is deemed heavily damaged; although it falls, it still has a chance to fight;

when the average time that drones at a given position remain standing before being knocked down is less than a set value, the opposing side's firepower there is deemed strong; the number of drones erected at that position and the frequency with which knocked-down drones are re-erected are increased accordingly, and are reduced at other positions.
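The three damage states above can be sketched as a small classifier. The threshold values are illustrative assumptions, since the text only speaks of "set values":

```python
def drone_state(max_ring, total_hits, ring_limit=8, hit_limit=5):
    """Classify a drone's damage state from its highest hit ring count and
    its total hit count. ring_limit/hit_limit are assumed thresholds."""
    if max_ring > ring_limit:
        return "killed"        # falls to simulate death; re-erects as a new fighter
    if total_hits < hit_limit:
        return "light_damage"  # still has full fighting capacity
    return "heavy_damage"      # falls, but may still get a chance to fight
```

The returned state would drive whether the central control system commands the drone to fall, stay up, or re-erect.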

Further, the pixel area occupied by the target is calculated as follows:

S10 is the pixel area occupied by the frontal image of a male of 170 cm height and standard weight at 10 m from the camera; Si is the pixel area occupied by the current target; k is an adjustment parameter, k = 100·S200/S10, where S200 is the pixel area occupied by the frontal image of a male of 170 cm height and standard weight at 200 m from the camera.

Further, the relative moving speed of the target is calculated as follows:

vmax is taken as the world-record speed of a 100 m sprinter; vi is the moving speed of the current target;

the relative moving acceleration of the target is calculated as follows:

amax is taken as the world-record acceleration of a 100 m sprinter at the start; ai is the moving acceleration of the current target.

Further, the target density within the specific region is calculated as follows:

j is the total number of targets within the specific region.

Further, whether the target drone has been hit is judged as follows:

the RGB values of all pixels in two image frames captured at successive moments are compared; if they are identical, no bullet has struck; if they differ and the region covered by the differing pixel values spans between 0.5 and 2 times the bullet diameter, a hit is registered.
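A minimal sketch of this frame-differencing rule, assuming frames from the camera below each drone arrive as equal-sized lists of RGB tuples, and measuring the changed region by its bounding-box extent in pixels:

```python
def detect_hit(prev, curr, bullet_diam_px):
    """Compare two frames pixel-by-pixel. A hit is registered when the
    changed region's extent lies between 0.5x and 2x the bullet diameter,
    per the rule in the text. Frames are lists of rows of (r, g, b)."""
    changed = [(x, y)
               for y, (row_p, row_c) in enumerate(zip(prev, curr))
               for x, (p, c) in enumerate(zip(row_p, row_c))
               if p != c]
    if not changed:
        return False  # identical frames: no bullet
    xs = [x for x, _ in changed]
    ys = [y for _, y in changed]
    extent = max(max(xs) - min(xs), max(ys) - min(ys)) + 1
    return 0.5 * bullet_diam_px <= extent <= 2 * bullet_diam_px
```

A real implementation would difference camera frames with an image library and tolerate sensor noise; this version only illustrates the size-window test that separates bullet marks from larger scene changes.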

Further, when the comprehensive threat degree of the targets is calculated, the number of targets captured in each image frame is counted, and the number n of drones to erect is determined from the target count m:

⌈·⌉ denotes the ceiling (round-up) operator.
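The mapping from target count m to standing-drone count n uses a round-up operation. A sketch follows, with a hypothetical proportionality ratio standing in for the source's formula, which is not reproduced in the text:

```python
import math

def standing_count(m, ratio=0.5, n_max=None):
    """Number of drones to erect for m detected attackers: n = ceil(ratio * m).
    The ratio is a hypothetical placeholder, not the patent's actual mapping;
    n_max optionally caps n at the number of drones physically available."""
    n = math.ceil(ratio * m)
    return min(n, n_max) if n_max is not None else n
```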

Further, a delay time t is set from the moment the target drone stands up until the laser emitter fires:

ti = ti-1 + (10 - b)(tj - ti-1)/10

ti is the delay before the target is struck after the drone stands (when i = 1, ti is an initialization delay); b is the highest ring number among the bullet holes on the current drone; tj is the time the previous drone remained standing before being knocked down (when j = 1, tj = 0).
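The delay-update rule can be written directly as a function of the quantities just defined:

```python
def next_delay(t_prev, b, t_stood):
    """Delay update t_i = t_{i-1} + (10 - b) * (t_j - t_{i-1}) / 10.

    t_prev  -- previous delay t_{i-1} (t_1 is a chosen initial delay)
    b       -- highest ring number among bullet holes on the current drone
    t_stood -- time t_j the previous drone stood before being knocked down
               (0 for the first drone)
    """
    return t_prev + (10 - b) * (t_stood - t_prev) / 10
```

Reading the formula: a perfect shot (b = 10) leaves the delay unchanged, while lower accuracy moves the delay toward the time the previous drone survived, giving the trainee more time.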

Compared with the prior art, the innovations of the invention are embodied in the following aspects:

1. The invention realizes a realistic combat scene for live-ammunition exercises. In existing exercises, first, the non-live-fire side merely performs continuous laser scanning rather than precise attacks; the continuous scanning plane is visible and predictable, so realism cannot be simulated. Second, the threat posed by the attacking (live-fire) side to the defenders is not accurately calculated. Third, the timing and angle of the defenders' laser scanning are independent of the attackers' speed and posture; the laser and the defenders' targets (dummies) are not fused into one whole and cannot simulate a real soldier in dynamic combat. In the invention, one side can use live rounds while the other uses target drones acting together with lasers to simulate armed real persons, so the confrontation is realistic and targeted.

2. A real-time confrontation mechanism between targets (dummies) and real persons is proposed. In the prior art, first, targets are simply placed at designated positions in advance and cannot move; the opposing side (real persons) can memorize their positions and pre-aim, so reaction speed and flexible maneuvering are not exercised, which does not match real war. Second, a single target's position and action are fixed, so the mutual covering and interaction of several soldiers at one position cannot be simulated. In the invention, multiple targets are set up and, according to the number of attackers, a certain number of targets stand up and fall down at random, so the opposing side cannot locate them from memory. After a target is knocked down it can be re-erected at random according to the principle of force-on-force confrontation, simulating other imagined enemies in the same place and the uncertainty of a real battlefield, so that soldiers experience an actual battlefield environment in training. In addition, the invention adds simulation of the cold-shot scenario: even after falling, a target can simulate firing from the fallen position (a cold shot in the general sense).

3. The concept of adjustable confrontation timing is proposed. Because a machine reacts faster than a person, a delayed-attack concept is introduced: after a target stands up, the opposing side is given a certain time, and if it has not spotted and hit the target within that time, it is attacked by the laser once the delay expires. The laser-attack delay is then adjusted gradually according to training performance, improving the soldiers' reaction speed.

4. The target identification system has not only a target capture function but also a target counting function: when the infrared camera identifies enemy targets, counting is triggered and the number of drones to erect (the number of combat personnel) is calculated accordingly, rather than drones standing and falling blindly.

5. During target identification and tracking, after the weight parameters of the neural network layers have been adjusted through repeated training, the coordinates, speed and threat degree of each enemy target are calculated; the targets are numbered and ranked and continuously locked and precisely tracked until the target is disabled, improving strike efficiency and accuracy.

6. The number of training sessions and the improvement of soldiers' fighting capacity are recorded intelligently, and the compensation applied to the interval between a drone standing and the laser emitter firing is adjusted flexibly as required. Changing the delay time simulates the defenders' reaction capability in confrontation: the shorter the delay, the stronger the simulated defenders' reaction.

7. The degree of damage is determined by quantitative analysis of hits on the target surface, deciding whether the target retains combat capability. Meanwhile, the enemy's fire concentration point is judged from the directions of incoming shots, and counterattacks are concentrated on it, making the counterattack clear and targeted.

8. The target drones continuously determine their standing positions and numbers from scene changes, according to the number and direction of opponents in the field of view, simulating the allocation of personnel and the deployment of forces in real combat.

Drawings

Fig. 1 is a diagram of the team-tactics intelligent confrontation training system of the present invention.

Fig. 2 is a diagram of hardware components of the intelligent countermeasure training system according to the embodiment.

Fig. 3 is a diagram showing an interlocking relationship between the functional units.

FIG. 4 is a block diagram of an SSD object tracking detection algorithm.

FIG. 5 is a diagram of a target tracking behavior tree structure.

Detailed Description

The invention is described in detail below with reference to the figures and examples. However, it should be understood by those skilled in the art that the following examples are not intended to limit the scope of the present invention, and any equivalent changes or modifications made within the spirit of the present invention should be considered as falling within the scope of the present invention.

As shown in Figure 1, the invention provides an intelligent confrontation training system for team tactics under which simulated combat training is carried out: one side consists of real persons holding real guns with live ammunition, and the other side consists of armed dummies simulated by target drones linked with laser emitters. The system and the exercise process are described below taking the real persons as the attacking side and the target drones as the defending side, although the real persons may equally serve as defenders and the drones as attackers.

The system mainly comprises a video acquisition unit 1, a pan-tilt head 2, a laser emission unit 3, a central control system 4, a target drone unit 5, a target unit 6 and an obstacle unit 7.

(1) Video acquisition unit

The video acquisition unit 1 is a 360-degree omnidirectional video collector built around a high-definition camera, paired with high-definition wide-angle infrared cameras of 5 mm and 6 mm focal length. It is mounted at the front of the pan-tilt head 2 so as to acquire a complete image. The video acquisition unit 1 is electrically connected to the central control system 4 and sends image information to it.

(2) Pan-tilt head

The pan-tilt head 2 carries a rotating platform that turns flexibly through 360 degrees and serves as a reliable carrier for the video acquisition unit 1 and the laser emission unit 3, which can also rotate themselves. The pan-tilt head 2 is electrically connected to the central control system 4 and moves on its instructions, adjusting its angle to achieve target locking and precise striking by the laser emitter.

(3) Laser emitting unit

The laser emission unit 3 is mounted on the rotating platform of the pan-tilt head 2 and, carried by the platform, can fire across a 360-degree omnidirectional scene. The laser emitter is a green high-power, thick-beam laser that outputs stable laser light continuously, with a range of up to 1 km, so enemy targets can be struck over a wide area. It can adjust the beam size at long distance, tolerates harsh operating temperatures, is powered from a wide DC voltage range so that many power supplies are compatible, and completes its output actions on commands from the processor.

The laser emission unit 3 is electrically connected with the central control system 4 and receives an emission instruction from the central control system 4.

(4) Central control system

The central control system 4 is the control core of the training system and coordinates the various electrically controlled actions in the training scene. It is electrically connected to the video acquisition unit 1, the pan-tilt head 2, the laser emission unit 3, the target drone unit 5 and so on; it receives acquisition information from the video acquisition device, makes decisions through analysis and processing, steers the pan-tilt head 2 so that the laser emitter finds the precise position, controls the laser emitter to fire at the target, and at the same time receives target position information from the target drone unit 5 and commands the drones to stand or fall.

The central control system 4 is a large processing system that uses the Manihard 2 as its core processor and integrates an image processing module, an ROS system, a target recognition module, a tracking module and a decision module.

The central control system 4 has an image processing function and can quickly process the image information acquired by the video acquisition unit 1. It receives the enemy target information acquired by the video acquisition unit 1, performs matrix analysis and coordinate calculation to obtain a centre-point coordinate signal, reads the stored information in the library functions of the SSD target tracking and detection algorithm, performs coordinate processing and transformation, and calculates image information such as the image centre coordinates and the occupied pixel size.

The central control system 4 has excellent processing capability and response speed and is flexible to extend. An ROS system is built, and the publishers and subscribers it creates allow data to pass between algorithms written in different languages: target positions and other information obtained by the image tracking model are transmitted to the decision module; once the decision algorithm runs stably it performs intelligent analysis and finally outputs precise strike instructions to the high-precision launching pan-tilt head and the drone controller, which cooperate to carry out attack and defense.
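The publisher/subscriber decoupling described above can be illustrated with a plain-Python stand-in for ROS topics. A real implementation would use rospy or rclpy; the topic name and message fields here are assumptions for illustration:

```python
from collections import defaultdict

class TopicBus:
    """Minimal in-process stand-in for ROS topic plumbing."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        for cb in self._subs[topic]:
            cb(msg)

bus = TopicBus()
strike_queue = []

# Decision module: queue each published target for the gimbal/laser to strike.
bus.subscribe("/tracker/targets", lambda msg: strike_queue.append(msg))

# Tracking model publishes a detected target's coordinates and threat score.
bus.publish("/tracker/targets", {"id": 7, "x": 12.5, "y": 3.0, "W": 0.83})
```

The point of the pattern is that the tracking and decision modules never call each other directly, so they can be written in different languages and replaced independently, which is what the ROS layer provides in the real system.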

(5) Drone unit and target unit

The target drone unit 5 comprises a plurality of target drones, and the target unit 6 is a plurality of real persons holding guns; the two units form the opposing sides of the engagement. The target unit 6 may use live fire or electric rounds. Each drone stands or falls on instructions from the central control system 4 to simulate a real defender, while the laser emitter simulates the defender's attack; linked together, the drone and the laser emitter realize a complete simulation of an armed real person. A camera is mounted below each drone to photograph hits on the target surface; a drone that falls to simulate death and then stands again represents another live force.

In the linked operation of the drones and the laser emitters, the delay from a drone standing or falling to the laser firing can be adjusted to control training difficulty, the drones' standing time is adjusted to train soldiers' reaction, and the number and frequency of drones standing in different directions are adjusted to simulate the defenders' deployment and movement of forces.

Based on the above system, the invention provides an intelligent team-tactics confrontation training method: the video acquisition device shoots continuously; when a target enters the shooting range, the target tracking module extracts the coordinate information inside the SSD target tracking detection algorithm model and converts the relative coordinates into geodetic coordinates so as to be consistent with the coordinates of the laser transmitter; the center coordinates and speed of each target are calculated, the targets are numbered and sorted, their threat degree to the local side is judged by the decision algorithm, and an optimized search-and-strike scheme is generated; each target is then continuously locked and accurately tracked until it is hit and a casualty is scored.

The specific embodiment is as follows:

1. Target recognition

Target recognition comprises the acquisition and processing of images and the publishing of information.

(1) Image acquisition and processing

The invention adopts an SSD target tracking detection algorithm model to capture targets and calculate coordinates. The image processing module marks each target with a colored rectangular prediction box, annotates the label and the degree of similarity between the label and the target, extracts from the acquired position information of the enemy target the coordinate information inside the SSD model, realizes the conversion from relative coordinates to geodetic coordinates, and outputs parameters that intuitively reflect the target's threat degree.

The SSD target tracking detection algorithm model takes the VGG-16 network as its base model and adds an RPN-style feature-pyramid detection mode. The SSD algorithm combines the regression idea of YOLO with the anchor-box mechanism of Faster R-CNN; by exploiting the complementary strengths of the multi-scale feature layers, it overcomes the shortcoming of detection networks that use only a single feature layer, while remaining simpler and achieving higher accuracy. Compared with YOLO, which uses fully-connected layers and can therefore accept only fixed-size input, the SSD is fully convolutional and abandons the fully-connected layers, so it can accept pictures of any size and produce output of corresponding size, and its final detection effect is superior to that of single-feature-layer target detection algorithms. The network structure is as follows:

(1) the feedforward neural network uses VGG, with the final fully-connected layers discarded;

(2) several convolution layers are appended after the feedforward network to provide feature maps at different scales;

(3) for each point on a given feature map, default boxes (hereinafter "prior boxes") of specified number and sizes are generated by setting the corresponding parameters, so that different targets in the feature map can be detected;

(4) convolution kernels are applied to the feature map to generate a category score and predicted coordinates for each prior box; the length of the prediction vector generated at each point is (number of classes + number of coordinates) × the number of prior boxes at that point;

(5) finally, suitable prediction boxes are screened out through Fast NMS non-maximum suppression and labeled.
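Step (4)'s per-point output size can be sketched as follows; a minimal illustration, assuming a 21-class detector with 6 prior boxes per point (illustrative values, not fixed by the text):

```python
def prediction_vector_length(num_classes: int, num_priors: int, num_coords: int = 4) -> int:
    """Length of the prediction vector emitted for one feature-map point:
    (classification count + coordinate count) multiplied by the number of
    prior boxes anchored at that point, as described in step (4)."""
    return (num_classes + num_coords) * num_priors

# e.g. 21 classes (20 object classes + background) with 6 prior boxes per point:
print(prediction_vector_length(21, 6))  # 150
```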

In order to improve the accuracy of the system's target tracking detection, the weight coefficient and adjustment parameter of the SSD target tracking detection algorithm model's logistic regression equation are corrected. The equations are as follows:

z = wᵀx + b

ŷ = a = σ(z) = 1/(1 + e^(−z))

Loss(a, y) = −[y·log(a) + (1 − y)·log(1 − a)]

wherein z is the objective function (i.e., the quantity whose optimal solution is sought), x is the picture input information, w is the weight coefficient, b is the adjustment parameter, ŷ is the predicted value in the training set, a is the output of the sigmoid activation function σ, y is the tag value (0 or 1), and Loss(a, y) is the logistic regression loss function.

The correction method is as follows. First, a data set is established by collecting, in an early stage, a large number of target photos in different scenes and postures, and is divided into a training set and a test set. The targets selected in the training set and their labels are annotated with the labelme annotation software, and the output json files are converted into csv data files that the neural network can read. Training is then carried out many times on a workstation whose GPU has strong image processing capability. Multiple sets of weight coefficients and adjustment parameters are obtained through repeated training of the neural network model, and finally the weight coefficients and adjustment parameters that best fit the target detection effect are selected by comparing performance on the test set and the training set.

After correction, the SSD target tracking detection model can accurately capture targets and track them continuously; the model outputs prediction boxes together with their accuracy and category labels.

Thus the system shoots continuously through the infrared camera; the weight coefficients and adjustment parameters of the neural network are tuned through repeated training and the regression equation is established; according to the feature values extracted from the training set, the target identification module generates, by convolution over the feature maps, the category score and prediction coordinates corresponding to each prior box, each point producing a prediction vector of the corresponding length, and finally screens out suitable prediction boxes through non-maximum suppression. After training, the SSD target tracking detection algorithm model generates an executable file that includes the image-coordinate storage function, the boxes function.

The image processor then analyzes the obtained image information, extracts the coordinates of the feature map under each prediction box according to the size information of the prediction box in the boxes function, and calculates the required information:

(xmin, ymin), (xmax, ymax)

wherein (xmin, ymin) is the coordinate of the lower-left corner of the feature map and (xmax, ymax) is the coordinate of the upper-right corner. The area can then be obtained:

S = (xmax − xmin) × (ymax − ymin)

and the coordinates of the center point:

C0x = (xmin + xmax)/2

C0y = (ymin + ymax)/2

wherein C0x is the x-direction coordinate and C0y is the y-direction coordinate.

The coordinates obtained above are relative coordinates; in practical application they are converted into the same coordinate system as the holder, and can be converted into geodetic coordinates by finding a reference point. The conversion method is common knowledge and is not repeated here. All coordinates in the following description of the invention are expressed as geodetic coordinates.
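The area and center-point formulas above can be sketched in a few lines; a minimal illustration, with the box corner coordinates as hypothetical pixel values (the geodetic conversion is not covered):

```python
def box_geometry(xmin, ymin, xmax, ymax):
    """Area and center point of a prediction box, following the formulas
    S = (xmax - xmin) * (ymax - ymin), C0x = (xmin + xmax) / 2 and
    C0y = (ymin + ymax) / 2.  The inputs are the relative (pixel)
    coordinates read from the boxes function."""
    area = (xmax - xmin) * (ymax - ymin)
    c0x = (xmin + xmax) / 2
    c0y = (ymin + ymax) / 2
    return area, (c0x, c0y)

area, center = box_geometry(100, 50, 300, 250)
print(area, center)  # 40000 (200.0, 150.0)
```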

(2) Image information distribution

After processing and conversion, the image information is published through an ROS publisher node, and the data is subscribed to by a subscriber node embedded in the decision control algorithm, realizing data interaction between different algorithm frameworks and inputting instruction information to the target tracking and striking module.

To realize communication between the different algorithms, the invention adds a publisher module of the ROS system into the SSD target tracking detection algorithm module, which can publish the identified image information, the number of captured targets, the calculated pixel sizes, the center coordinate points, the comprehensive threat degrees and other information as a topic.

The specific publishing conditions are as follows:

(1) setting the topic information and defining the contents to be transmitted in the topic frame;

(2) creating the topic information in the workspace in advance, and writing into it the image information identified by the SSD, such as the number of captured targets, the calculated pixel sizes, the center coordinate points and the comprehensive threat degrees;

(3) if ROS is in a working state, publishing the specific topic information within the topic.
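The publish/subscribe flow above can be illustrated with a dependency-free stand-in; a real deployment would use rospy's Publisher and Subscriber, and the topic name and message fields below are assumptions for illustration only:

```python
# Minimal stand-in for the ROS publisher/subscriber pattern described above,
# illustrating only the data flow from the SSD detection module to the
# decision module.  Topic and field names are illustrative assumptions,
# not the patent's actual identifiers.
from collections import defaultdict

class TopicBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        for cb in self._subs[topic]:
            cb(msg)

bus = TopicBus()
received = []
bus.subscribe("/ssd/targets", received.append)  # decision-module side
bus.publish("/ssd/targets", {                   # detection-module side
    "num_targets": 2,
    "pixel_area": [40000, 2500],
    "center": [(200.0, 150.0), (80.0, 60.0)],
    "threat": [0.7, 0.3],
})
print(received[0]["num_targets"])  # 2
```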

2. Target tracking

Based on the SSD target tracking detection algorithm, each labeled target can be tracked in real time; accurate tracking and locking of targets is achieved through the machine-learning-trained system algorithm, and the real-time coordinates of the targets are input into the central processor Manifold 2. The target tracking module receives, through the communication device, the target information input by the target identification module, including the area occupied by the target's pixels, the target's moving speed and its moving acceleration; the weighted pixel area, moving speed and their rates of change are tracked in real time to obtain the target threat degree.

The comprehensive threat degree calculation formula is as follows:

W = w1·s + w2·v + w3·a + w4·ρ

wherein W represents the comprehensive threat degree of the target, s represents the area occupied by the target's pixels, v the relative moving speed of the target, a the relative moving acceleration of the target, and ρ the target density within a specific area; w1, w2, w3 and w4 are the weighting coefficients of the pixel area, relative speed, relative acceleration and density terms, respectively. In the early stage w1 = w2 = w3 = w4 = 1/4 is taken; the coefficient distribution can be adjusted for different training purposes, but the sum of the four coefficients is always kept at 1. A larger value of W indicates a higher threat level of the tracked target.

The pixel area term is calculated from the following quantities: S10 is the pixel area occupied by the frontal image of a male of 170 cm height and standard weight at 10 meters from the camera; Si is the pixel area occupied by the current target; k is an adjustment parameter whose value is 100·S200/S10, where S200 is the pixel area occupied by the frontal image of the same reference male at 200 meters from the camera.

The relative moving speed of the target is calculated as v = vi/vmax, wherein vmax is the world-record speed of a hundred-meter sprinter and vi is the moving speed of the current target.

The relative moving acceleration of the target is calculated as a = ai/amax, wherein amax is the world-record acceleration of a hundred-meter sprinter at the start and ai is the moving acceleration of the current target.

For the target density in the specific area, ρ represents the density of targets within that area and j is the number of all targets in it; the specific area is the circle whose center is the tracked target and whose radius is one hundredth of the camera's pixel extent.

The W values of all targets at a given moment are calculated; the larger the value, the higher the target's threat degree and the higher its strike priority.
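The weighted sum above can be sketched as follows, assuming the component terms s, v, a and ρ have already been normalized as described; the target IDs and values are hypothetical:

```python
def threat_degree(s, v, a, rho, w=(0.25, 0.25, 0.25, 0.25)):
    """Comprehensive threat degree W = w1*s + w2*v + w3*a + w4*rho.
    s, v, a and rho are the (already normalized) pixel-area, relative-speed,
    relative-acceleration and local-density terms; the weights default to
    the early-stage value 1/4 each and must sum to 1."""
    w1, w2, w3, w4 = w
    assert abs(w1 + w2 + w3 + w4 - 1.0) < 1e-9, "weights must sum to 1"
    return w1 * s + w2 * v + w3 * a + w4 * rho

# Hypothetical targets: (s, v, a, rho) per target ID.
targets = {"T1": (0.8, 0.5, 0.2, 0.6), "T2": (0.3, 0.9, 0.7, 0.1)}
ranking = sorted(targets, key=lambda t: threat_degree(*targets[t]), reverse=True)
print(ranking)  # ['T1', 'T2']  (highest-threat target first)
```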

3. Hit and/or defense decisions

A. Percussion decision

The intelligent defense module sorts and numbers the targets in descending order of the obtained comprehensive threat degree (i.e., the value of W) and sends the target coordinates, in numbered order, to the high-precision laser emission holder for striking. During striking, if hit information from the enemy personnel is received within a set time (preferably 0.5 second) after the laser is emitted (the infrared receiver vest worn by enemy personnel receives the laser signal and feeds it back to the central control system through wireless transmission), the attack is considered successful and the next target is struck in sequence. If no feedback signal is received within 0.5 second, the target is considered not hit and continues to be tracked and struck.
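The strike-and-confirm loop can be sketched as follows; the feedback callback stands in for the wireless vest signal, and all names and values are illustrative assumptions:

```python
def strike_queue(targets, feedback, timeout=0.5):
    """Sketch of the percussion decision: `targets` is a list of IDs already
    sorted by descending threat degree W; `feedback(target_id)` stands in for
    the wireless vest signal and returns the delay (seconds) after which a
    hit was reported, or None if no signal arrived.  A target is marked hit
    only if feedback arrives within `timeout`; otherwise it stays queued
    for continued tracking and striking."""
    hits, retry = [], []
    for t in targets:
        delay = feedback(t)
        if delay is not None and delay <= timeout:
            hits.append(t)   # attack confirmed, move to next target
        else:
            retry.append(t)  # no signal within the window: keep tracking
    return hits, retry

# Simulated vest feedback: T2's signal arrives too late, T3 gives none.
fb = {"T1": 0.2, "T2": 0.8, "T3": None}
print(strike_queue(["T1", "T2", "T3"], fb.get))  # (['T1'], ['T2', 'T3'])
```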

B. Defense decisions

While tracking and striking targets, a fire distribution diagram of the opposing side is drawn according to the bullet-receiving condition of the target surfaces on the drone side, and the battlefield environment is then analyzed from this fire distribution to take corresponding defensive measures. Through the camera erected below each target surface, the hit condition is judged by detecting the size and color change of the bullet holes. The system extracts bullet-hole information from the different target surfaces, obtains the degree to which target surfaces at different positions are being hit, performs an overall firepower analysis, and quantifies it into a firepower distribution diagram; at positions where fire is concentrated, the system can erect more drones to strengthen the attack, or control the drones to lie down, thereby achieving intelligent defense.

Firstly, whether a new hit exists is judged by comparing the colors of two successive frames: if the RGB values of all pixels are identical, there is no new hit; if some RGB values differ and the area occupied by the differing pixels is within 0.5 to 2 times the bullet diameter (bullets enter the target surface from different directions, so the holes differ), a hit is registered.
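The frame-comparison test can be sketched on small RGB grids; a minimal illustration in pure Python (the 0.5-2 bullet-diameter size check is left to the caller):

```python
def changed_region(prev, curr):
    """Compare two frames (nested lists of (R, G, B) tuples) pixel by pixel
    and return the set of (x, y) coordinates whose RGB value changed, as in
    the first stage of the hit test: identical frames mean no new bullet
    hole.  Deciding whether the changed region's extent lies within 0.5-2
    bullet diameters is left to the caller."""
    return {(x, y)
            for y, row in enumerate(curr)
            for x, px in enumerate(row)
            if prev[y][x] != px}

white = (255, 255, 255)
prev = [[white] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[1][2] = (30, 30, 30)          # a dark bullet hole appears
print(changed_region(prev, curr))  # {(2, 1)}
```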

The center position of the target is determined through edge detection of the target, and the ring number is determined from the different colors of the ring zones. When the target is hit and the ring number is larger than a set value (e.g., 8 rings), the target is considered hit in a vital part and falls down to simulate death; whether it stands up again as another live combatant is then determined by the system's overall calculation;

if the target is hit but the ring number is less than or equal to the set value and the total number of hits is less than a set value (preferably 3), the target is judged lightly damaged and can still continue the battle;

if the target is hit, the ring number is less than or equal to the set value, and the total number of hits is greater than or equal to the set value, the target is judged heavily damaged; it may still rejoin the battle, for example after falling down to cool the gun.
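The three damage rules above can be collected into one decision function; a sketch using the preferred thresholds (8 rings, 3 hits) given in the text:

```python
def target_state(ring, total_hits, ring_threshold=8, hit_threshold=3):
    """Damage rules from the text: a hit scoring above the ring threshold
    (e.g. 8 rings) is a vital hit and the target falls to simulate death;
    lower-ring hits are light damage until the accumulated hit count
    reaches the threshold (e.g. 3), after which the target is heavily
    damaged and falls temporarily (e.g. to cool the gun) before it may
    rejoin the battle."""
    if ring > ring_threshold:
        return "killed"
    if total_hits < hit_threshold:
        return "light damage"
    return "heavy damage"

print(target_state(ring=9, total_hits=1))   # killed
print(target_state(ring=6, total_hits=2))   # light damage
print(target_state(ring=7, total_hits=3))   # heavy damage
```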

When the average time for which drones at a given position remain standing before being knocked down is less than a second set value (adjustable according to the exercise difficulty during actual training), the opposing fire at that position is judged to be strong and force deployment there needs strengthening: the number of standing drones and the frequency with which knocked-down drones stand up again are correspondingly increased at that position, while those at other positions are reduced. Otherwise, the opposing fire is judged weaker and no reinforcement is added.

Further, when the target tracking module calculates the comprehensive threat degree for the identified targets, the number of targets captured in each frame is counted, and the number n of drones to stand is determined from the number m of enemy targets. Since the attack-defense ratio is generally 3:1 (a general ratio, adjustable according to the training difficulty), the number of standing drones is determined by calculation:

n = ⌈m/3⌉

wherein n is the number of target drones to be erected, m is the number of targets in each frame of the tracked image, and ⌈ ⌉ is the round-up (ceiling) symbol.
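With the general 3:1 attack-defense ratio, the standing-drone count reduces to a ceiling division; a minimal sketch:

```python
import math

def standing_drones(m, attack_defense_ratio=3):
    """n = ceil(m / 3): with the usual 3:1 attack-defense ratio, m tracked
    enemy targets call for ceil(m / 3) standing drones.  The ratio is
    adjustable with the training difficulty."""
    return math.ceil(m / attack_defense_ratio)

print(standing_drones(7))  # 3
print(standing_drones(6))  # 2
```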

Further, during the confrontation, after a target drone stands up, the gun-holding side may score a fire hit before the drone emits its laser. The delay from the drone standing to the laser emitter firing reflects the responsiveness of the drone side: the longer the delay, the weaker the responsiveness of the simulated person. The damage condition of the drone is therefore taken into account, and a reasonable delay time ti is given by the following formula:

ti = ti-1 + (10 − b)·(tj − ti-1)/10

wherein ti is the delay from the drone standing up to its laser firing; when i = 1, ti takes the initialization delay (preferably 2 seconds); b is the highest ring number of the bullet holes on the current drone; tj is the time the previous drone remained standing before being knocked down, with tj = 0 when j = 1. In other words, the delay moves from its previous value toward the knock-down time of the previous drone, with the adjustment step scaled down as the opposing shooter's precision (b) rises, so the simulated reaction time adapts to the opposing shooter's speed and accuracy.
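The delay recurrence can be sketched directly from the formula; the values in the usage lines are hypothetical:

```python
def next_delay(t_prev, t_j, b):
    """t_i = t_(i-1) + (10 - b) * (t_j - t_(i-1)) / 10, the adaptive delay
    between a drone standing up and its laser emitter firing.  t_prev is
    the previous delay (initialized to 2 s), t_j the time the previous
    drone stayed standing before being knocked down, and b the highest
    ring number among the current drone's bullet holes (0-10)."""
    return t_prev + (10 - b) * (t_j - t_prev) / 10

t = 2.0                            # preferred initialization: 2 seconds
t = next_delay(t, t_j=1.0, b=8)    # a fast, accurate shooter shortens the delay
print(t)  # 1.8
```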

The simulation training system and method provided by the invention can truly reproduce the battle scene of live-fire combat on a battlefield. Although one side cannot use real persons with real guns, the effect of a gun-holding person is achieved through the intelligent combination of target drone and laser emitter; the fluid movement of combatants is simulated by the continuous rising and falling of the drones, and the flexible shooting of a real person is simulated by the 360-degree rotating emission of the laser emitter. The invention therefore has practical significance.
