Invisible light aiming method

Document No.: 154500 · Published: 2021-10-26

Abstract: This invention, "Invisible light aiming method", was designed and created on 2021-07-26 by 刘伟, 苏文斌, 刘洪涛, 黄军凯, 谢百明, 牧灏, 李波, 陈俊卫, 孙博, 张莉蔷 and 胡全. The invention discloses an invisible light aiming method comprising the steps: a vision sensor collects image information and transmits it to an intelligent image processor and a display controller; the display controller sends a control instruction to the cooperative control platform, which moves the vision sensor; when the vision sensor identifies a target object, the image information is transmitted to the intelligent image processor; the intelligent image processor analyzes the image information, identifies and resolves it with a BP neural network, locates the target object coordinates and converts them into a cooperative control platform control instruction; the cooperative control platform controls the invisible light to aim at the target object according to the control instruction. The invention determines the position coordinates of the target object through unsupervised learning based on data analysis, and the cooperative control platform controls the invisible light to aim at the target object according to these coordinates, replacing manual indicating-light aiming. This effectively solves the problems that human-eye aiming is prone to aiming deviation and is easily disturbed by fatigue.

1. An invisible light aiming method, characterized in that the method comprises the following steps:

step one: a vision sensor collects image information and transmits it wirelessly to an intelligent image processor and a display controller;

step two: the display controller sends a control instruction to the cooperative control platform, which moves the vision sensor; when the vision sensor identifies a target object, the image information is transmitted to the intelligent image processor;

step three: the intelligent image processor analyzes, identifies and resolves the image information, locates the target object coordinates and converts them into a cooperative control platform control instruction; the intelligent image processor performs image recognition on the acquired image with a BP (back-propagation) neural network to obtain the target object coordinates;

step four: the cooperative control platform controls the invisible light to aim at the target object according to the control instruction.

2. The invisible light aiming method as claimed in claim 1, wherein: the invisible light aiming position is converted into a screen coordinate, and position aiming is carried out through the screen coordinate. Both the invisible light aiming position and the target action point position correspond to screen coordinates, so the calculated coordinate information serves as the neural network input and the screen coordinate as the network output; the number of hidden-layer neurons is set to 10, constructing a single-hidden-layer screen-coordinate BP neural network.

3. The invisible light aiming method as claimed in claim 2, wherein: the concrete method of step three is as follows: the constructed BP neural network model is trained on the existing labeled data, mainly comprising the following steps:

(1) network initialization: initialize the number of neuron nodes in each layer, the connection weights between the neurons of each layer, and the hidden-layer and output-layer thresholds, and set a learning rate and an excitation function;

(2) hidden-layer output calculation: calculate the hidden-layer output H_j by equation (2);

(3) output-layer output calculation: calculate the output-layer output O_k by equation (3);

(4) error calculation: from the network's predicted output O_k and the desired output Y_k, calculate the neural network error;

(5) weight update: update the weights through equations (5) and (6) according to the neural network error;

(6) judge whether the algorithm has reached the iteration count or a preset error; if not, return to step (2).

4. The invisible light aiming method as claimed in claim 3, wherein: the activation function adopted by the BP neural network model is the Sigmoid function, as shown in equation (1):

f(x) = 1 / (1 + e^(−x))  (1)

5. The invisible light aiming method as claimed in claim 4, wherein: the hidden-layer output H_j is calculated as:

H_j = f(Σ_{i=1..n} ω_ij x_i − a_j),  j = 1,2,…,l  (2)

in equation (2), l is the number of hidden-layer nodes; f is the hidden-layer excitation function; ω_ij is the weight between the input layer and the hidden layer; a_j is the hidden-layer threshold; x_i are the input parameters.

6. The invisible light aiming method as claimed in claim 5, wherein: the output-layer output O_k is calculated as:

O_k = Σ_{j=1..l} H_j ω_jk − b_k,  k = 1,2,…,m  (3)

in equation (3), H_j is the hidden-layer output; m is the number of output-layer nodes; ω_jk is the weight between the hidden layer and the output layer; b_k is the output-layer threshold.

7. The invisible light aiming method as claimed in claim 6, wherein: the error is calculated from the network's predicted output O_k and the desired output Y_k:

e_k = Y_k − O_k,  E = (1/2) Σ_{k=1..m} (Y_k − O_k)²  (4)

in equation (4), E is the network error, Y_k the desired output, O_k the actual output.

8. The invisible light aiming method as claimed in claim 5, wherein: the weights are updated according to the neural network error through equations (5) and (6):

ω_ij′ = ω_ij + η H_j (1 − H_j) x_i Σ_{k=1..m} ω_jk e_k,  i = 1,2,…,n; j = 1,2,…,l  (5)

ω_jk′ = ω_jk + η H_j e_k,  j = 1,2,…,l; k = 1,2,…,m  (6)

in equations (5) and (6), η is the learning rate; ω_ij′ and ω_jk′ are the new connection weights. The error then propagates backwards, adjusting the connection weights and thresholds until the accuracy requirement is met.

Technical Field

The invention relates to an invisible light aiming method, and belongs to the technical field of smart power grids.

Background

When aiming invisible light, a coaxial indicating light is usually required. In severe weather, however, the indicating light is difficult to distinguish with the naked eye, and at long range the target object is small and hard to capture. Aiming by human eye through the indicating light therefore easily causes aiming deviation.

Disclosure of Invention

The technical problem to be solved by the invention is to provide an invisible light aiming method that overcomes the above problems in the prior art.

The technical scheme adopted by the invention is as follows: a method of invisible light aiming, the method comprising the steps of:

the method comprises the following steps: the vision sensor collects image information and transmits the image information to the intelligent image processor and the display controller in a wireless mode;

step two: the display controller sends a control instruction to the cooperative control platform, the cooperative control platform controls the visual sensor to move, and when the visual sensor identifies a target object, image information is transmitted to the intelligent image processor;

step three: the intelligent image processor analyzes, identifies and resolves the image information, locates the target object coordinates and converts them into a cooperative control platform control instruction; the intelligent image processor performs image recognition on the acquired image with a BP (back-propagation) neural network to obtain the target object coordinates;

step four: and the cooperative control platform controls the invisible light to aim at the target object according to the control instruction.

The invisible light aiming position is converted into a screen coordinate, and position aiming is carried out through the screen coordinate. Both the invisible light aiming position and the target action point position are mapped to screen coordinates. Therefore the calculated coordinate information is used as the neural network input and the screen coordinate as the network output; the number of hidden-layer neurons is set to 10, constructing a single-hidden-layer screen-coordinate BP neural network.

The constructed BP neural network model is trained on a large amount of existing labeled data, mainly comprising the following steps:

(1) network initialization: initialize the number of neuron nodes in each layer, the connection weights between the neurons of each layer, and the hidden-layer and output-layer thresholds, and set a learning rate and an excitation function;

(2) Hidden layer output calculation;

(3) output layer output calculation;

(4) calculating an error;

(5) updating the weight value;

(6) judge whether the algorithm has reached the iteration count or a preset error; if not, return to step (2).

The invention has the following beneficial effects: compared with the prior art, the method determines the position coordinates of the target object through unsupervised learning based on data analysis, and the cooperative control platform controls the invisible light to aim at the target object according to these coordinates, replacing manual indicating-light aiming. This effectively solves the problems that human-eye aiming is prone to aiming deviation and is easily disturbed by fatigue.

Drawings

FIG. 1 is a neuron structure diagram;

FIG. 2 is a diagram of a BP neural network;

fig. 3 is a screen coordinate BP neural network diagram.

Detailed Description

The invention is further described with reference to the accompanying drawings and specific embodiments.

Example 1: the invisible light aiming system comprises a vision sensor, a display controller, a cooperative control platform and an intelligent image processor, networked in a wireless mode. The vision sensor automatically collects scene image information and transmits it to the display controller and the intelligent image processor; the intelligent image processor identifies and resolves the image information, determines the coordinate data of the target object in the scene and converts it into a cooperative platform control instruction; the cooperative control platform then automatically aims at the target object.

An invisible light aiming method mainly comprises: the vision sensor acquiring image information of the target object; the intelligent image processor identifying, resolving and positioning the target object, calculating its coordinates and converting them into a motion control instruction for the cooperative control platform; and the cooperative control platform controlling the invisible light to automatically aim at the target object.

In a third aspect, the invention provides an invisible light aiming device, which mainly comprises: a vision sensor; a display controller; a cooperative control platform; and an intelligent image processor. The intelligent image processor is responsible for identifying and resolving the target image acquired by the vision sensor, positioning the target object and calculating its coordinate position; the cooperative control platform automatically controls the invisible light to aim at the target object according to the coordinate-position instruction.

The intelligent image processor processes the image information acquired by the vision sensor, locates the coordinate position of the target object in the scene and converts it into a cooperative control platform control instruction; the cooperative control platform then controls the invisible light to aim at the target object. The invention determines the position coordinates of the target object through unsupervised learning based on data analysis, and the cooperative control platform controls the invisible light to aim at the target object according to these coordinates, replacing manual indicating-light aiming. This effectively solves the problems that human-eye aiming is prone to aiming deviation and is easily disturbed by fatigue.

Example 2: an invisible light aiming method comprises the following steps:

the method comprises the following steps: the vision sensor collects image information and transmits the image information to the intelligent image processor and the display controller in a wireless mode;

step two: the display controller sends a control instruction to the cooperative control platform, the cooperative control platform controls the visual sensor to move, and when the visual sensor identifies a target object, image information is transmitted to the intelligent image processor;

step three: the intelligent image processor analyzes, identifies and resolves image information, positions the coordinates of the target object and converts the coordinates into a cooperative control platform control instruction;

step four: and the cooperative control platform controls the invisible light to aim at the target object according to the control instruction.

In step three, the intelligent image processor performs image recognition on the acquired image to obtain the target object coordinates, which is realized by the following method.

The BP neural network mainly comprises an input layer, a hidden layer and an output layer. A complete neural network model must specify: the number of layers, the number of neurons in each layer, the initial network input values, the learning rate, the expected output, and so on. Each layer consists of several neurons in parallel; neurons in the same layer are not interconnected, while neurons in adjacent layers are fully connected. The neuron model is shown in FIG. 1, and the structure of the BP neural network in FIG. 2.

In FIG. 1, the activation function is the Sigmoid function, as shown in equation (1):

f(x) = 1 / (1 + e^(−x))  (1)
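As a minimal sketch, the Sigmoid excitation function of equation (1) can be written in Python (NumPy; the function name is illustrative, not from the patent):

```python
import numpy as np

def sigmoid(x):
    # Equation (1): f(x) = 1 / (1 + e^(-x)); maps any input into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))
```

Its output saturates near 0 and 1 for large-magnitude inputs, which is why the hidden-layer weight update later carries the derivative factor H_j(1 − H_j).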

After the BP neural network topology is constructed, learning and training of the BP neural network begin, giving the network self-learning capability. For the BP neural network, the learning process divides into two parts: forward transmission of information and back propagation of error. In the forward propagation of the signal, the transfer relationship from the input layer to the hidden layer is:

H_j = f(Σ_{i=1..n} ω_ij x_i − a_j),  j = 1,2,…,l  (2)

in equation (2), l is the number of hidden-layer nodes; f is the hidden-layer excitation function; ω_ij is the weight between the input layer and the hidden layer; a_j is the hidden-layer threshold; x_i are the input parameters.

After being processed by the hidden-layer excitation function, the signal is transmitted to the output layer, with transfer relationship:

O_k = Σ_{j=1..l} H_j ω_jk − b_k,  k = 1,2,…,m  (3)

in equation (3), H_j is the hidden-layer output; ω_jk is the weight between the hidden layer and the output layer; b_k is the output-layer threshold; m is the number of output-layer nodes.
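Equation (3) is a plain weighted sum minus the output threshold (no excitation function on the output layer). A sketch under the same assumed shapes:

```python
import numpy as np

def output_layer(H, w_ho, b):
    # Equation (3): O_k = sum_j H_j * w_jk - b_k
    # H: (l,) hidden outputs; w_ho: (l, m) hidden-to-output weights; b: (m,) output thresholds
    return H @ w_ho - b
```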

After the signal reaches the output layer, the actual output is calculated; if the expected output is not obtained, the error is propagated backwards and the weights revised, iterating until the actual output meets the error requirement. The mean square error is calculated as:

e_k = Y_k − O_k,  E = (1/2) Σ_{k=1..m} (Y_k − O_k)²  (4)

in equation (4), E is the network error, Y_k the desired output, O_k the actual output.
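The per-output error e_k and the squared-error sum E of equation (4) can be sketched together:

```python
import numpy as np

def network_error(Y, O):
    # Equation (4): e_k = Y_k - O_k;  E = 1/2 * sum_k (Y_k - O_k)^2
    e = Y - O          # per-output error, reused by the weight updates
    return 0.5 * np.sum(e ** 2), e
```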

When updating the network connection weights ω_ij and ω_jk, the revision direction must be the direction of current error reduction, calculated as:

ω_ij′ = ω_ij + η H_j (1 − H_j) x_i Σ_{k=1..m} ω_jk e_k,  i = 1,2,…,n; j = 1,2,…,l  (5)

ω_jk′ = ω_jk + η H_j e_k,  j = 1,2,…,l; k = 1,2,…,m  (6)

in equations (5) and (6), η is the learning rate; ω_ij′ and ω_jk′ are the new connection weights. The error then propagates backwards, adjusting the connection weights and thresholds until the accuracy requirement is met. Correcting the weights by error back propagation is essentially repeated training on the samples.
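The weight updates of equations (5) and (6) can be sketched as one function. The hidden-layer error term δ_j = H_j(1 − H_j) Σ_k ω_jk e_k is computed from the pre-update hidden-to-output weights, a standard BP convention the text does not spell out:

```python
import numpy as np

def update_weights(w_ih, w_ho, x, H, e, eta):
    # Backpropagated hidden-layer error term, using the current w_ho
    delta_h = H * (1.0 - H) * (w_ho @ e)
    # Equation (6): w_jk' = w_jk + eta * H_j * e_k
    w_ho_new = w_ho + eta * np.outer(H, e)
    # Equation (5): w_ij' = w_ij + eta * H_j*(1-H_j) * x_i * sum_k w_jk * e_k
    w_ih_new = w_ih + eta * np.outer(x, delta_h)
    return w_ih_new, w_ho_new
```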

The specific implementation of the BP neural network is as follows: the invisible light aiming position is converted into a screen coordinate, and position aiming is carried out through the screen coordinate. Both the invisible light aiming position and the target action point position are mapped to screen coordinates. Therefore the calculated coordinate information is used as the neural network input and the screen coordinate as the network output; the number of hidden-layer neurons is set to 10, constructing the single-hidden-layer screen-coordinate BP neural network shown in fig. 3.
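Constructing the single-hidden-layer screen-coordinate network of fig. 3 then amounts to fixing the layer sizes and initializing weights and thresholds. A sketch, assuming 2-D target coordinates in and 2-D screen coordinates out (the text fixes only the 10 hidden neurons; the rest are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 2, 10, 2   # (x, y) target coords -> screen coords; 10 hidden neurons

# Step (1): random small connection weights, zero thresholds
w_ih = rng.uniform(-1.0, 1.0, (n_in, n_hidden))   # input-to-hidden weights
a = np.zeros(n_hidden)                            # hidden-layer thresholds
w_ho = rng.uniform(-1.0, 1.0, (n_hidden, n_out))  # hidden-to-output weights
b = np.zeros(n_out)                               # output-layer thresholds
```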

The constructed BP neural network model is trained on a large amount of existing labeled data, mainly comprising the following steps:

(1) Network initialization. Initialize the number of neuron nodes in each layer, the connection weights between the neurons of each layer, and the hidden-layer and output-layer thresholds, and set a learning rate and an excitation function.

(2) Hidden-layer output calculation. Calculate the hidden-layer output H_j by equation (2).

(3) Output-layer output calculation. Calculate the output-layer output O_k by equation (3).

(4) Error calculation. From the network's predicted output O_k and the desired output Y_k, calculate the neural network error.

(5) Weight update. Update the weights through equations (5) and (6) according to the neural network error.

(6) Judge whether the algorithm has reached the iteration count or a preset error; if not, return to step (2).
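Steps (1)–(6) above can be combined into one training loop. A hedged end-to-end sketch on synthetic data (the learning rate, iteration cap, and the toy coordinate mapping are all assumptions, not values from the patent):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, Y, n_hidden=10, eta=0.1, max_iter=2000, tol=1e-4, seed=0):
    """Single-hidden-layer BP training following steps (1)-(6)."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    # (1) network initialization: weights, thresholds, learning rate eta
    w_ih = rng.uniform(-1.0, 1.0, (n_in, n_hidden))
    a = np.zeros(n_hidden)
    w_ho = rng.uniform(-1.0, 1.0, (n_hidden, n_out))
    b = np.zeros(n_out)
    for _ in range(max_iter):
        E = 0.0
        for x, y in zip(X, Y):
            H = sigmoid(x @ w_ih - a)            # (2) hidden-layer output, eq. (2)
            O = H @ w_ho - b                     # (3) output-layer output, eq. (3)
            e = y - O                            # (4) error e_k, eq. (4)
            E += 0.5 * np.sum(e ** 2)
            delta_h = H * (1.0 - H) * (w_ho @ e)
            w_ho += eta * np.outer(H, e)         # (5) weight update, eq. (6)
            w_ih += eta * np.outer(x, delta_h)   #     and eq. (5)
            b -= eta * e                         # threshold adjustments
            a -= eta * delta_h
        if E < tol:                              # (6) preset error / iteration cap
            break
    return w_ih, a, w_ho, b

# Toy usage: learn an assumed affine map from target coordinates to screen coordinates
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
Y = 0.5 * X + 0.2
w_ih, a, w_ho, b = train_bp(X, Y)
```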

The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present invention, and therefore, the scope of the present invention should be determined by the scope of the claims.
