Method and apparatus for determining UAV attitude information

Document No.: 1756158    Publication date: 2019-11-29

Note: This technology, "Method and apparatus for determining UAV attitude information" (用于确定无人机姿态信息的方法和装置), was created on 2018-05-21 by 门春雷, 刘艳光, 巴航, 张文凯, 徐进, 韩微, 郝尚荣, 郑行 and 陈明轩. Abstract: Embodiments of the present application disclose a method and apparatus for determining UAV attitude information. One specific embodiment of the method includes: acquiring a UAV image obtained by photographing a target UAV; inputting the UAV image into a pre-trained feature point detection model to obtain a target feature point coordinate sequence corresponding to the target UAV, where the feature point detection model is used to characterize the correspondence between images containing a UAV and feature point coordinate sequences; acquiring a three-dimensional feature point coordinate sequence characterizing a three-dimensional model of the target UAV; and solving a Perspective-n-Point (PnP) problem based on the target feature point coordinate sequence and the three-dimensional feature point coordinate sequence to obtain attitude information of the target UAV. This embodiment determines a UAV's attitude information from an image of the UAV.

1. A method for determining UAV attitude information, comprising:

acquiring a UAV image obtained by photographing a target UAV;

inputting the UAV image into a pre-trained feature point detection model to obtain a target feature point coordinate sequence corresponding to the target UAV, wherein the feature point detection model is used to characterize a correspondence between images containing a UAV and feature point coordinate sequences;

acquiring a three-dimensional feature point coordinate sequence used to characterize a three-dimensional model of the target UAV; and

solving a Perspective-n-Point (PnP) problem based on the target feature point coordinate sequence and the three-dimensional feature point coordinate sequence to obtain attitude information of the target UAV.

2. The method according to claim 1, wherein the feature point detection model comprises a first convolutional neural network and a second convolutional neural network; and

inputting the UAV image into the pre-trained feature point detection model to obtain the target feature point coordinate sequence corresponding to the target UAV comprises:

inputting the UAV image into the pre-trained first convolutional neural network to obtain a first feature point coordinate sequence corresponding to the target UAV;

inputting a first region image into the pre-trained second convolutional neural network to obtain a second feature point coordinate sequence corresponding to the target UAV, wherein the first region image is an image of a first preset region of the UAV image; and

generating the target feature point coordinate sequence according to the first feature point coordinate sequence and the second feature point coordinate sequence.

3. The method according to claim 2, wherein the feature point detection model further comprises a third convolutional neural network; and

inputting the UAV image into the pre-trained feature point detection model to obtain the target feature point coordinate sequence corresponding to the target UAV further comprises:

before generating the target feature point coordinate sequence according to the first feature point coordinate sequence and the second feature point coordinate sequence, inputting a second region image into the pre-trained third convolutional neural network to obtain a third feature point coordinate sequence corresponding to the target UAV, wherein the second region image is an image of a second preset region of the UAV image; and

generating the target feature point coordinate sequence according to the first feature point coordinate sequence and the second feature point coordinate sequence comprises:

generating the target feature point coordinate sequence according to the first feature point coordinate sequence, the second feature point coordinate sequence and the third feature point coordinate sequence.

4. The method according to claim 3, wherein the first preset region is a region of the UAV image that contains the left wing, left tailplane and landing gear of the target UAV.

5. The method according to claim 4, wherein the second preset region is a region of the UAV image that contains the right wing, right tailplane and landing gear of the target UAV.

6. The method according to claim 1, wherein the feature point detection model comprises a fourth convolutional neural network; and

inputting the UAV image into the pre-trained feature point detection model to obtain the target feature point coordinate sequence corresponding to the target UAV comprises:

inputting the UAV image into the fourth convolutional neural network to obtain the target feature point coordinate sequence corresponding to the target UAV.

7. An apparatus for determining UAV attitude information, comprising:

a first acquisition unit, configured to acquire a UAV image obtained by photographing a target UAV;

an input unit, configured to input the UAV image into a pre-trained feature point detection model to obtain a target feature point coordinate sequence corresponding to the target UAV, wherein the feature point detection model is used to characterize a correspondence between images containing a UAV and feature point coordinate sequences;

a second acquisition unit, configured to acquire a three-dimensional feature point coordinate sequence used to characterize a three-dimensional model of the target UAV; and

a solving unit, configured to solve a Perspective-n-Point (PnP) problem based on the target feature point coordinate sequence and the three-dimensional feature point coordinate sequence to obtain attitude information of the target UAV.

8. The apparatus according to claim 7, wherein the feature point detection model comprises a first convolutional neural network and a second convolutional neural network; and

the input unit comprises:

a first input module, configured to input the UAV image into the pre-trained first convolutional neural network to obtain a first feature point coordinate sequence corresponding to the target UAV;

a second input module, configured to input a first region image into the pre-trained second convolutional neural network to obtain a second feature point coordinate sequence corresponding to the target UAV, wherein the first region image is an image of a first preset region of the UAV image; and

a generation module, configured to generate the target feature point coordinate sequence according to the first feature point coordinate sequence and the second feature point coordinate sequence.

9. The apparatus according to claim 8, wherein the feature point detection model further comprises a third convolutional neural network; and

the input unit further comprises:

a third input module, configured to input, before the target feature point coordinate sequence is generated according to the first feature point coordinate sequence and the second feature point coordinate sequence, a second region image into the pre-trained third convolutional neural network to obtain a third feature point coordinate sequence corresponding to the target UAV, wherein the second region image is an image of a second preset region of the UAV image; and

the generation module is further configured to:

generate the target feature point coordinate sequence according to the first feature point coordinate sequence, the second feature point coordinate sequence and the third feature point coordinate sequence.

10. The apparatus according to claim 9, wherein the first preset region is a region of the UAV image that contains the left wing, left tailplane and landing gear of the target UAV.

11. The apparatus according to claim 10, wherein the second preset region is a region of the UAV image that contains the right wing, right tailplane and landing gear of the target UAV.

12. The apparatus according to claim 7, wherein the feature point detection model comprises a fourth convolutional neural network; and

the input unit is further configured to:

input the UAV image into the fourth convolutional neural network to obtain the target feature point coordinate sequence corresponding to the target UAV.

13. An electronic device, comprising:

one or more processors; and

a storage device on which one or more programs are stored,

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 6.

14. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 6.

Technical field

Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for determining UAV attitude information.

Background technique

During the autonomous landing phase of a UAV, accurate attitude estimation is as essential as accurate position estimation, since it bears directly on the safety and the degree of autonomy of the landing. At present, the estimation of UAV attitude relies mainly on means such as inertial measurement units and visual positioning based on cooperative markers.

Summary of the invention

The embodiments of the present application propose a method and apparatus for determining UAV attitude information.

In a first aspect, an embodiment of the present application provides a method for determining UAV attitude information, the method comprising: acquiring a UAV image obtained by photographing a target UAV; inputting the UAV image into a pre-trained feature point detection model to obtain a target feature point coordinate sequence corresponding to the target UAV, where the feature point detection model is used to characterize the correspondence between images containing a UAV and feature point coordinate sequences; acquiring a three-dimensional feature point coordinate sequence used to characterize a three-dimensional model of the target UAV; and solving a Perspective-n-Point (PnP) problem based on the target feature point coordinate sequence and the three-dimensional feature point coordinate sequence to obtain attitude information of the target UAV.

In some embodiments, the feature point detection model comprises a first convolutional neural network and a second convolutional neural network; and inputting the UAV image into the pre-trained feature point detection model to obtain the target feature point coordinate sequence corresponding to the target UAV comprises: inputting the UAV image into the pre-trained first convolutional neural network to obtain a first feature point coordinate sequence corresponding to the target UAV; inputting a first region image into the pre-trained second convolutional neural network to obtain a second feature point coordinate sequence corresponding to the target UAV, where the first region image is an image of a first preset region of the UAV image; and generating the target feature point coordinate sequence according to the first feature point coordinate sequence and the second feature point coordinate sequence.

In some embodiments, the feature point detection model further comprises a third convolutional neural network; and inputting the UAV image into the pre-trained feature point detection model to obtain the target feature point coordinate sequence corresponding to the target UAV further comprises: before generating the target feature point coordinate sequence according to the first feature point coordinate sequence and the second feature point coordinate sequence, inputting a second region image into the pre-trained third convolutional neural network to obtain a third feature point coordinate sequence corresponding to the target UAV, where the second region image is an image of a second preset region of the UAV image; and generating the target feature point coordinate sequence according to the first feature point coordinate sequence and the second feature point coordinate sequence comprises: generating the target feature point coordinate sequence according to the first feature point coordinate sequence, the second feature point coordinate sequence and the third feature point coordinate sequence.

In some embodiments, the first preset region is a region of the UAV image that contains the left wing, left tailplane and landing gear of the target UAV.

In some embodiments, the second preset region is a region of the UAV image that contains the right wing, right tailplane and landing gear of the target UAV.

In some embodiments, the feature point detection model comprises a fourth convolutional neural network; and inputting the UAV image into the pre-trained feature point detection model to obtain the target feature point coordinate sequence corresponding to the target UAV comprises: inputting the UAV image into the fourth convolutional neural network to obtain the target feature point coordinate sequence corresponding to the target UAV.

In a second aspect, an embodiment of the present application provides an apparatus for determining UAV attitude information, the apparatus comprising: a first acquisition unit, configured to acquire a UAV image obtained by photographing a target UAV; an input unit, configured to input the UAV image into a pre-trained feature point detection model to obtain a target feature point coordinate sequence corresponding to the target UAV, where the feature point detection model is used to characterize the correspondence between images containing a UAV and feature point coordinate sequences; a second acquisition unit, configured to acquire a three-dimensional feature point coordinate sequence used to characterize a three-dimensional model of the target UAV; and a solving unit, configured to solve a Perspective-n-Point problem based on the target feature point coordinate sequence and the three-dimensional feature point coordinate sequence to obtain attitude information of the target UAV.

In some embodiments, the feature point detection model comprises a first convolutional neural network and a second convolutional neural network; and the input unit comprises: a first input module, configured to input the UAV image into the pre-trained first convolutional neural network to obtain a first feature point coordinate sequence corresponding to the target UAV; a second input module, configured to input a first region image into the pre-trained second convolutional neural network to obtain a second feature point coordinate sequence corresponding to the target UAV, where the first region image is an image of a first preset region of the UAV image; and a generation module, configured to generate the target feature point coordinate sequence according to the first feature point coordinate sequence and the second feature point coordinate sequence.

In some embodiments, the feature point detection model further comprises a third convolutional neural network; and the input unit further comprises: a third input module, configured to input, before the target feature point coordinate sequence is generated according to the first feature point coordinate sequence and the second feature point coordinate sequence, a second region image into the pre-trained third convolutional neural network to obtain a third feature point coordinate sequence corresponding to the target UAV, where the second region image is an image of a second preset region of the UAV image; and the generation module is further configured to: generate the target feature point coordinate sequence according to the first feature point coordinate sequence, the second feature point coordinate sequence and the third feature point coordinate sequence.

In some embodiments, the first preset region is a region of the UAV image that contains the left wing, left tailplane and landing gear of the target UAV.

In some embodiments, the second preset region is a region of the UAV image that contains the right wing, right tailplane and landing gear of the target UAV.

In some embodiments, the feature point detection model comprises a fourth convolutional neural network; and the input unit is further configured to: input the UAV image into the fourth convolutional neural network to obtain the target feature point coordinate sequence corresponding to the target UAV.

In a third aspect, an embodiment of the present application provides an electronic device comprising: one or more processors; and a storage device on which one or more programs are stored, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method described in any implementation of the first aspect.

The method and apparatus for determining UAV attitude information provided by the embodiments of the present application input a UAV image obtained by photographing a target UAV into a feature point detection model to obtain a target feature point coordinate sequence, and then solve a Perspective-n-Point problem based on the target feature point coordinate sequence and a three-dimensional feature point coordinate sequence characterizing a three-dimensional model of the target UAV to obtain attitude information of the target UAV. A UAV's attitude information is thus determined from an image of the UAV, enriching the ways in which UAV attitude information can be determined.

Detailed description of the invention

Other features, objects and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments made with reference to the accompanying drawings:

Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present application may be applied;

Fig. 2 is a flowchart of one embodiment of the method for determining UAV attitude information according to the present application;

Fig. 3 is a flowchart of another embodiment of the method for determining UAV attitude information according to the present application;

Fig. 4 is a structural schematic diagram of one embodiment of the apparatus for determining UAV attitude information according to the present application;

Fig. 5 is a structural schematic diagram of a computer system adapted to implement the electronic device of the embodiments of the present application.

Specific embodiment

The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant invention and do not limit it. It should also be noted that, for ease of description, only the parts relevant to the invention are shown in the drawings.

It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is explained in detail below with reference to the drawings and in conjunction with the embodiments.

Fig. 1 shows an exemplary system architecture 100 in which embodiments of the method for determining UAV attitude information or the apparatus for determining UAV attitude information of the present application may be applied.

As shown in Fig. 1, the system architecture 100 may include UAVs 101, 102 and 103, a wireless network 104 and a ground guidance device 105. The wireless network 104 provides the medium for the communication links between the UAVs 101, 102, 103 and the ground guidance device 105. The wireless network 104 may include, but is not limited to, 3G/4G/5G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra-wideband) connections and other wireless connection types now known or developed in the future.

A user may use the ground guidance device 105 to interact with the UAVs 101, 102, 103 through the wireless network 104, for example to receive or send messages. Various communication client applications may be installed on the ground guidance device 105, such as photo acquisition applications, UAV attitude estimation applications and a landing guidance system. The ground guidance device 105 may run the landing guidance system and provide the UAVs 101, 102, 103, through the wireless network 104, with guidance information such as the precise approach bearing, glide path and distance, according to which the UAVs 101, 102, 103 line up with the runway, approach and land at the prescribed glide angle, so that the deviation of the touchdown point stays within the prescribed range.

It should be noted that the method for determining UAV attitude information provided by the embodiments of the present application is generally executed by the ground guidance device 105; correspondingly, the apparatus for determining UAV attitude information is generally arranged in the ground guidance device 105.

It should be understood that the numbers of UAVs, wireless networks and ground guidance devices in Fig. 1 are merely illustrative. Any number of UAVs, wireless networks and ground guidance devices may be provided according to implementation needs.

With continued reference to Fig. 2, a flow 200 of one embodiment of the method for determining UAV attitude information according to the present application is shown. The method for determining UAV attitude information comprises the following steps:

Step 201: acquire a UAV image obtained by photographing a target UAV.

In the present embodiment, the executing body of the method for determining UAV attitude information (for example, the ground guidance device shown in Fig. 1) may acquire a UAV image obtained by photographing the target UAV.

Here, the UAV image may be sent to the executing body by another electronic device connected to the executing body over a network, in which case the executing body may obtain the UAV image from that electronic device through a wired or wireless connection. For example, the other electronic device may be a camera that photographs the target UAV; when the camera has captured an image of the target UAV, it may send the captured image to the executing body. It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra-wideband) connections and other wireless connection types now known or developed in the future.

Here, the UAV image may also be stored locally on the executing body, in which case the executing body may retrieve the UAV image locally.

It should be noted that the target UAV serves only as an illustrative example; in practice, the method for determining UAV attitude information may be applied to any specified UAV.

Step 202: input the UAV image into a pre-trained feature point detection model to obtain a target feature point coordinate sequence corresponding to the target UAV.

In the present embodiment, the executing body may input the UAV image acquired in step 201 into a pre-trained feature point detection model to obtain a target feature point coordinate sequence corresponding to the target UAV. The feature point detection model is used to characterize the correspondence between images containing a UAV and feature point coordinate sequences.

It should be noted that the feature point detection model may be obtained by supervised training of an existing machine learning model (for example, one of various artificial neural networks) on a training sample set using various machine learning methods. A training sample in the training sample set may include a sample image obtained by photographing a UAV and annotation information corresponding to the sample image; the annotation information may include the feature point coordinate sequence of the UAV contained in the sample image, and may be obtained, for example, by manual annotation. It is understood that different UAVs have different appearance and structural features, so which points of a UAV are chosen as its feature points can be decided according to the UAV's specific appearance and structure; that is, before feature points in sample images are manually annotated, annotation rules can be designed specifying which points in a sample image are the feature points to be annotated. For example, the geometric center points of the nose region, wing regions, landing gear region and tail region of the UAV may serve as the feature points to be annotated. Moreover, when training the feature point detection model, the UAVs contained in the sample images of the training samples preferably have the same or similar appearance or structural features as the target UAV; a feature point detection model trained on such a training sample set can more readily detect the feature points in the image obtained by photographing the target UAV.

In some optional implementations of the present embodiment, the feature point detection model may include a fourth convolutional neural network. In that case, step 202 may proceed as follows: input the UAV image acquired in step 201 into the fourth convolutional neural network to obtain the target feature point coordinate sequence corresponding to the target UAV. Here, the fourth convolutional neural network may include an input layer, convolutional layers, activation function layers, pooling layers and fully connected layers. The input layer may be used to input the UAV image. The convolutional layers may be used to extract image features, and the pooling layers may be used to down-sample the input information. The activation function layers apply various non-linear activation functions (for example, the ReLU (Rectified Linear Units) function, the Sigmoid function or the Tanh (hyperbolic tangent) function) to perform non-linear computation on the input information. The fully connected layers connect two layers so as to raise or lower the feature dimension of the input information.
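As a concrete illustration only, the following is a minimal sketch in PyTorch of a keypoint-regression network with the layer types just described (input, convolution, activation, pooling, fully connected). The layer sizes, the 224×224 input resolution and the number of feature points are assumptions made for the example, not values prescribed by this application:

```python
import torch
import torch.nn as nn

class KeypointCNN(nn.Module):
    """Sketch of a feature point detection network: convolution extracts image
    features, pooling down-samples, ReLU performs the non-linear computation,
    and fully connected layers regress the feature point coordinate sequence."""

    def __init__(self, num_points: int = 8):  # num_points is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),                                   # activation function layer
            nn.MaxPool2d(2),                             # pooling layer (down-sampling)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 256),    # fully connected layer
            nn.ReLU(),
            nn.Linear(256, num_points * 2),  # one (x, y) pair per feature point
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, 224, 224) -> coordinates: (batch, num_points, 2)
        out = self.head(self.features(image))
        return out.view(out.shape[0], -1, 2)
```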

Step 203: acquire a three-dimensional feature point coordinate sequence used to characterize a three-dimensional model of the target UAV.

In the present embodiment, the executing body of the method for determining UAV attitude information may acquire, locally or remotely from another electronic device connected to the executing body over a network, a three-dimensional feature point coordinate sequence used to characterize a three-dimensional model of the target UAV. The three-dimensional model of an object can be characterized in a variety of ways; for example, it may be represented by a sequence of three-dimensional coordinate points, by curves or by surface patches. Here, the three-dimensional model of the target UAV may be represented by a sequence of three-dimensional coordinate points.

It is understood that the structure of a UAV is relatively fixed once it leaves the factory and does not change greatly. A UAV manufacturer can therefore provide, at shipment, a three-dimensional feature point coordinate sequence characterizing the three-dimensional model of the UAV, and the executing body can obtain the three-dimensional feature point coordinate sequence characterizing the three-dimensional model of the target UAV from the target UAV's manufacturer.
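For illustration, such a manufacturer-provided three-dimensional feature point coordinate sequence could be stored as a plain array in the UAV's body coordinate system; the point identities and values below are hypothetical:

```python
import numpy as np

# Hypothetical 3D feature point coordinate sequence (body frame, metres).
# The point order must match the order of the detected 2D feature points.
MODEL_POINTS_3D = np.array([
    [ 0.60,  0.00,  0.05],   # nose
    [ 0.00, -0.80,  0.00],   # left wing tip
    [ 0.00,  0.80,  0.00],   # right wing tip
    [-0.55, -0.25,  0.02],   # left tailplane
    [-0.55,  0.25,  0.02],   # right tailplane
    [ 0.10,  0.00, -0.15],   # landing gear
], dtype=np.float64)
```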

In some optional implementations of the present embodiment, the executing body may also support determining the attitude information of UAVs with a variety of appearance and structural features. In that case, in step 202, inputting the UAV image obtained by photographing the target UAV in step 201 into the pre-trained feature point detection model proceeds as follows:

First, determine the UAV type corresponding to the target UAV.

Here, a variety of UAV types may be defined in advance according to the appearance and structural features of the UAVs that the executing body needs to support, such that the UAVs belonging to one UAV type have the same or similar appearance and structure. Moreover, for each UAV that communicates with the executing body, the UAV type of that UAV is preset. The UAV type corresponding to the target UAV can thus be determined.

Then, obtain the feature point detection model corresponding to the determined UAV type.

Here, for each of the preset UAV types, a feature point detection model corresponding to that type may be trained in advance. The sample images in the training sample set used to train the feature point detection model for a given UAV type may be images obtained by photographing UAVs of that type, and the annotated feature point coordinate sequences of those sample images are likewise annotated according to the annotation rules corresponding to that UAV type. In this way, the feature point detection model trained for each UAV type can detect the feature points in images obtained by photographing UAVs of that type, as sketched below.
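A minimal sketch of this per-type model selection, assuming the trained models are kept in a registry keyed by UAV type (the registry and names are hypothetical):

```python
# Hypothetical registry of pre-trained feature point detection models,
# one per preset UAV type.
FEATURE_POINT_MODELS = {
    # "fixed_wing_a": model_a,
    # "fixed_wing_b": model_b,
}

def detect_feature_points(uav_type, uav_image):
    """Select the feature point detection model trained for this UAV type
    and apply it to the UAV image."""
    model = FEATURE_POINT_MODELS[uav_type]
    return model(uav_image)  # target feature point coordinate sequence
```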

Finally, input the UAV image into the obtained feature point detection model to obtain the target feature point coordinate sequence corresponding to the target UAV.

Here, because the obtained feature point detection model was trained for this specific UAV type, inputting the UAV image into it is better targeted, yielding the target feature point coordinate sequence corresponding to the target UAV.

Step 204: solve a Perspective-n-Point problem based on the target feature point coordinate sequence and the three-dimensional feature point coordinate sequence to obtain attitude information of the target UAV.

In the present embodiment, the executing body may solve a Perspective-n-Point (PnP) problem based on the target feature point coordinate sequence obtained in step 202 and the three-dimensional feature point coordinate sequence obtained in step 203, thereby obtaining the attitude information of the target UAV.

Here, the attitude information of the target UAV may include the pitch angle, yaw angle and roll angle of the target UAV relative to its own body coordinate system.

In practice, solving the PnP problem based on the target feature point coordinate sequence obtained in step 202 and the three-dimensional feature point coordinate sequence obtained in step 203 to obtain the attitude information of the target UAV may specifically include:

(1) Using N three-dimensional feature point coordinates from the three-dimensional feature point coordinate sequence acquired in step 203 and the two-dimensional feature point coordinates in the target feature point coordinate sequence obtained in step 202, solve the PnP problem to obtain the pose of the target UAV relative to the camera coordinate system of the camera that captured the UAV image. Here N is a positive integer; in practice, N is usually greater than or equal to 4.

(2) Obtain the extrinsic parameters of the camera that captured the UAV image, that is, the transformation matrix between the camera coordinate system and a world coordinate system (for example, an earth coordinate system).

(3) Obtain the transformation matrix between the body coordinate system of the target UAV and the world coordinate system (for example, an earth coordinate system).

(4) From the transformation matrix between the camera coordinate system and the world coordinate system (for example, an earth coordinate system) and the transformation matrix between the body coordinate system of the target UAV and the world coordinate system (for example, an earth coordinate system), determine the transformation matrix between the body coordinate system and the camera coordinate system.

(5) From the pose of the target UAV relative to the camera coordinate system obtained by solving the PnP problem and the transformation matrix between the body coordinate system and the camera coordinate system, determine the attitude information of the target UAV relative to the body coordinate system.

It should be noted that how to solve a PnP problem is prior art widely studied and applied in this field, and is not described in detail here. An illustrative solver call is sketched below.
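By way of illustration (the application does not prescribe a particular solver), item (1) can be computed with OpenCV's solvePnP, and pitch, yaw and roll can then be read from the resulting rotation matrix. The Z-Y-X Euler convention used below is an assumption; other conventions yield different angle definitions:

```python
import cv2
import numpy as np

def solve_uav_pose(points_3d, points_2d, camera_matrix, dist_coeffs=None):
    """Solve the PnP problem for N >= 4 correspondences between the 3D feature
    point coordinate sequence and the detected 2D target feature points.
    Returns the rotation matrix and translation of the UAV in the camera frame."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(4)
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("solvePnP failed")
    rotation, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return rotation, tvec

def pitch_yaw_roll(rotation):
    """Extract pitch, yaw and roll (degrees) from a rotation matrix,
    assuming the Z-Y-X (yaw-pitch-roll) Euler convention."""
    pitch = np.degrees(np.arcsin(-rotation[2, 0]))
    yaw = np.degrees(np.arctan2(rotation[1, 0], rotation[0, 0]))
    roll = np.degrees(np.arctan2(rotation[2, 1], rotation[2, 2]))
    return pitch, yaw, roll
```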

The method provided by the above embodiment of the present application inputs a UAV image obtained by photographing a target UAV into a feature point detection model to obtain a target feature point coordinate sequence, and then solves a Perspective-n-Point problem based on the target feature point coordinate sequence and a three-dimensional feature point coordinate sequence characterizing the three-dimensional model of the target UAV to obtain the attitude information of the target UAV. A UAV's attitude information is thus determined from an image of the UAV, enriching the ways in which UAV attitude information can be determined.

With further reference to Fig. 3, a flow 300 of another embodiment of the method for determining UAV attitude information is shown. The flow 300 of the method for determining UAV attitude information comprises the following steps:

Step 301: acquire a UAV image obtained by photographing a target UAV.

In the present embodiment, the executing body of the method for determining UAV attitude information (for example, the ground guidance device shown in Fig. 1) may acquire a UAV image obtained by photographing the target UAV.

Step 302: input the UAV image into a pre-trained first convolutional neural network to obtain a first feature point coordinate sequence corresponding to the target UAV.

In the present embodiment, in order to detect all the feature points in the UAV image, the executing body may input the UAV image acquired in step 301 into the pre-trained first convolutional neural network to obtain a first feature point coordinate sequence corresponding to the target UAV.

It should be noted that the first convolutional neural network is used to characterize the correspondence between images containing a UAV and feature point coordinate sequences. As an example, the first convolutional neural network may be trained in advance according to the following first training step:

First step: determine the network structure of an initial first convolutional neural network, and initialize the network parameters of the initial first convolutional neural network.

Here, the executing body of the first training step may be the same as or different from the executing body of the method for determining UAV attitude information. If they are the same, the executing body of the first training step may, after training the first convolutional neural network, store the network structure information and the parameter values of the network parameters of the trained first convolutional neural network locally. If they are different, the executing body of the first training step may, after training the first convolutional neural network, send the network structure information and the parameter values of the network parameters of the trained first convolutional neural network to the executing body of the method for determining UAV attitude information.

Since a convolutional neural network is a multi-layer neural network in which each layer consists of multiple two-dimensional planes and each plane consists of multiple independent neurons, it is necessary here to determine which layers the initial first convolutional neural network contains (for example, input layer, convolutional layers, pooling layers, activation function layers, fully connected layers), the order in which the layers are connected, and which parameters each layer contains (for example, weights, bias terms, convolution strides).

The input layer may be used to input the UAV image. For the input layer, the size of the images to be input can be determined.

The convolutional layers may be used to extract image features. For each convolutional layer, one can determine how many convolution kernels it has, the size of each kernel, the weight of each neuron in each kernel, the bias term corresponding to each kernel, the stride between two successive convolutions, and whether padding is needed, how many pixels to pad and the value used for padding (usually 0).

The pooling layers may be used to down-sample the input information so as to compress the amount of data and the number of parameters and reduce overfitting. For each pooling layer, its pooling method can be determined (for example, taking the regional average or the regional maximum).

The activation function layers perform non-linear computation on the input information. For each activation function layer, a specific activation function can be determined; for example, the activation function may be ReLU and its various variants, the Sigmoid function, the Tanh (hyperbolic tangent) function, the Maxout function, and so on.

A fully connected layer connects two layers, and every neuron between the two connected layers is linked by a weight. For each fully connected layer, the number of neurons I in the layer before it and the number of neurons J in the layer after it need to be determined, which fixes the number of weight parameters in the fully connected layer at I × J. In practice, besides the I × J weight parameters used to perform the full connection, a fully connected layer may also include bias terms and an activation function for non-linear computation, so the bias term parameters and the activation function used can also be determined.

After the network structure of the first convolutional neural network has been determined, its network parameters can be initialized. In practice, each network parameter of the first convolutional neural network may be initialized with small random numbers that differ from one another: "small" ensures that the network does not enter saturation because of overly large weights, which would cause training to fail, and "different" ensures that the network can learn normally.

Second step obtains training sample set.

Here, the executing subject of the first training step can be locally or remotely connected to the network from above-mentioned executing subject Other electronic equipments obtain training sample set.Wherein, each training sample may include the shooting obtained sample of unmanned plane Image and markup information corresponding with the sample image, here, markup information corresponding with the sample image may include the sample The characteristic point coordinate sequence of included unmanned plane in this image.For example, it is corresponding to obtain sample image by manually marking Markup information.It is understood that there is different unmanned planes different appearance and structure feature therefore to choose nobody When which characteristic point of the point as unmanned plane of machine, can according to the specific appearance and structure feature of unmanned plane come concrete decision, That is, the rule marked, which of sample image point can be designed in advance when manually marking the characteristic point in sample image It is only the characteristic point for needing to mark.For example, can be with the nose region of unmanned plane, wing areas, undercarriage region, tail region Deng geometric center point as the characteristic point that marks of needs.Moreover, in training characteristics point detection model, used training sample Sample image in this preferably has same or similar appearance or structure feature with UAV targets, in this way, based on upper It states the characteristic point detection model that training sample set training obtains and the obtained unmanned plane of photographic subjects unmanned plane is more readily detected out Characteristic point in image.

Third step, for the training sample that training sample is concentrated, by the sample image input initial the in the training sample One convolutional neural networks obtain sample characteristics point coordinate sequence, using preset loss function (for example, L1 norm or L2 model Number) difference between markup information in obtained sample characteristics point coordinate sequence and the training sample is calculated, and be based on The network parameter of above-mentioned initial first convolutional neural networks of resulting discrepancy adjustment is calculated, and terminates item meeting preset training In the case where part, terminate training.For example, the training termination condition here preset at can include but is not limited to: the training time is more than Preset duration;Frequency of training is more than preset times;It calculates resulting difference and is less than default discrepancy threshold.

Here it is possible to be based on calculating the resulting above-mentioned initial first convolution nerve net of discrepancy adjustment using various implementations The network parameter of network.For example, BP (Back Propagation, backpropagation) algorithm or SGD (Stochastic can be used Gradient Descent, stochastic gradient descent) algorithm adjusts the network parameters of initial first convolutional neural networks.
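A minimal sketch of this third step in PyTorch, assuming `samples` is an iterable of (sample image tensor, annotated coordinate tensor) pairs, using the L2 norm as the preset loss function and SGD as the adjustment method; a fixed epoch count stands in for the preset termination condition:

```python
import torch

def train_first_cnn(model, samples, epochs=10, lr=1e-3):
    """Feed each sample image through the initial network, measure the L2
    difference against the annotation, and adjust the network parameters
    by backpropagation with stochastic gradient descent."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()          # preset loss function (L2 norm)
    for _ in range(epochs):               # stand-in for the termination condition
        for image, annotated in samples:
            predicted = model(image.unsqueeze(0)).squeeze(0)
            loss = loss_fn(predicted, annotated)
            optimizer.zero_grad()
            loss.backward()               # BP: compute gradients
            optimizer.step()              # SGD: adjust network parameters
    return model
```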

Fourth step: determine the initial first convolutional neural network with the adjusted parameters as the pre-trained first convolutional neural network.

Step 303: input a first region image into a pre-trained second convolutional neural network to obtain a second feature point coordinate sequence corresponding to the target UAV.

In step 302 the entire UAV image is fed to the first convolutional neural network so that the input image covers as many of the target UAV's feature points as possible. At the same time, however, this may make the image region fed to the first convolutional neural network too large, with too much area irrelevant to the target UAV's feature points; a complex background in particular interferes with the feature points, which may make the feature point detection results inaccurate and ultimately make the determination of the target UAV's attitude information inaccurate.

For this purpose, the executing body may first obtain a first region image. Here, the first region image is the image of a first preset region of the UAV image acquired in step 301. The first preset region may be a preset region used to crop from the UAV image the part containing a first predetermined portion of the target UAV's structure. For example, when the target UAV comprises a left wing, a right wing, a nose, a left tailplane, a right tailplane, a left landing gear, a right landing gear and a middle landing gear, the first predetermined portion of the structure may include the left wing, the nose, the left tailplane, the left landing gear and the middle landing gear.

In some optional implementations of the present embodiment, the first preset region may be the region of the UAV image containing the left wing, left tailplane and landing gear of the target UAV.

Then, the executing body may input the obtained first region image into the pre-trained second convolutional neural network to obtain a second feature point coordinate sequence corresponding to the target UAV.

It is understood that, since the first region image is a partial image of the UAV image, the number of feature point coordinates in the second feature point coordinate sequence obtained from the first region image may be smaller than the number of feature point coordinates in the first feature point coordinate sequence obtained from the UAV image.

It should be noted that the second convolutional neural network is used to characterize the correspondence between images of the first preset region of images containing a UAV and feature point coordinate sequences. As an example, the second convolutional neural network may be trained in advance according to the following second training step:

First step: determine the network structure of an initial second convolutional neural network, and initialize the network parameters of the initial second convolutional neural network.

Here, the executing body of the second training step may be the same as or different from the executing body of the method for determining UAV attitude information. If they are the same, the executing body of the second training step may, after training the second convolutional neural network, store the network structure information and the parameter values of the network parameters of the trained second convolutional neural network locally. If they are different, the executing body of the second training step may, after training the second convolutional neural network, send the network structure information and the parameter values of the network parameters of the trained second convolutional neural network to the executing body of the method for determining UAV attitude information.

Here, determining the network structure of the initial second convolutional neural network and initializing its network parameters are essentially the same as the operations of the first step of the first training step, and are not repeated here.

Second step: obtain a training sample set.

Here, for how to obtain the training sample set and for a specific description of the training sample set, refer to the related description of the second step of the first training step; the details are not repeated here.

Third step: for each training sample in the obtained training sample set, generate a first training sample corresponding to that training sample, and form a first training sample set from the generated first training samples. The first training sample corresponding to a training sample includes a first sample region image and first sample annotation information, where the first sample region image is the image of the first preset region of the sample image in the training sample, and the first sample annotation information includes the feature point coordinate sequence, within the annotation information of the training sample, that pertains to the first sample region image.

Fourth step: for each first training sample in the first training sample set, input the first sample region image in the first training sample into the initial second convolutional neural network to obtain a sample first-region feature point coordinate sequence; compute, with a preset loss function (for example, the L1 norm or the L2 norm), the difference between the obtained sample first-region feature point coordinate sequence and the first sample annotation information in the first training sample; adjust the network parameters of the initial second convolutional neural network based on the computed difference; and end training when a preset training termination condition is met. For example, the preset training termination condition may include, but is not limited to: the training time exceeds a preset duration; the number of training iterations exceeds a preset count; the computed difference is less than a preset difference threshold.

Here, various implementations may be used to adjust the network parameters of the initial second convolutional neural network based on the computed difference. For example, the BP algorithm or the SGD algorithm may be used to adjust the network parameters of the initial second convolutional neural network.

Fifth step: determine the initial second convolutional neural network with the adjusted parameters as the pre-trained second convolutional neural network.

Step 304: generate a target feature point coordinate sequence according to the first feature point coordinate sequence and the second feature point coordinate sequence.

In the present embodiment, the executing body may use various implementations to generate the target feature point coordinate sequence corresponding to the target UAV from the first feature point coordinate sequence generated in step 302 and the second feature point coordinate sequence generated in step 303.

In practice, the numbers of feature point coordinates contained in the first feature point coordinate sequence and in the second feature point coordinate sequence are each fixed. Suppose here that the first feature point coordinate sequence contains a first preset number of feature point coordinates, that the second feature point coordinate sequence contains a second preset number of feature point coordinates, and that the second preset number is smaller than the first preset number. Furthermore, since the first feature point coordinate sequence is obtained by performing feature point detection on the UAV image as a whole, it can be considered to contain all feature point coordinates of the target UAV; and since the second feature point coordinate sequence is obtained by performing feature point detection on the first preset region of the UAV image, it can be considered to contain the feature point coordinates of the first predetermined portion of the target UAV's structure. In summary, each feature point coordinate in the second feature point coordinate sequence can have an association with some feature point coordinate in the first feature point coordinate sequence; that is, for each feature point coordinate in the second feature point coordinate sequence, a feature point coordinate associated with it can be found in the first feature point coordinate sequence, and the two associated feature point coordinates characterize the same part of the target UAV.

Based on the above description, two specific implementations are given below (see also the sketch after them):

First implementation: first, for each feature point coordinate in the first feature point coordinate sequence, determine whether the second feature point coordinate sequence contains a feature point coordinate associated with it; if so, update this feature point coordinate to the associated feature point coordinate in the second feature point coordinate sequence; if not, keep this feature point coordinate. Then determine the updated first feature point coordinate sequence as the target feature point coordinate sequence.

Second implementation: first, for each feature point coordinate in the first feature point coordinate sequence, determine whether the second feature point coordinate sequence contains a feature point coordinate associated with it; if so, weight this feature point coordinate and the associated feature point coordinate in the second feature point coordinate sequence according to first preset weight coefficients, and update this feature point coordinate to the weighted result; if not, keep this feature point coordinate. Then determine the updated first feature point coordinate sequence as the target feature point coordinate sequence. It is understood that the first preset weight coefficients may include a weight coefficient for the first feature point coordinate sequence and a weight coefficient for the second feature point coordinate sequence.
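A sketch of the second implementation (the first implementation is the special case in which the second sequence's weight is 1). The sequences are assumed here to be dictionaries mapping a feature point identifier to an (x, y) coordinate, so that association reduces to a shared key; this data layout is an assumption made for illustration:

```python
def merge_feature_points(first_seq, second_seq, w_first=0.5):
    """Generate the target feature point coordinate sequence: coordinates that
    also appear in the second sequence are replaced by a weighted combination
    (w_first for the first sequence, 1 - w_first for the second); the rest of
    the first sequence is kept as-is."""
    target = dict(first_seq)
    for key, (x2, y2) in second_seq.items():
        if key in target:  # associated feature point found in the first sequence
            x1, y1 = target[key]
            target[key] = (w_first * x1 + (1 - w_first) * x2,
                           w_first * y1 + (1 - w_first) * y2)
    return target
```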

In some optional implementations of the present embodiment, above-mentioned executing subject can also before executing step 304, Execute following steps 303 ':

Step 303 ', by second area image input third convolutional neural networks trained in advance, obtain with target nobody The corresponding third feature point coordinate sequence of machine.

Here, above-mentioned executing subject can obtain second area image first.Here, second area image is step 301 In acquired unmanned plane image the second predeterminable area image.Wherein, the second predeterminable area can be pre-set use In characterization, interception includes the region of the second predetermined fraction structure of UAV targets from unmanned plane image.For example, when target without It is man-machine when including port wing, starboard wing, head, port tailplane, starboard tailplane, left undercarriage, right landing gear and intermediate undercarriage, second Predetermined fraction structure may include starboard wing, head, starboard tailplane, right landing gear and intermediate undercarriage.Here, the second predeterminable area It can be different from above-mentioned first predeterminable area, the second predetermined fraction structure may also be distinct from that the first predetermined fraction structure, still Second predeterminable area can partly overlap with above-mentioned first predeterminable area, and the second predetermined fraction structure can also be with the first default portion Separation structure partly overlaps.

Optionally, the second preset region may be the region of the UAV image that contains the right wing, the right tailplane and the landing gear of the target UAV.

Then, the execution body may input the acquired second region image into the third convolutional neural network trained in advance, to obtain the third feature point coordinate sequence corresponding to the target UAV.
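By way of illustration, a minimal sketch of step 303' is given below, assuming the second preset region is specified by fixed pixel bounds and the third convolutional neural network is a PyTorch module that outputs a flat vector of (x, y) coordinates; the names detect_region_keypoints, third_cnn and bounds are hypothetical.

```python
import torch

def detect_region_keypoints(uav_image, third_cnn, bounds):
    """Crop the second preset region and run the third CNN on it.

    uav_image: float tensor of shape (3, H, W), the full UAV image.
    bounds: (top, bottom, left, right) pixel bounds of the second
            preset region, assumed to be fixed in advance.
    Returns a (K, 2) tensor of feature point coordinates expressed in
    the coordinate frame of the full UAV image.
    """
    top, bottom, left, right = bounds
    region = uav_image[:, top:bottom, left:right]        # second region image
    coords = third_cnn(region.unsqueeze(0)).view(-1, 2)  # (K, 2), region frame
    # shift (x, y) back into full-image coordinates
    return coords + torch.tensor([float(left), float(top)])
```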

It may be understood that, since the second region image is only a partial image of the UAV image, the number of feature point coordinates in the third feature point coordinate sequence obtained from the second region image may be smaller than the number of feature point coordinates in the first feature point coordinate sequence obtained from the UAV image.

It should be noted that the third convolutional neural network is used to characterize the correspondence between the image of the second preset region of an image containing a UAV and a feature point coordinate sequence. As an example, the third convolutional neural network may be trained in advance according to the following third training step:

First step: determine the network structure of an initial third convolutional neural network, and initialize the network parameters of the initial third convolutional neural network.

Here, how to determine the network structure of the initial third convolutional neural network and how to initialize its network parameters are essentially the same as the operations of the first step in the first training step, and are not repeated here.

Second step: obtain a training sample set.

Here, for how to obtain the training sample set, and for the detailed description of the training sample set, reference may be made to the related description of the second step in the first training step, which is not repeated here.

Third step: for each training sample in the acquired training sample set, generate a second training sample corresponding to that training sample, and form a second training sample set from the generated second training samples. The second training sample corresponding to a training sample includes a second sample region image and second sample annotation information, where the second sample region image is the image of the second preset region in the sample image of that training sample, and the second sample annotation information includes the feature point coordinate sequence, within the annotation information of that training sample, that corresponds to the second sample region image.

Fourth step: for each second training sample in the second training sample set, input the second sample region image of that second training sample into the initial third convolutional neural network to obtain a sample second-region feature point coordinate sequence; compute, using a preset loss function (for example, an L1 norm or an L2 norm), the difference between the obtained sample second-region feature point coordinate sequence and the second sample annotation information of that second training sample; adjust the network parameters of the initial third convolutional neural network based on the computed difference; and terminate training when a preset training termination condition is met. Here, the preset training termination condition may include, but is not limited to: the training time exceeding a preset duration; the number of training iterations exceeding a preset number; or the computed difference being smaller than a preset difference threshold.

Here, various implementations may be used to adjust the network parameters of the initial third convolutional neural network based on the computed difference. For example, the back propagation (BP) algorithm or the stochastic gradient descent (SGD) algorithm may be used to adjust the network parameters of the initial third convolutional neural network.

Fifth step: determine the initial third convolutional neural network after parameter adjustment as the third convolutional neural network trained in advance.
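A minimal PyTorch sketch of the fourth step is given below for illustration; the choice of L1 loss, the SGD learning rate and the termination thresholds are assumptions, since the embodiment leaves them open.

```python
import torch
import torch.nn as nn

def train_third_cnn(model, samples, max_steps=10000, loss_threshold=1e-3):
    """Fourth training step as a sketch: L1 loss, SGD, preset termination.

    model:   the initial third convolutional neural network (an nn.Module).
    samples: iterable of (second_region_image_batch, annotation_batch)
             pairs, i.e. the second training sample set.
    """
    criterion = nn.L1Loss()  # preset loss function (an L2 norm would be nn.MSELoss)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    step, done = 0, False
    while not done:
        for region_images, annotations in samples:
            predicted = model(region_images)          # sample coordinates
            loss = criterion(predicted, annotations)  # difference vs. annotations
            optimizer.zero_grad()
            loss.backward()                           # back propagation
            optimizer.step()                          # adjust network parameters
            step += 1
            # preset training termination conditions
            if step >= max_steps or loss.item() < loss_threshold:
                done = True
                break
    return model  # the third CNN trained in advance
```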

Based on the above optional implementation, after executing step 303', the execution body may execute step 304 as follows: generate the target feature point coordinate sequence according to the first feature point coordinate sequence, the second feature point coordinate sequence and the third feature point coordinate sequence.

Here, the execution body may use various implementations to generate the target feature point coordinate sequence corresponding to the target UAV according to the first feature point coordinate sequence generated in step 302, the second feature point coordinate sequence generated in step 303 and the third feature point coordinate sequence generated in step 303'.

In practice, the numbers of feature point coordinates included in the first, second and third feature point coordinate sequences are each fixed. Assume here that the first feature point coordinate sequence includes a first preset number of feature point coordinates, the second feature point coordinate sequence includes a second preset number of feature point coordinates, and the third feature point coordinate sequence includes a third preset number of feature point coordinates, where the second and third preset numbers are both smaller than the first preset number. In addition, since the first feature point coordinate sequence is obtained by performing feature point detection on the UAV image as a whole, it may be considered to include all feature point coordinates of the target UAV; the second feature point coordinate sequence, obtained by performing feature point detection on the first preset region of the UAV image, may be considered to include the feature point coordinates of the first preset partial structure of the target UAV; and the third feature point coordinate sequence, obtained by performing feature point detection on the second preset region of the UAV image, may be considered to include the feature point coordinates of the second preset partial structure of the target UAV. In summary, each feature point coordinate in the second feature point coordinate sequence may have an association relationship with some feature point coordinate in the first feature point coordinate sequence, that is, a feature point coordinate having an association relationship with it can be found in the first feature point coordinate sequence, the two associated feature point coordinates characterizing the same part of the target UAV; likewise, each feature point coordinate in the third feature point coordinate sequence may have an association relationship with some feature point coordinate in the first feature point coordinate sequence, the two associated feature point coordinates again characterizing the same part of the target UAV.

Based on the foregoing description, two specific implementations are given below:

First implementation: first, for each feature point coordinate in the first feature point coordinate sequence, determine whether the second feature point coordinate sequence contains a feature point coordinate having an association relationship with that feature point coordinate; if so, update that feature point coordinate to the associated feature point coordinate in the second feature point coordinate sequence. If not, further determine whether the third feature point coordinate sequence contains a feature point coordinate having an association relationship with that feature point coordinate; if so, update that feature point coordinate to the associated feature point coordinate in the third feature point coordinate sequence. If neither the second nor the third feature point coordinate sequence contains an associated feature point coordinate, keep that feature point coordinate unchanged. Then, determine the updated first feature point coordinate sequence as the target feature point coordinate sequence.

Second implementation: first, for each feature point coordinate in the first feature point coordinate sequence, determine whether an associated feature point coordinate exists in both the second feature point coordinate sequence and the third feature point coordinate sequence. If both exist, weight that feature point coordinate, the associated feature point coordinate in the second feature point coordinate sequence and the associated feature point coordinate in the third feature point coordinate sequence according to a second preset weight coefficient, and update that feature point coordinate to the weighted result. If not both exist, further determine whether an associated feature point coordinate exists in the second feature point coordinate sequence or in the third feature point coordinate sequence; if so, weight that feature point coordinate and the associated feature point coordinate according to a third preset weight coefficient, and update that feature point coordinate to the weighted result. If neither the second nor the third feature point coordinate sequence contains an associated feature point coordinate, keep that feature point coordinate unchanged. Then, determine the updated first feature point coordinate sequence as the target feature point coordinate sequence. It may be understood that the second preset weight coefficient may include a weight coefficient for the first feature point coordinate sequence, a weight coefficient for the second feature point coordinate sequence and a weight coefficient for the third feature point coordinate sequence; the third preset weight coefficient may include a weight coefficient for the first feature point coordinate sequence and a weight coefficient for the second feature point coordinate sequence or for the third feature point coordinate sequence.
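Generalizing the two-sequence sketch given earlier, a hypothetical merge over all three sequences might look as follows; the dict representation and all weight values remain assumptions for illustration.

```python
def merge_three_sequences(first, second, third,
                          w_all=(0.4, 0.3, 0.3),  # second preset weight coefficient
                          w_pair=(0.5, 0.5)):     # third preset weight coefficient
    """Merge the first, second and third feature point coordinate sequences.

    Each argument is a dict mapping a part label of the target UAV to an
    (x, y) coordinate; shared keys denote associated coordinates.
    """
    target = {}
    for part, c1 in first.items():
        c2, c3 = second.get(part), third.get(part)
        if c2 is not None and c3 is not None:
            # associated coordinates in both partial sequences: 3-way weighting
            target[part] = tuple(w_all[0] * a + w_all[1] * b + w_all[2] * c
                                 for a, b, c in zip(c1, c2, c3))
        elif c2 is not None or c3 is not None:
            # associated coordinate in exactly one partial sequence
            partial = c2 if c2 is not None else c3
            target[part] = tuple(w_pair[0] * a + w_pair[1] * b
                                 for a, b in zip(c1, partial))
        else:
            target[part] = c1  # no associated coordinate: keep unchanged
    return target
```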

Step 305: based on the target feature point coordinate sequence and the three-dimensional feature point coordinate sequence, solve the perspective-n-point (PnP) problem to obtain the attitude information of the target UAV.

In the present embodiment, the specific operation of step 305 is essentially the same as the operation of step 204 in the embodiment shown in Fig. 2, and is not repeated here.
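As an illustration of one possible solver (the embodiment does not prescribe a particular one), the OpenCV-based sketch below maps the 2D target feature point coordinate sequence and the 3D model feature point coordinate sequence to a rotation and a translation; the camera intrinsics and the function name are placeholder assumptions.

```python
import cv2
import numpy as np

def solve_uav_pose(points_2d, points_3d, camera_matrix, dist_coeffs=None):
    """Solve the perspective-n-point problem for the target UAV.

    points_2d: (N, 2) array, the target feature point coordinate sequence.
    points_3d: (N, 3) array, the three-dimensional feature point coordinate
               sequence of the target UAV's 3D model, in the same part order.
    camera_matrix: 3x3 intrinsic matrix of the camera that took the image.
    Returns a 3x3 rotation matrix and a translation vector describing the
    pose of the target UAV in the camera frame.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(4)  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(points_3d.astype(np.float64),
                                  points_2d.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP solving failed")
    rotation, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
    return rotation, tvec
```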

As can be seen from Fig. 3, compared with the embodiment corresponding to Fig. 2, the flow 300 of the method for determining UAV attitude information in the present embodiment highlights the step of combining the first feature point coordinate sequence and the second feature point coordinate sequence to obtain the target feature point coordinate sequence corresponding to the target UAV. The scheme described in the present embodiment can thus detect the target feature point coordinate sequence more accurately, which in turn improves the accuracy of the subsequently determined UAV attitude information.

With further reference to Fig. 4, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for determining UAV attitude information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied to various electronic devices.

As shown in Fig. 4, the apparatus 400 for determining UAV attitude information of the present embodiment includes: a first acquisition unit 401, an input unit 402, a second acquisition unit 403 and a solving unit 404. The first acquisition unit 401 is configured to acquire a UAV image obtained by photographing a target UAV; the input unit 402 is configured to input the UAV image into a feature point detection model trained in advance, to obtain a target feature point coordinate sequence corresponding to the target UAV, where the feature point detection model is used to characterize the correspondence between an image containing a UAV and a feature point coordinate sequence; the second acquisition unit 403 is configured to acquire a three-dimensional feature point coordinate sequence characterizing a three-dimensional model of the target UAV; and the solving unit 404 is configured to solve the perspective-n-point problem based on the target feature point coordinate sequence and the three-dimensional feature point coordinate sequence, to obtain the attitude information of the target UAV.

In the present embodiment, for the specific processing of the first acquisition unit 401, the input unit 402, the second acquisition unit 403 and the solving unit 404 of the apparatus 400 for determining UAV attitude information, and the technical effects brought thereby, reference may be made to the related descriptions of step 201, step 202, step 203 and step 204 in the embodiment corresponding to Fig. 2, respectively, which are not repeated here.
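Purely as an illustration of how the four units compose (the apparatus may equally be realized in hardware), a hypothetical sketch reusing the solve_uav_pose function from the sketch above; the camera interface and all names are assumed:

```python
class UavAttitudeDevice:
    """Hypothetical composition of units 401-404 into one pipeline."""

    def __init__(self, camera, keypoint_model, model_points_3d, camera_matrix):
        self.camera = camera                    # image source (interface assumed)
        self.keypoint_model = keypoint_model    # feature point detection model
        self.model_points_3d = model_points_3d  # 3D feature point coordinates
        self.camera_matrix = camera_matrix

    def attitude(self):
        image = self.camera.capture()           # first acquisition unit 401
        points_2d = self.keypoint_model(image)  # input unit 402
        points_3d = self.model_points_3d        # second acquisition unit 403
        rotation, translation = solve_uav_pose( # solving unit 404
            points_2d, points_3d, self.camera_matrix)
        return rotation, translation
```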

In some optional implementations of the present embodiment, the feature point detection model may include a first convolutional neural network and a second convolutional neural network; and the input unit 402 may include: a first input module 4021, configured to input the UAV image into the first convolutional neural network trained in advance, to obtain a first feature point coordinate sequence corresponding to the target UAV; a second input module 4022, configured to input a first region image into the second convolutional neural network trained in advance, to obtain a second feature point coordinate sequence corresponding to the target UAV, where the first region image is an image of a first preset region of the UAV image; and a generation module 4023, configured to generate the target feature point coordinate sequence according to the first feature point coordinate sequence and the second feature point coordinate sequence.

In some optional implementations of the present embodiment, the feature point detection model may further include a third convolutional neural network; and the input unit 402 may further include: a third input module 4024, configured to, before the target feature point coordinate sequence is generated according to the first feature point coordinate sequence and the second feature point coordinate sequence, input a second region image into the third convolutional neural network trained in advance, to obtain a third feature point coordinate sequence corresponding to the target UAV, where the second region image is an image of a second preset region of the UAV image. The generation module 4023 may be further configured to generate the target feature point coordinate sequence according to the first feature point coordinate sequence, the second feature point coordinate sequence and the third feature point coordinate sequence.

In some optional implementations of the present embodiment, the first preset region may be the region of the UAV image that contains the left wing, the left tailplane and the landing gear of the target UAV.

In some optional implementations of the present embodiment, the second preset region may be the region of the UAV image that contains the right wing, the right tailplane and the landing gear of the target UAV.

In some optional implementations of the present embodiment, the feature point detection model may include a fourth convolutional neural network; and the input unit 402 may be further configured to input the UAV image into the fourth convolutional neural network, to obtain the target feature point coordinate sequence corresponding to the target UAV.

It should be noted that, for the implementation details and technical effects of the units in the apparatus for determining UAV attitude information provided by the embodiments of the present application, reference may be made to the descriptions of other embodiments in the present application, which are not repeated here.

Referring now to Fig. 5, a structural schematic diagram of a computer system 500 of an electronic device suitable for implementing the embodiments of the present application is shown. The electronic device shown in Fig. 5 is merely an example, and should not impose any limitation on the functions or the scope of use of the embodiments of the present application.

As shown in Fig. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage portion 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the system 500. The CPU 501, the ROM 502 and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.

The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse and the like; an output portion 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker and the like; a storage portion 508 including a hard disk and the like; and a communication portion 509 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication portion 509 performs communication processing via a network such as the Internet. A driver 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 510 as needed, so that a computer program read therefrom can be installed into the storage portion 508 as needed.

In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the above functions defined in the method of the present application are performed.

It should be noted that the computer-readable medium described herein may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in connection with an instruction execution system, apparatus or device.

In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, or any suitable combination of the above.

The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including a first acquisition unit, an input unit, a second acquisition unit and a solving unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the first acquisition unit may also be described as "a unit for acquiring a UAV image obtained by photographing a target UAV".

As another aspect, the present application further provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the apparatus, the apparatus is caused to: acquire a UAV image obtained by photographing a target UAV; input the UAV image into a feature point detection model trained in advance, to obtain a target feature point coordinate sequence corresponding to the target UAV, where the feature point detection model is used to characterize the correspondence between an image containing a UAV and a feature point coordinate sequence; acquire a three-dimensional feature point coordinate sequence characterizing a three-dimensional model of the target UAV; and solve the perspective-n-point problem based on the target feature point coordinate sequence and the three-dimensional feature point coordinate sequence, to obtain the attitude information of the target UAV.

The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, but should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
