Depth map processing method and device and unmanned aerial vehicle

Document number: 1950484  Publication date: 2021-12-10

Abstract: This technique, "A depth map processing method and device and an unmanned aerial vehicle", was created by Li Zhaozao on 2018-12-29. Embodiments of the invention relate to a depth map processing method, a depth map processing device, and an unmanned aerial vehicle. The depth map processing method comprises the following steps: S1, correcting images of a target area acquired by an image acquisition device; S2, performing binocular matching on the images to obtain a depth map of the target area; S3, acquiring the distribution of obstacles around the unmanned aerial vehicle according to the depth map. The method further comprises: before the above steps are executed, acquiring the execution time of each step; and establishing at least two threads and at least one ring queue according to those execution times, the steps being executed by the at least two threads respectively so as to reduce the total execution time. Each thread can obtain the processing results of the other threads through at least one ring queue. By adding ring queues and running multiple threads in parallel, the problem of image-processing blocking is solved and latency is reduced.

1. A depth map processing method for a controller of a drone, the drone further comprising an image acquisition device communicatively connected to the controller, the method comprising:

S1, correcting the image of the target area acquired by the image acquisition device;

S2, performing binocular matching on the image to obtain a depth map of the target area;

S3, acquiring the distribution of obstacles around the drone according to the depth map;

the method further comprises the following steps:

acquiring the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3 before the step S1, the step S2 and the step S3 are executed;

judging whether the sum of the execution times of the step S1, the step S2 and the step S3 meets a first preset condition, wherein the first preset condition is that: the sum of the execution times of the step S1, the step S2 and the step S3 is greater than a preset value;

if yes, establishing a first thread, a second thread and a first ring queue, wherein the first thread executes two of the step S1, the step S2 and the step S3, and the second thread executes the remaining one of the step S1, the step S2 and the step S3;

judging whether the sum of the execution time of the two steps executed by the first thread meets a second preset condition, wherein the second preset condition is that the sum of the execution time of the two steps executed by the first thread is greater than a preset value;

if so, establishing a third thread and a second ring queue, and executing the two steps by the first thread and the third thread respectively; and, of any two threads executing adjacent steps, the thread executing the preceding step sends its processing result to the ring queue, and the thread executing the following step takes the processing result out of the ring queue and executes the following step according to that processing result.

2. The method of claim 1, wherein the preset value is 1000/P, where P is the image frame rate.

3. The method of claim 1, wherein the controller comprises a hardware acceleration channel, and the image acquisition device comprises at least two groups of binocular units;

then, the performing binocular matching on the image to obtain the depth map of the target area includes:

and sending the images collected by the at least two groups of binocular units to the hardware acceleration channel, and performing binocular matching on the images collected by the at least two groups of binocular units through time division multiplexing of the hardware acceleration channel to obtain the depth map.

4. The method according to any one of claims 1-3, wherein the number of hardware acceleration channels is at least two, and images acquired by the at least two groups of binocular units comprise an image group with a resolution of a first resolution and an image group with a resolution of a second resolution, wherein the second resolution is greater than the first resolution;

then, the performing binocular matching on the image to obtain the depth map of the target area includes:

sending the image group with the second resolution to one of the at least two hardware acceleration channels for binocular matching so as to obtain a depth map corresponding to the image group;

and sending the image group with the resolution of the first resolution to another one of the at least two hardware acceleration channels for binocular matching so as to obtain a depth map corresponding to the image group.

5. The method according to claim 4, wherein the number of the hardware acceleration channels is 4, which are respectively a first hardware acceleration channel, a second hardware acceleration channel, a third hardware acceleration channel and a fourth hardware acceleration channel;

the image acquisition device comprises 6 groups of binocular units, wherein in the 6 groups of binocular units, 4 groups of images acquired by the binocular units are images of the first resolution, and 2 groups of images acquired by the binocular units are images of the second resolution;

then, the sending the image group with the second resolution to one of the at least two hardware acceleration channels for binocular matching to obtain a depth map corresponding to the image group includes:

sending the image groups collected by the 2 groups of binocular units that acquire images of the second resolution to the first hardware acceleration channel and the second hardware acceleration channel respectively;

the sending the image group with the first resolution to another of the at least two hardware acceleration channels for binocular matching to obtain a depth map corresponding to the image group includes:

and sending the image groups collected by 2 groups of binocular units in the 4 groups of binocular units to the third hardware acceleration channel, and sending the image groups collected by the remaining 2 groups of binocular units in the 4 groups of binocular units to the fourth hardware acceleration channel.

6. A depth map processing apparatus for a controller of a drone, the drone further comprising an image acquisition device communicatively connected to the controller, wherein the apparatus comprises:

an image correction module, configured to perform step S1: correcting the image of the target area acquired by the image acquisition device;

a depth map acquisition module, configured to perform step S2: performing binocular matching on the image to obtain a depth map of the target area;

an obstacle distribution acquisition module, configured to perform step S3: acquiring the distribution of obstacles around the drone according to the depth map;

the device also includes:

a time acquisition module for acquiring the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3 before the step S1, the step S2 and the step S3 are executed;

a thread and ring queue creation module to:

judging whether the sum of the execution times of the step S1, the step S2 and the step S3 meets a first preset condition, wherein the first preset condition is that: the sum of the execution times of the step S1, the step S2 and the step S3 is greater than a preset value;

if yes, establishing a first thread, a second thread and a first ring queue, wherein the first thread executes two of the step S1, the step S2 and the step S3, and the second thread executes the remaining one of the step S1, the step S2 and the step S3;

judging whether the sum of the execution time of the two steps executed by the first thread meets a second preset condition, wherein the second preset condition is that the sum of the execution time of the two steps executed by the first thread is greater than a preset value;

if so, establishing a third thread and a second ring queue, and executing the two steps by the first thread and the third thread respectively;

and, of any two threads executing adjacent steps, the thread executing the preceding step sends its processing result to the ring queue, and the thread executing the following step takes the processing result out of the ring queue and executes the following step according to that processing result.

7. The apparatus of claim 6, wherein the preset value is 1000/P, where P is the image frame rate.

8. The apparatus of claim 6, wherein the controller comprises a hardware acceleration channel, and the image capture device comprises at least two sets of binocular units;

then, the depth map acquisition module is specifically configured to:

and sending the images collected by the at least two groups of binocular units to the hardware acceleration channel, and performing binocular matching on the images collected by the at least two groups of binocular units through time division multiplexing of the hardware acceleration channel to obtain the depth map.

9. The apparatus according to any one of claims 6-8, wherein the number of hardware acceleration channels is at least two, and images acquired by the at least two groups of binocular units include an image group with a resolution of a first resolution and an image group with a resolution of a second resolution, wherein the second resolution is greater than the first resolution;

then, the depth map acquisition module is specifically configured to:

sending the image group with the second resolution to one of the at least two hardware acceleration channels for binocular matching so as to obtain a depth map corresponding to the image group;

and sending the image group with the resolution of the first resolution to another one of the at least two hardware acceleration channels for binocular matching so as to obtain a depth map corresponding to the image group.

10. The apparatus according to claim 9, wherein the number of the hardware acceleration channels is 4, which are a first hardware acceleration channel, a second hardware acceleration channel, a third hardware acceleration channel and a fourth hardware acceleration channel;

the image acquisition device comprises 6 groups of binocular units, wherein in the 6 groups of binocular units, 4 groups of images acquired by the binocular units are images of the first resolution, and 2 groups of images acquired by the binocular units are images of the second resolution;

then, the depth map acquisition module is specifically configured to:

sending the image groups collected by the 2 groups of binocular units to the first hardware acceleration channel and the second hardware acceleration channel respectively;

and sending the image groups collected by 2 groups of binocular units in the 4 groups of binocular units to the third hardware acceleration channel, and sending the image groups collected by the remaining 2 groups of binocular units in the 4 groups of binocular units to the fourth hardware acceleration channel.

11. A drone, characterized in that it comprises:

a body;

an arm connected to the body;

a power device mounted on the arm;

an image acquisition device mounted on the body and configured to acquire a target image of a target area around the drone;

a vision chip mounted on the body and communicatively connected to the image acquisition device;

the vision chip includes:

at least one processor, and

a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the method of any of claims 1-5.

12. A non-transitory computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a drone, cause the drone to perform the method of any one of claims 1-5.

Technical Field

The embodiment of the invention relates to the technical field of unmanned aerial vehicles, in particular to a depth map processing method and device and an unmanned aerial vehicle.

Background

During autonomous flight, an unmanned aerial vehicle must fly around obstacles, and therefore needs to detect the positions of obstacles so that it can take avoidance measures accordingly. At present, most unmanned aerial vehicles use a vision system (such as a monocular or binocular vision system) to detect obstacle positions: an image acquisition device captures images of the area around the vehicle, and the images are processed to determine the position information of surrounding obstacles. The unmanned aerial vehicle then takes obstacle avoidance measures such as detouring, decelerating or hovering, according to its own speed and attitude and the position information of the obstacles.

In the process of implementing the invention, the inventor found at least the following problem in the related art: when the unmanned aerial vehicle processes images and takes obstacle avoidance measures according to the processing results, a high image acquisition frame rate easily causes image processing to block, resulting in large delay.

Disclosure of Invention

The embodiments of the invention aim to provide a depth map processing method and device and an unmanned aerial vehicle, which can solve the problems of image-processing blocking and large delay when the unmanned aerial vehicle uses a vision system.

In a first aspect, an embodiment of the present invention provides a depth map processing method, which is used for a controller of an unmanned aerial vehicle, where the unmanned aerial vehicle further includes an image acquisition device, the image acquisition device is in communication connection with the controller, and the method includes:

S1, correcting the image of the target area acquired by the image acquisition device;

S2, performing binocular matching on the image to obtain a depth map of the target area;

S3, acquiring the distribution of obstacles around the unmanned aerial vehicle according to the depth map;

the method further comprises the following steps:

acquiring the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3 before the step S1, the step S2 and the step S3 are executed;

establishing at least two threads and at least one circular queue according to the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3, wherein the step S1, the step S2 and the step S3 are respectively executed by the at least two threads to reduce the total execution time;

and, of any two threads executing adjacent steps, the thread executing the preceding step sends its processing result to the ring queue, and the thread executing the following step takes the processing result out of the ring queue and executes the following step according to that processing result.
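The thread-plus-ring-queue pipeline described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the ring queue is a bounded buffer guarded by a condition variable, `correct` stands in for step S1, and `match_and_detect` stands in for steps S2 and S3 running together in a second thread; all names are ours.

```python
import threading
from collections import deque

class RingQueue:
    """Fixed-capacity FIFO shared between two pipeline threads."""
    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)  # bounded ring buffer
        self._cv = threading.Condition()

    def put(self, item):
        with self._cv:
            self._buf.append(item)  # oldest item is dropped when full
            self._cv.notify()

    def get(self):
        with self._cv:
            while not self._buf:
                self._cv.wait()
            return self._buf.popleft()

def correct(frame):            # stand-in for step S1
    return frame + "-corrected"

def match_and_detect(frame):   # stand-in for steps S2 and S3
    return frame + "-depth"

queue = RingQueue(capacity=4)
results = []

def first_thread(frames):
    for f in frames:
        queue.put(correct(f))  # preceding step's result goes into the queue

def second_thread(count):
    for _ in range(count):
        results.append(match_and_detect(queue.get()))  # following step reads it

frames = ["frame0", "frame1", "frame2"]
t1 = threading.Thread(target=first_thread, args=(frames,))
t2 = threading.Thread(target=second_thread, args=(len(frames),))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)
```

Because the first thread can start correcting the next frame while the second thread is still matching the previous one, the per-frame latency is bounded by the slowest stage rather than by the sum of all stages.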

In some embodiments, the establishing at least two threads and at least one circular queue according to the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3, the step S1, the step S2 and the step S3 being executed by the at least two threads respectively to reduce the total execution time includes:

judging whether the sum of the execution times of the step S1, the step S2 and the step S3 meets a first preset condition;

if yes, establishing a first thread, a second thread and a first ring queue, and executing the step S1, the step S2 and the step S3 by the first thread and the second thread respectively;

and in the first thread and the second thread, the processing result of the thread executing the previous step is sent to the first ring queue, and the thread executing the next step takes out the processing result from the first ring queue and executes the next step according to the processing result.

In some embodiments, the first preset condition is:

the sum of the execution times of the step S1, the step S2, and the step S3 is greater than a preset value.

In some embodiments, the preset value is 1000/P, where P is the image frame rate.
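The rationale for the 1000/P preset value: at P frames per second, a new image arrives every 1000/P milliseconds, so if the three steps together take longer than that, single-threaded processing cannot keep up with acquisition. A small illustrative check (function names are ours, not the patent's):

```python
def frame_budget_ms(frame_rate):
    """Per-frame time budget in milliseconds at the given frame rate (fps)."""
    return 1000.0 / frame_rate

def needs_pipelining(t1_ms, t2_ms, t3_ms, frame_rate):
    # First preset condition: the summed execution times of steps
    # S1, S2 and S3 exceed the per-frame budget.
    return (t1_ms + t2_ms + t3_ms) > frame_budget_ms(frame_rate)

print(frame_budget_ms(25))               # 40.0 ms between frames at 25 fps
print(needs_pipelining(15, 20, 10, 25))  # 45 ms > 40 ms, so True
```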

In some embodiments, the first thread performs two of the steps S1, S2, and S3, and the second thread performs one of the steps S1, S2, and S3;

if the sum of the execution times of the step S1, the step S2, and the step S3 satisfies a first preset condition, a first thread, a second thread, and a first ring queue are established, and the step S1, the step S2, and the step S3 are respectively executed by the first thread and the second thread, which includes:

judging whether the sum of the execution time of the two steps executed by the first thread meets a second preset condition or not;

if so, establishing a third thread and a second ring queue, and executing the two steps by the first thread and the third thread respectively;

and in the first thread and the third thread, the processing result of the thread executing the previous step is sent to the second ring queue, and the thread executing the next step takes out the processing result from the second ring queue and executes the next step according to the processing result.
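Taken together, the two preset conditions amount to splitting the step sequence across threads until no single thread exceeds the per-frame budget, with one ring queue linking each pair of adjacent threads. A greedy sketch of that idea (our simplification for illustration, not the patent's exact procedure):

```python
def assign_steps_to_threads(step_times_ms, budget_ms):
    """Pack consecutive steps into one thread until adding the next step
    would exceed the budget, then start a new thread. Returns the list of
    per-thread step groups and the number of ring queues needed (one
    between each pair of adjacent threads)."""
    threads, current, total = [], [], 0.0
    for t in step_times_ms:
        if current and total + t > budget_ms:
            threads.append(current)
            current, total = [], 0.0
        current.append(t)
        total += t
    threads.append(current)
    return threads, len(threads) - 1

# 15 + 20 + 10 = 45 ms exceeds a 40 ms budget, so the three steps are
# split into two threads joined by one ring queue.
print(assign_steps_to_threads([15, 20, 10], 40.0))
```

With a tighter budget of 30 ms, the first group is split again, which corresponds to establishing the third thread and the second ring queue in the text above.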

In some embodiments, the second preset condition is that the sum of the execution times of the two steps executed by the first thread is greater than a preset value.

In some embodiments, the predetermined value is 1000/P, where P is the image frame rate.

In some embodiments, the controller comprises a hardware acceleration channel, and the image capture device comprises at least two sets of binocular units;

then, the performing binocular matching on the image to obtain the depth map of the target area includes:

and sending the images collected by the at least two groups of binocular units to the hardware acceleration channel, and performing binocular matching on the images collected by the at least two groups of binocular units through time division multiplexing of the hardware acceleration channel to obtain the depth map.

In some embodiments, the number of the hardware acceleration channels is at least two, and the images acquired by the at least two groups of binocular units comprise image groups with a first resolution and image groups with a second resolution, wherein the second resolution is greater than the first resolution;

then, the performing binocular matching on the image to obtain the depth map of the target area includes:

sending the image group with the second resolution to one of the at least two hardware acceleration channels for binocular matching so as to obtain a depth map corresponding to the image group;

and sending the image group with the resolution of the first resolution to another one of the at least two hardware acceleration channels for binocular matching so as to obtain a depth map corresponding to the image group.

In some embodiments, the number of the hardware acceleration channels is 4, which are respectively a first hardware acceleration channel, a second hardware acceleration channel, a third hardware acceleration channel and a fourth hardware acceleration channel;

the image acquisition device comprises 6 groups of binocular units, wherein in the 6 groups of binocular units, 4 groups of images acquired by the binocular units are images of the first resolution, and 2 groups of images acquired by the binocular units are images of the second resolution;

then, the sending the image group with the second resolution to one of the at least two hardware acceleration channels for binocular matching to obtain a depth map corresponding to the image group includes:

sending the image groups collected by the 2 groups of binocular units that acquire images of the second resolution to the first hardware acceleration channel and the second hardware acceleration channel respectively;

the sending the image group with the first resolution to another of the at least two hardware acceleration channels for binocular matching to obtain a depth map corresponding to the image group includes:

and sending the image groups collected by 2 groups of binocular units in the 4 groups of binocular units to the third hardware acceleration channel, and sending the image groups collected by the remaining 2 groups of binocular units in the 4 groups of binocular units to the fourth hardware acceleration channel.

In a second aspect, an embodiment of the present invention provides a depth map processing apparatus, which is used for a controller of an unmanned aerial vehicle, where the unmanned aerial vehicle further includes an image acquisition device, the image acquisition device is in communication connection with the controller, and the apparatus includes:

an image correction module, configured to perform step S1: correcting the image of the target area acquired by the image acquisition device;

a depth map acquisition module, configured to perform step S2: performing binocular matching on the image to obtain a depth map of the target area;

an obstacle distribution acquisition module, configured to perform step S3: acquiring the distribution of obstacles around the unmanned aerial vehicle according to the depth map;

the device also includes:

a time acquisition module for acquiring the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3 before the step S1, the step S2 and the step S3 are executed;

a thread and ring queue establishing module, configured to establish at least two threads and at least one ring queue according to the execution time of the step S1, the execution time of the step S2, and the execution time of the step S3, and the at least two threads execute the step S1, the step S2, and the step S3, respectively, so as to reduce a total execution time;

and, of any two threads executing adjacent steps, the thread executing the preceding step sends its processing result to the ring queue, and the thread executing the following step takes the processing result out of the ring queue and executes the following step according to that processing result.

In some embodiments, the thread and ring queue establishing module comprises:

a determining submodule, configured to determine whether a sum of execution times of the step S1, the step S2, and the step S3 satisfies a first preset condition;

a thread and ring queue establishing sub-module, configured to establish a first thread, a second thread, and a first ring queue if a sum of execution times of the step S1, the step S2, and the step S3 meets a first preset condition, and the first thread and the second thread execute the step S1, the step S2, and the step S3, respectively;

and in the first thread and the second thread, the processing result of the thread executing the previous step is sent to the first ring queue, and the thread executing the next step takes out the processing result from the first ring queue and executes the next step according to the processing result.

In some embodiments, the first preset condition is:

the sum of the execution times of the step S1, the step S2, and the step S3 is greater than a preset value.

In some embodiments, the preset value is 1000/P, where P is the image frame rate.

In some embodiments, the first thread performs two of the steps S1, S2, and S3, and the second thread performs one of the steps S1, S2, and S3;

the thread and circular queue establishing submodule is specifically configured to:

judging whether the sum of the execution time of the two steps executed by the first thread meets a second preset condition or not;

if so, establishing a third thread and a second ring queue, and executing the two steps by the first thread and the third thread respectively;

and in the first thread and the third thread, the processing result of the thread executing the previous step is sent to the second ring queue, and the thread executing the next step takes out the processing result from the second ring queue and executes the next step according to the processing result.

In some embodiments, the second preset condition is that the sum of the execution times of the two steps executed by the first thread is greater than a preset value.

In some embodiments, the preset value is 1000/P, where P is the image frame rate.

In some embodiments, the controller comprises a hardware acceleration channel, and the image capture device comprises at least two sets of binocular units;

then, the depth map acquisition module is specifically configured to:

and sending the images collected by the at least two groups of binocular units to the hardware acceleration channel, and performing binocular matching on the images collected by the at least two groups of binocular units through time division multiplexing of the hardware acceleration channel to obtain the depth map.

In some embodiments, the number of the hardware acceleration channels is at least two, and the images acquired by the at least two groups of binocular units comprise image groups with a first resolution and image groups with a second resolution, wherein the second resolution is greater than the first resolution;

then, the depth map acquisition module is specifically configured to:

sending the image group with the second resolution to one of the at least two hardware acceleration channels for binocular matching so as to obtain a depth map corresponding to the image group;

and sending the image group with the resolution of the first resolution to another one of the at least two hardware acceleration channels for binocular matching so as to obtain a depth map corresponding to the image group.

In some embodiments, the number of the hardware acceleration channels is 4, which are respectively a first hardware acceleration channel, a second hardware acceleration channel, a third hardware acceleration channel and a fourth hardware acceleration channel;

the image acquisition device comprises 6 groups of binocular units, wherein in the 6 groups of binocular units, 4 groups of images acquired by the binocular units are images of the first resolution, and 2 groups of images acquired by the binocular units are images of the second resolution;

then, the depth map acquisition module is specifically configured to:

sending the image groups collected by the 2 groups of binocular units to the first hardware acceleration channel and the second hardware acceleration channel respectively;

and sending the image groups collected by 2 groups of binocular units in the 4 groups of binocular units to the third hardware acceleration channel, and sending the image groups collected by the remaining 2 groups of binocular units in the 4 groups of binocular units to the fourth hardware acceleration channel.

In a third aspect, an embodiment of the present invention provides an unmanned aerial vehicle, where the unmanned aerial vehicle includes:

a body;

an arm connected to the body;

a power device mounted on the arm;

an image acquisition device mounted on the body and configured to acquire a target image of a target area around the unmanned aerial vehicle;

a vision chip mounted on the body and communicatively connected to the image acquisition device;

the vision chip includes:

at least one processor, and

a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the method described above.

In a fourth aspect, embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a drone, cause the drone to perform the method described above.

According to the depth map processing method and device and the unmanned aerial vehicle provided above, at least two threads and at least one ring queue are established according to the execution time of each step of the depth map processing performed by the unmanned aerial vehicle's controller, and the steps of the depth map processing are executed by the at least two threads respectively. Each thread can obtain the processing results of the other threads through at least one ring queue. By adding ring queues and running multiple threads in parallel, the problem of image-processing blocking is solved and latency is reduced.

Drawings

One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals denote similar elements. Unless otherwise specified, the figures in the drawings are not drawn to scale.

FIG. 1 is a schematic diagram of an application scenario of a depth map processing method and apparatus according to an embodiment of the present invention;

fig. 2 is a schematic diagram of a hardware structure of an embodiment of the drone of the present invention;

FIG. 3 is a schematic flow chart diagram illustrating a depth map processing method according to an embodiment of the present invention;

FIG. 4 is a diagram of a depth map processing method according to an embodiment of the present invention, in which a hardware acceleration channel is applied;

FIG. 5 is a schematic structural diagram of one embodiment of a depth map processing apparatus of the present invention;

FIG. 6 is a schematic structural diagram of one embodiment of a depth map processing apparatus of the present invention;

fig. 7 is a schematic diagram of a hardware structure of a vision chip in an embodiment of the drone of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

The depth map processing method and device and the unmanned aerial vehicle provided by the embodiments of the invention are applicable to the application scenario shown in FIG. 1. The application scenario includes the drone 100 and an obstacle 200. The drone 100 may be any suitable unmanned aerial vehicle, including fixed-wing and rotary-wing aircraft such as helicopters, quadrotors, and aircraft with other numbers of rotors and/or other rotor configurations. The drone 100 may also be another movable object, such as a manned vehicle, a model aircraft, an unmanned airship, or an unmanned hot-air balloon. The obstacle 200 may be, for example, a building, a mountain, a tree, a forest, a signal tower, or another movable or immovable object (only one obstacle is shown in FIG. 1; in practice there may be more obstacles or no obstacles).

In some embodiments, referring to fig. 2 (which shows only part of the structure of the drone 100), the drone 100 includes a body 10, arms connected to the body 10, a power device, and a control system provided in the body 10. The power device provides thrust and lift for the drone 100 to fly. The control system is the control center of the drone 100 and may include a number of functional units, such as a flight control system, a vision system, and other systems with specific functions. The vision system includes an image acquisition device 30, a vision chip 20, and the like; the flight control system includes various sensors (such as a gyroscope and an accelerometer) and a flight control chip.

The drone 100 needs to identify and avoid the obstacle 200 ahead of it during autonomous flight. The drone 100 can detect the position information of obstacles around it through the image acquisition device 30 and the vision chip 20, and take obstacle avoidance measures according to that position information. The image acquisition device 30 is used to obtain a target image of a target area, and may be, for example, a high-definition camera or an action camera. The vision chip 20 is communicatively connected to the image acquisition device 30; it can acquire the target image captured by the image acquisition device 30, perform image processing on it, and obtain depth information of the area corresponding to the target image, thereby obtaining the position information of the obstacle 200 around the drone 100. According to the position information of the obstacle 200, the vision chip 20 can determine obstacle avoidance measures, and the flight control chip controls the drone 100 accordingly. The obstacle avoidance measures include controlling the drone to decelerate or hover, among others. Based on the position information of the obstacle 200, the vision chip 20 can also determine the distance of an obstacle and perform three-dimensional reconstruction.

The image acquisition device 30 may include at least one monocular unit or at least one binocular unit (the binocular unit is used as the example below). Each binocular unit obtains a group of target images, captured at a preset frame rate. The vision chip 20 must process the target image group from each binocular unit to obtain the corresponding depth information, and must then obtain the distribution of obstacles 200 around the drone 100 from the depth information of each group. When the capture frame rate is high, the vision chip 20 may be unable to keep up with the incoming target images, so that processing of the input image groups backs up; image processing then blocks and the delay becomes large.

To solve the problems of blocked image processing and large delay, the execution time of each step of the depth map processing performed by the drone's controller is examined. When the sum of these execution times is large, at least two threads and at least one ring queue can be established, and the at least two threads execute the steps of the depth map processing respectively. Each thread can obtain the processing results of other threads through the at least one ring queue. By adding ring queues and running multiple threads in parallel, the problem of blocked image processing is solved and the delay is reduced.

In the above embodiments, the drone 100 is provided with the vision chip 20 to determine obstacle avoidance measures according to the images acquired by the image acquisition device 30; in other embodiments, the drone 100 may also use another controller to implement the functions of the vision chip 20.

Fig. 3 is a schematic flow chart of a depth map processing method provided in an embodiment of the present invention. The method is used for a controller of the drone 100 shown in fig. 1 or fig. 2; in some embodiments, the controller may be the vision chip 20 of the drone 100. As shown in fig. 3, the method includes:

S1, correcting the image of the target area acquired by the image acquisition device.

The image acquisition device can be a binocular unit. Correcting the images acquired by the image acquisition device includes rectifying the images and calibrating each group of images acquired by the binocular unit, so as to obtain the calibration parameters corresponding to each group of images.

S2, performing binocular matching on the image to obtain a depth map of the target area.

Binocular matching yields a disparity map for each group of images, and the depth information of the area corresponding to each group of images is obtained from the disparity map and the calibration parameters.
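The disparity-to-depth conversion mentioned above is conventionally done with the pinhole stereo relation depth = f · B / d, where f is the focal length in pixels, B is the baseline between the two cameras, and d is the disparity. The following sketch illustrates this relation; the function name and the calibration values are hypothetical and not taken from this document.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a disparity value (pixels) to depth (metres) using the
    standard pinhole stereo relation: depth = f * B / d."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity corresponds to a point at infinity
    return focal_px * baseline_m / disparity_px

# Hypothetical calibration: 400 px focal length, 10 cm baseline.
depth = disparity_to_depth(disparity_px=20.0, focal_px=400.0, baseline_m=0.10)
print(depth)  # 2.0 (metres)
```

A point 2 m away thus produces a 20-pixel disparity under this hypothetical calibration; closer points produce larger disparities.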

S3, obtaining the distribution of obstacles around the unmanned aerial vehicle according to the depth map.

Data processing is performed on the depth map to obtain the distribution of obstacles around the drone; the obstacle distribution includes, for example, the position information of each obstacle, the distance of each obstacle, and a three-dimensional map of the environment around the drone.

S4, before executing the above steps, obtaining the execution time of step S1, the execution time of step S2 and the execution time of step S3.

The execution time of each step may be set in advance by a designer based on knowledge of each step's typical processing time. Alternatively, steps S1, S2 and S3 may first be run on the controller for a trial period, during which the controller measures the execution time of each step to obtain the execution times of steps S1, S2 and S3.
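The trial-run measurement described above can be sketched as a simple wall-clock timing wrapper. This is an illustrative sketch only; the three step functions are placeholders standing in for S1 (rectification), S2 (binocular matching) and S3 (obstacle distribution).

```python
import time

def measure_step(step_fn, *args):
    """Run one processing step and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = step_fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# Placeholders standing in for S1 (rectify), S2 (match), S3 (obstacles).
def rectify(img): return img
def match(img): return img
def detect(depth): return depth

_, t1 = measure_step(rectify, "frame")
_, t2 = measure_step(match, "frame")
_, t3 = measure_step(detect, "frame")
print(t1, t2, t3)  # measured times in milliseconds, one per step
```

In practice each placeholder would be the real processing routine, and the measured t1, t2, t3 would feed the thread-allocation decision in step S5 below.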

S5, establishing at least two threads and at least one ring queue according to the execution times of steps S1, S2 and S3, and executing steps S1, S2 and S3 with the at least two threads, so as to reduce the total execution time. Of the two threads executing adjacent steps, the thread executing the earlier step sends its processing result to the ring queue, and the thread executing the later step takes the processing result out of the ring queue and executes the later step accordingly.

Whether to execute steps S1, S2 and S3 in parallel with at least two threads is determined according to the obtained execution times of steps S1, S2 and S3. The number of threads and the number of ring queues may likewise be determined from the execution time of each step; for example, two threads with one ring queue, or three threads with two ring queues.

In some embodiments, if the total execution time of steps S1, S2 and S3 is long (e.g., greater than 1000/P milliseconds, where P is the frame rate, in frames per second, at which the image acquisition device captures images), image blocking may occur. To avoid it, at least two threads may be established to execute steps S1, S2 and S3 in parallel. If two adjacent steps are executed by different threads, the thread executing the earlier step sends its processing result to the ring queue, and the thread executing the later step acquires the processing result from the ring queue.

For example, a first thread, a second thread and a first ring queue may be established; two of steps S1, S2 and S3 are executed by the first thread and the remaining one by the second thread, or, alternatively, two of the steps are executed by the second thread and the remaining one by the first thread.

For example, the first thread performs steps S1 and S2, and the second thread performs step S3. The first thread sends the processing result of step S2 to the first ring queue, and the second thread obtains the processing result from the first ring queue and performs step S3. In other embodiments, step S1 may also be performed by the first thread, step S2 and step S3 may be performed by the second thread, and so on.
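The arrangement just described, one thread running steps S1 and S2 while a second thread runs step S3, connected by a bounded ring queue, can be sketched as follows. This is an illustrative sketch: the step functions are placeholders, and Python's bounded `queue.Queue` stands in for the ring queue (it blocks the producer when full, just as a full ring queue would).

```python
import queue
import threading

RING_CAPACITY = 4                      # bounded buffer, acting as the ring queue
ring = queue.Queue(maxsize=RING_CAPACITY)
results = []

def rectify_and_match(frame):          # placeholder for steps S1 + S2
    return f"depth({frame})"

def obstacle_distribution(depth):      # placeholder for step S3
    return f"obstacles({depth})"

def producer(frames):
    """First thread: runs S1 and S2, pushes each depth map into the ring."""
    for frame in frames:
        ring.put(rectify_and_match(frame))  # blocks when the ring is full
    ring.put(None)                          # sentinel: no more frames

def consumer():
    """Second thread: pops depth maps from the ring and runs S3."""
    while True:
        depth = ring.get()
        if depth is None:
            break
        results.append(obstacle_distribution(depth))

t_prod = threading.Thread(target=producer, args=(["f0", "f1", "f2"],))
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
print(results)
# ['obstacles(depth(f0))', 'obstacles(depth(f1))', 'obstacles(depth(f2))']
```

Because the two stages overlap in time, the per-frame cost approaches the slower stage rather than the sum of both, which is exactly the effect the patent relies on.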

Assume the execution time of step S1 is t1, that of step S2 is t2 and that of step S3 is t3. If t1 + t2 + t3 > 1000/P while t1 + t2 < 1000/P and t3 < 1000/P, the first thread executes steps S1 and S2 and the second thread executes step S3. With the two threads running in parallel, the per-frame execution time becomes max(t1 + t2, t3) < 1000/P, which reduces the total execution time and effectively avoids image blocking.
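The arithmetic of this decision rule can be made concrete with a small sketch; the function names and the sample times are hypothetical, chosen only to illustrate the 1000/P threshold.

```python
def needs_pipeline(t1, t2, t3, frame_rate):
    """True when S1 + S2 + S3 cannot finish within one frame period
    (1000 / P milliseconds), i.e. a multi-thread pipeline is needed."""
    frame_period_ms = 1000.0 / frame_rate
    return t1 + t2 + t3 > frame_period_ms

def pipelined_period(t1, t2, t3):
    """Per-frame time when thread 1 runs S1 + S2 and thread 2 runs S3:
    the pipeline is limited by its slowest stage."""
    return max(t1 + t2, t3)

# Hypothetical times (ms) at P = 30 fps; the frame period is about 33.3 ms.
t1, t2, t3, P = 12.0, 18.0, 10.0, 30
print(needs_pipeline(t1, t2, t3, P))   # True  (40 ms > 33.3 ms, single thread blocks)
print(pipelined_period(t1, t2, t3))    # 30.0  (pipelined, fits in one frame)
```

Here the serial cost of 40 ms exceeds the 33.3 ms frame period, but after splitting into two stages the bottleneck stage takes only 30 ms, so the pipeline keeps up with the camera.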

In some embodiments, if one thread executes two adjacent steps and the total execution time of those two steps is still large (for example, greater than 1000/P), the two steps may in turn be executed by two separate threads to further avoid image blocking.

Taking the case where the first thread executes steps S1 and S2 and the second thread executes step S3 as an example: if t1 + t2 > 1000/P and t3 < 1000/P, a third thread and a second ring queue may be established. The first thread then executes step S1 and the third thread executes step S2; the first thread sends the processing result of step S1 to the second ring queue, and the third thread fetches the processing result from the second ring queue and executes step S2. In other embodiments, step S1 may instead be executed by the third thread and step S2 by the first thread.

In other embodiments, if the execution time of any single one of steps S1, S2 and S3 is long, for example t3 > 1000/P, two or more threads may further be used to execute that step, e.g. step S3.

In practical applications, multiple groups of binocular units are usually used for depth detection, and binocular matching of the images takes a relatively long time. A hardware acceleration channel is a device, consisting of hardware and interface software, that can increase the running speed of the software.

In some embodiments, a hardware acceleration channel may be used to increase the running speed. Each frame image group continuously captured by at least two groups of binocular units is sent to the hardware acceleration channel in turn, and each group of images is processed by time-division multiplexing of the hardware acceleration channel to obtain the depth information corresponding to each group. The first group of images is sent to the hardware acceleration channel for processing; when it finishes, the second group is sent, and so on, polling and multiplexing the single hardware acceleration channel.
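The polling scheme can be sketched as a round-robin scheduler feeding one shared channel. This is an illustrative sketch only: the channel is simulated by a placeholder function, and the unit identifiers are hypothetical.

```python
def accelerate_match(pair):
    """Placeholder for the hardware acceleration channel: one binocular
    image pair in, one depth map out."""
    return f"depth({pair})"

def time_division_multiplex(unit_streams):
    """Poll one frame pair from each binocular unit in turn and feed it
    through the single shared channel (round robin)."""
    depth_maps = []
    for frame_idx in range(max(len(s) for s in unit_streams)):
        for stream in unit_streams:
            if frame_idx < len(stream):
                depth_maps.append(accelerate_match(stream[frame_idx]))
    return depth_maps

# Two hypothetical binocular units, two frame pairs each.
streams = [["u0_p0", "u0_p1"], ["u1_p0", "u1_p1"]]
print(time_division_multiplex(streams))
# ['depth(u0_p0)', 'depth(u1_p0)', 'depth(u0_p1)', 'depth(u1_p1)']
```

The interleaved output order shows the round-robin behaviour: within each frame period every unit gets one turn on the channel before any unit gets a second turn.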

In other embodiments, at least two hardware acceleration channels may be used to increase the running speed. Among the image groups obtained by the binocular units, the groups from the high-resolution binocular units and the groups from the low-resolution binocular units are each matched on their own hardware acceleration channel, further increasing the running speed.

In still other embodiments, the image group obtained by one high-resolution binocular unit may be processed on a dedicated hardware acceleration channel, while the image groups obtained by at least two low-resolution binocular units (for example, two or three) share one hardware acceleration channel. Sharing one channel among the low-resolution units makes full and reasonable use of the hardware acceleration channels when their number is small, without affecting the software running speed. Where at least two binocular units share one hardware acceleration channel, each target image group can be processed by time-division multiplexing of that channel.

For example, in one implementation the drone includes 2 pairs of binocular units with a resolution of 720P and 4 pairs of binocular units with a resolution of VGA, and the controller has 4 hardware acceleration channels. As shown in fig. 4, each high-resolution 720P binocular unit may use one hardware acceleration channel alone, while every two of the low-resolution VGA binocular units share one hardware acceleration channel. The correspondence between the binocular units and the hardware acceleration channels can be set in advance, so that the image group obtained by each binocular unit is sent to its corresponding hardware acceleration channel for processing.
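The preset unit-to-channel correspondence for this 2 x 720P + 4 x VGA example can be sketched as a static lookup table. The unit and channel identifiers below are hypothetical names introduced for illustration only.

```python
# Hypothetical static mapping for the example above: the two 720P units
# each get a dedicated channel; the four VGA units share the remaining
# two channels in pairs.
CHANNEL_OF_UNIT = {
    "720P_0": "channel_1",
    "720P_1": "channel_2",
    "VGA_0": "channel_3",
    "VGA_1": "channel_3",   # shares channel 3 with VGA_0
    "VGA_2": "channel_4",
    "VGA_3": "channel_4",   # shares channel 4 with VGA_2
}

def dispatch(unit_id, image_group):
    """Route an image group to the acceleration channel bound to its unit."""
    channel = CHANNEL_OF_UNIT[unit_id]
    return (channel, image_group)

print(dispatch("VGA_1", "pair_0"))  # ('channel_3', 'pair_0')
```

Because the table is fixed in advance, the dispatch step itself is a constant-time lookup; only the shared channels (3 and 4) need the time-division multiplexing described earlier.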

Accordingly, as shown in fig. 5, an embodiment of the present invention further provides a depth map processing apparatus for the controller of the drone 100 shown in fig. 1 or fig. 2; in some embodiments, the controller may be the vision chip 20 of the drone 100. As shown in fig. 5, the depth map processing apparatus 500 includes:

an image correction module 501, configured to perform step S1, to correct the image of the target area acquired by the image acquisition apparatus;

a depth map obtaining module 502, configured to perform step S2, perform binocular matching on the image, so as to obtain a depth map of the target area;

an obstacle distribution obtaining module 503, configured to execute step S3, and obtain obstacle distribution around the unmanned aerial vehicle according to the depth map;

the device also includes:

a time acquisition module 504, configured to acquire the execution time of step S1, the execution time of step S2 and the execution time of step S3 before step S1, step S2 and step S3 are executed;

a thread and ring queue establishing module 505, configured to establish at least two threads and at least one ring queue according to the execution time of the step S1, the execution time of the step S2, and the execution time of the step S3, and the at least two threads execute the step S1, the step S2, and the step S3, respectively, so as to reduce the total execution time;

and of the two threads executing adjacent steps, the thread executing the earlier step sends its processing result to the ring queue, and the thread executing the later step takes the processing result out of the ring queue and executes the later step accordingly.

According to the embodiment of the invention, at least two threads and at least one ring queue are established according to the execution time of each step of the depth map processing performed by the drone's controller, and the steps of the depth map processing are executed by the at least two threads respectively. Each thread can obtain the processing results of other threads through the at least one ring queue. By adding ring queues and running multiple threads in parallel, the problem of blocked image processing is solved and the delay is reduced.

In some embodiments of the depth map processing apparatus 500, as shown in FIG. 6, the thread and ring queue building module 505 comprises:

a judgment sub-module 5051, configured to judge whether a sum of the execution times of the step S1, the step S2, and the step S3 meets a first preset condition;

a thread and ring queue establishing sub-module 5052, configured to establish a first thread, a second thread, and a first ring queue if a sum of the execution times of the step S1, the step S2, and the step S3 meets a first preset condition, where the step S1, the step S2, and the step S3 are executed by the first thread and the second thread, respectively;

and of the first thread and the second thread, the thread executing the earlier step sends its processing result to the first ring queue, and the thread executing the later step takes the processing result out of the first ring queue and executes the later step accordingly.

In some embodiments of the depth map processing apparatus 500, the first preset condition is:

the sum of the execution times of the step S1, the step S2, and the step S3 is greater than a preset value.

In some embodiments of the depth map processing apparatus 500, the predetermined value is 1000/P, where P is the image frame rate.

In some embodiments of the depth map processing apparatus 500, the first thread performs two of the steps S1, S2, and S3, and the second thread performs one of the steps S1, S2, and S3;

the thread and ring queue building submodule 5052 is specifically configured to:

judging whether the sum of the execution time of the two steps executed by the first thread meets a second preset condition or not;

if so, establishing a third thread and a second ring queue, and executing the two steps by the first thread and the third thread respectively;

and of the first thread and the third thread, the thread executing the earlier step sends its processing result to the second ring queue, and the thread executing the later step takes the processing result out of the second ring queue and executes the later step accordingly.

In some embodiments of the depth map processing apparatus 500, the second preset condition is that a sum of execution times of two steps executed by the first thread is greater than a preset value.

In some embodiments of the depth map processing apparatus 500, the predetermined value is 1000/P, where P is the image frame rate.

In some embodiments of the depth map processing apparatus 500, the controller comprises a hardware acceleration channel, the image acquisition apparatus comprises at least two sets of binocular elements;

then, the depth map obtaining module 502 is specifically configured to:

and sending the images collected by the at least two groups of binocular units to the hardware acceleration channel, and performing binocular matching on the images collected by the at least two groups of binocular units through time division multiplexing of the hardware acceleration channel to obtain the depth map.

In some embodiments of the depth map processing apparatus 500, the number of the hardware acceleration channels is at least two, and the images acquired by the at least two groups of binocular units include an image group with a first resolution and an image group with a second resolution, wherein the second resolution is greater than the first resolution;

then, the depth map obtaining module 502 is specifically configured to:

sending the image group with the second resolution to one of the at least two hardware acceleration channels for binocular matching so as to obtain a depth map corresponding to the image group;

and sending the image group with the resolution of the first resolution to another one of the at least two hardware acceleration channels for binocular matching so as to obtain a depth map corresponding to the image group.

In some embodiments of the depth map processing apparatus 500, the number of the hardware acceleration channels is 4, which are respectively a first hardware acceleration channel, a second hardware acceleration channel, a third hardware acceleration channel, and a fourth hardware acceleration channel;

the image acquisition device comprises 6 groups of binocular units, wherein in the 6 groups of binocular units, 4 groups of images acquired by the binocular units are images of the first resolution, and 2 groups of images acquired by the binocular units are images of the second resolution;

then, the depth map obtaining module 502 is specifically configured to:

sending the image groups collected by the 2 groups of binocular units to the first hardware acceleration channel and the second hardware acceleration channel respectively;

and sending the image groups collected by 2 groups of binocular units in the 4 groups of binocular units to the third hardware acceleration channel, and sending the image groups collected by the remaining 2 groups of binocular units in the 4 groups of binocular units to the fourth hardware acceleration channel.

It should be noted that the above-mentioned apparatus can execute the method provided by the embodiments of the present application, and has corresponding functional modules and beneficial effects for executing the method. For technical details which are not described in detail in the device embodiments, reference is made to the methods provided in the embodiments of the present application.

Fig. 7 is a schematic diagram of a hardware structure of the vision chip 20 in an embodiment of the drone 100, and as shown in fig. 7, the vision chip 20 includes:

one or more processors 21 and a memory 22, with one processor 21 being an example in fig. 7.

The processor 21 and the memory 22 may be connected by a bus or other means, and fig. 7 illustrates the connection by a bus as an example.

The memory 22, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the depth map processing method in the embodiment of the present application (for example, the image correction module 501, the depth map acquisition module 502, the obstacle distribution acquisition module 503, the time acquisition module 504, and the thread and ring queue establishment module 505 shown in fig. 5). The processor 21 executes various functional applications and data processing of the drone by running the non-volatile software programs, instructions and modules stored in the memory 22, that is, implements the depth map processing method of the above method embodiment.

The memory 22 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the vision chip, and the like. Further, the memory 22 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 22 optionally includes memory located remotely from the processor 21, and these remote memories may be connected to the drone via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The one or more modules are stored in the memory 22 and, when executed by the one or more processors 21, perform the depth map processing method in any of the method embodiments described above, e.g., performing steps S1 to S5 in fig. 3 described above, and implementing the functions of the modules 501-505 in fig. 5 and the modules 501-505 and 5051-5052 in fig. 6.

The product can execute the method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the methods provided in the embodiments of the present application.

Embodiments of the present application provide a non-transitory computer-readable storage medium storing computer-executable instructions, which are executed by one or more processors, such as one of the processors 21 in fig. 7, to enable the one or more processors to perform the depth map processing method in any of the method embodiments described above, such as performing steps S1 to S5 in fig. 3 described above, and implementing the functions of the modules 501-505 in fig. 5 and the modules 501-505 and 5051-5052 in fig. 6.

The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.

Through the above description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by software plus a general hardware platform, and may also be implemented by hardware. It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a computer readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.

Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
