Voronoi clipping of images for post-field generation

Document No.: 991327 | Publication date: 2020-10-20

Description: This technology, Voronoi clipping of images for post-field generation, was designed and created by Quentin Lindsey on 2019-01-25. Its main content comprises a method and system including: defining a geographic area (220, 502); receiving a plurality of images (202, 204, 206, 208, 210, 212, 214, 216); determining a plurality of image points (510); dividing the geographic area into a plurality of image regions based on the plurality of image points (512); and stitching the plurality of images into a combined image based on the plurality of image regions (520).

1. A method, comprising:

defining a geographic area;

receiving a plurality of images, wherein at least a portion of each received image is located within the defined geographic area;

determining a plurality of image points, wherein each image point is a geographic location of a central field of view of each of the received plurality of images;

dividing the geographic area into a plurality of image regions based on the plurality of image points, wherein each pixel in each image region is closer to a closest image point of the plurality of image points than to any other image point of the plurality of image points; and

stitching the plurality of images into a combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region.

2. The method of claim 1, wherein dividing the geographic area into the plurality of image regions further comprises:

generating a Voronoi diagram.

3. The method of claim 1, further comprising:

capturing the plurality of images by an aerial vehicle.

4. The method of claim 3, wherein the aerial vehicle is a vertical take-off and landing (VTOL) Unmanned Aerial Vehicle (UAV).

5. The method of claim 1, further comprising:

filtering one or more of the received plurality of images.

6. The method of claim 5, wherein filtering the one or more images further comprises:

removing the one or more images as a result of at least one of: overexposure, underexposure, distortion, blur, and an error in the camera capturing an image.

7. The method of claim 1, further comprising:

applying one or more image enhancements to one or more of the received plurality of images.

8. The method of claim 7, wherein applying the one or more image enhancements to the one or more images comprises at least one of: brightening, darkening, color correction, white balancing, sharpening, correcting for lens distortion, and adjusting contrast.

9. A system, comprising:

an Unmanned Aerial Vehicle (UAV), comprising:

a processor having an addressable memory;

a sensor in communication with the processor, the sensor configured to capture a plurality of images; and

wherein the processor is configured to:

receive a geographic area;

receive a plurality of images from the sensor;

determine a plurality of image points, wherein each image point is a geographic location of a central field of view of each image;

divide the geographic area into a plurality of image regions based on the plurality of image points, wherein each pixel in each image region is closer to a closest image point than to any other image point; and

stitch the plurality of images into a combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region.

10. The system of claim 9, wherein the UAV further comprises:

a Global Positioning System (GPS) in communication with the processor, wherein the processor determines the geographic location of each image point of the plurality of image points using the GPS.

11. The system of claim 9, wherein the processor is further configured to generate a Voronoi diagram to divide the geographic area into the plurality of image regions.

12. The system of claim 9, wherein the UAV is a vertical take-off and landing (VTOL) UAV.

13. The system of claim 9, wherein the processor is further configured to:

filter one or more images of the received plurality of images, wherein filtering the one or more images further comprises: removing the one or more images as a result of at least one of: overexposure, underexposure, distortion, blur, and an error in the camera capturing an image.

14. The system of claim 9, wherein the processor is further configured to:

apply one or more image enhancements to one or more images of the received plurality of images, wherein applying the one or more image enhancements to the one or more images comprises at least one of: brightening, darkening, color correction, white balancing, sharpening, correcting for lens distortion, and adjusting contrast.

15. The system of claim 9, further comprising:

a controller, the controller comprising:

a processor having an addressable memory, wherein the processor is configured to:

define the geographic area;

send the geographic area to the UAV; and

receive the combined image from the UAV.

16. The system of claim 15, further comprising:

a computing device comprising a processor and an addressable memory, wherein the processor is configured to:

receive the combined image from at least one of the UAV and the controller; and

analyze the combined image.

17. The system of claim 16, wherein analyzing the combined image comprises comparing the combined image to historical combined images.

18. The system of claim 16, wherein the processor of the computing device is further configured to smooth the combined image to account for at least one of: brightness, color, dead pixels, and lens distortion.

19. The system of claim 16, wherein the processor of the computing device is further configured to:

receive the plurality of images; and

stitch the plurality of images into a high-resolution combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region.

20. A method, comprising:

receiving a plurality of images;

determining a plurality of image points, wherein each image point is a geographic location of a central field of view of each of the received plurality of images;

dividing a geographic area into a plurality of image regions based on the plurality of image points, wherein each pixel in each image region is closer to a closest image point of the plurality of image points than to any other image point of the plurality of image points;

stitching the plurality of images into a combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region; and

expanding each image region in the combined image by a set amount such that a respective boundary of each image region overlaps each adjacent image region.

Technical Field

In several embodiments, the present invention relates to image stitching and, more particularly, to the stitching of aerial images.

Background

Image stitching is the process of combining multiple images with overlapping fields of view to create a combined image. There are many ways to combine multiple images. Some of these combinations may lead to distortion, visual artifacts, visible seams, alignment problems, and the like.

Summary

Some embodiments may include a method comprising: defining a geographic area; receiving a plurality of images, wherein at least a portion of each received image is located within the defined geographic area; determining a plurality of image points, wherein each image point is a geographic location of a central field of view of each of the received plurality of images; dividing the geographic area into a plurality of image regions based on the plurality of image points, wherein each pixel in each image region is closer to a closest image point of the plurality of image points than to any other image point of the plurality of image points; and stitching the plurality of images into a combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region.
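The nearest-image-point partition and per-region pixel selection described above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the patented implementation: the function names, the use of single-channel images, and the assumption that image points are given in pixel coordinates of the output grid are all assumptions for illustration.

```python
import numpy as np

def assign_image_regions(height, width, image_points):
    """Label each pixel of the output grid with the index of the
    nearest image point -- a discrete Voronoi partition."""
    ys, xs = np.mgrid[0:height, 0:width]
    pixels = np.stack([ys.ravel(), xs.ravel()], axis=1)      # (H*W, 2)
    pts = np.asarray(image_points, dtype=float)              # (N, 2)
    # Squared distance from every pixel to every image point.
    d2 = ((pixels[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(height, width)

def stitch(images, labels):
    """Build the combined image: each pixel is taken from the image
    whose image point owns that pixel's region."""
    stack = np.stack(images)                                 # (N, H, W)
    h, w = labels.shape
    return stack[labels, np.arange(h)[:, None], np.arange(w)[None, :]]
```

For example, with two images whose image points sit at opposite corners, each corner pixel of the combined image is drawn from the image centered nearest to it.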

In additional method embodiments, dividing the geographic area into the plurality of image regions may further comprise: generating a Voronoi diagram. Additional method embodiments may include capturing the plurality of images by an aerial vehicle. Although any of a variety of aerial vehicles may be used, in some embodiments the aerial vehicle may be a vertical take-off and landing (VTOL) Unmanned Aerial Vehicle (UAV). Similarly, any of a variety of image capture rates may be used; for example, in at least one embodiment, the aerial vehicle captures two images per second.
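Assuming the image points are treated as planar coordinates, such a Voronoi diagram could be generated with SciPy's `scipy.spatial.Voronoi`; the five sample points below are hypothetical and are not taken from the patent.

```python
import numpy as np
from scipy.spatial import Voronoi  # assumes SciPy is available

# Five hypothetical image points: four outer frames and one center frame.
points = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0],
                   [2.0, 2.0], [1.0, 1.0]])
vor = Voronoi(points)                      # one Voronoi region per point
center_cell = vor.regions[vor.point_region[4]]
# The interior point's cell is bounded: no vertex-at-infinity marker (-1).
```

Cells of points on the convex hull are unbounded, so in practice they would be clipped to the defined geographic area before stitching.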

Additional method embodiments may include filtering one or more of the received plurality of images. Filtering the one or more images may further include removing the one or more images due to at least one of: overexposure, underexposure, distortion, blur, and an error in the camera capturing an image. Additional method embodiments may include applying one or more image enhancements to one or more of the received plurality of images. Applying the one or more image enhancements to the one or more images may include at least one of: brightening, darkening, color correction, white balancing, sharpening, correcting for lens distortion, and adjusting contrast.
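One simple way to realize such a filter is shown below: mean brightness screens for under- and overexposure, and low variance of a discrete Laplacian flags blurred frames. This is a minimal sketch; the function name and the threshold values are illustrative assumptions, not values from the patent.

```python
import numpy as np

def keep_image(img, dark=0.05, bright=0.95, blur_var=1e-4):
    """Return False if a frame should be removed (under-/overexposed or
    blurry). `img` is a grayscale array scaled to [0, 1]; thresholds are
    illustrative assumptions."""
    mean = img.mean()
    if mean < dark or mean > bright:       # under- / overexposure
        return False
    # Discrete Laplacian (wrap-around edges); low variance suggests
    # little high-frequency detail, i.e. a blurred frame.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return bool(lap.var() >= blur_var)
```

A nearly black frame fails the exposure test, while a high-contrast frame passes both checks.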

System embodiments may, for example, include: an Unmanned Aerial Vehicle (UAV) comprising: a processor having an addressable memory; and a sensor in communication with the processor, the sensor configured to capture a plurality of images; wherein the processor may be configured to: receive a geographic area; receive a plurality of images from the sensor; determine a plurality of image points, wherein each image point is a geographic location of a central field of view of each image; divide the geographic area into a plurality of image regions based on the plurality of image points, wherein each pixel in each image region is closer to a closest image point than to any other image point; and stitch the plurality of images into a combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region. In some system embodiments, the UAV may further include a Global Positioning System (GPS) in communication with the processor, wherein the processor may use the GPS to determine the geographic location of each of the plurality of image points.

In additional system embodiments, the processor may be further configured to generate a Voronoi diagram to divide the geographic area into the plurality of image regions. The processor may be configured to capture two images per second via the sensor. The UAV may be a vertical take-off and landing (VTOL) UAV. In some embodiments, the processor may be further configured to filter one or more images of the received plurality of images, wherein filtering the one or more images may further comprise: removing the one or more images due to at least one of: overexposure, underexposure, distortion, blur, and an error in the camera capturing an image. In some embodiments, the processor may be further configured to apply one or more image enhancements to one or more images of the received plurality of images, wherein applying the one or more image enhancements to the one or more images may include at least one of: brightening, darkening, color correction, white balancing, sharpening, correcting for lens distortion, and adjusting contrast.
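Of the enhancements listed, white balancing admits a particularly compact illustration. The sketch below uses the classic gray-world assumption (each channel's mean is scaled toward the global mean); the function name and the choice of gray-world balancing are assumptions for illustration, not the patent's method.

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world white balance: scale each channel so its mean matches
    the global mean. `img` is an (H, W, 3) float array in [0, 1]."""
    means = img.reshape(-1, 3).mean(axis=0)        # per-channel means
    gain = means.mean() / np.maximum(means, 1e-8)  # avoid divide-by-zero
    return np.clip(img * gain, 0.0, 1.0)
```

On a uniformly color-cast frame, all three channel means converge to the same gray value after balancing.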

The system may further include a controller comprising: a processor having an addressable memory, wherein the processor is configured to: define the geographic area; send the geographic area to the UAV; and receive the combined image from the UAV. The system may further comprise: a computing device comprising a processor and an addressable memory, wherein the processor is configured to: receive the combined image from at least one of the UAV and the controller; and analyze the combined image. Analyzing the combined image may include comparing the combined image to historical combined images. Analyzing the combined image may include determining at least one of: crop stress, water problems, and estimated crop yield. The processor of the computing device may be further configured to smooth the combined image to account for at least one of: brightness, color, dead pixels, and lens distortion. The processor of the computing device may be further configured to: receive the plurality of images; and stitch the plurality of images into a high-resolution combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region.

Additional method embodiments may include: receiving a plurality of images; determining a plurality of image points, wherein each image point may be a geographic location of a central field of view of each of the received plurality of images; dividing a geographic area into a plurality of image regions based on the plurality of image points, wherein each pixel in each image region is closer to a closest image point of the plurality of image points than to any other image point of the plurality of image points; stitching the plurality of images into a combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region; and expanding each image region in the combined image by a set amount such that a respective boundary of each image region overlaps each adjacent image region.
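The region-expansion step can be modeled as morphological dilation of each region's boolean mask, so that neighboring regions overlap by the set amount for seam blending. The sketch below is an assumed realization with a 4-connected structuring element; the function name and connectivity choice are illustrative, not from the patent.

```python
import numpy as np

def expand_region(mask, amount):
    """Grow a boolean region mask outward by `amount` pixels
    (4-connected dilation), so that adjacent image regions
    overlap by that margin."""
    out = mask.copy()
    for _ in range(amount):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]    # shift down
        grown[:-1, :] |= out[1:, :]    # shift up
        grown[:, 1:] |= out[:, :-1]    # shift right
        grown[:, :-1] |= out[:, 1:]    # shift left
        out = grown
    return out
```

Expanding a single-pixel region by one step yields a 5-pixel cross; each further step grows the Manhattan-distance ball by one.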

