Remote sensing image data generation method, system and equipment

Document No.: 1182804    Publication date: 2020-09-22

Note: This technology, "Remote sensing image data generation method, system and equipment" (一种遥感影像数据生成方法、系统及设备), was designed and created by 汪驰升 (Wang Chisheng), 唐倩迪 (Tang Qiandi), 胡忠文 (Hu Zhongwen), 张德津 (Zhang Dejin), 涂伟 (Tu Wei), 周宝定 (Zhou Baoding) and 李清泉 (Li Qingquan) on 2020-06-12. Abstract: The invention provides a method, system and equipment for generating remote sensing image data. Multiple groups of multi-focus oblique images of a target area are shot by shooting equipment on an airplane; the collected groups of multi-focus oblique images are registered and their overlapping areas fused to obtain multiple groups of spliced images; all the spliced images undergo three-dimensional reconstruction to generate a dense point cloud; a digital surface model is generated from the dense point cloud; the fused groups of spliced images are corrected based on the digital surface model and the position of each shooting device to obtain orthoimages; and the orthoimages are spliced into the remote sensing image data of the target area. The disclosed method is low in cost, good in timeliness and high in data resolution, realizing low-cost collection and processing of image data into high-resolution remote sensing imagery.

1. A method for generating remote sensing image data is characterized by comprising the following steps:

shooting a plurality of groups of multi-focus oblique images of a target area with shooting equipment on an airplane;

carrying out image registration on the collected groups of multi-focus oblique images, and fusing the overlapped areas in each registered group of images to obtain multiple groups of spliced images after registration and fusion;

performing three-dimensional reconstruction on all the spliced images to obtain dense point cloud;

generating a digital surface model from the dense point cloud;

correcting the fused multiple groups of spliced images based on the digital surface model and the position information of each shooting device to obtain corrected orthoimages;

and splicing the corrected orthoimages into the remote sensing image data of the target area.

2. The method for generating remote sensing image data according to claim 1, wherein the step of photographing the plurality of sets of multi-focus oblique images in the target area by the photographing apparatus on the airplane comprises:

and aiming at the same shooting target in the target area, changing the optical axis angle, focal length and focus point of the shooting equipment to acquire a near-focus image and a far-focus image respectively, and taking the acquired near-focus and far-focus images as one multi-focus oblique image group.

3. The method for generating remote sensing image data according to claim 1, wherein said step of image registering the plurality of sets of multi-focus oblique images includes:

and identifying the feature points of a group of images by using the SIFT algorithm, generating a feature vector set for each of the two images, matching the feature points in the two feature vector sets, and deleting erroneous matching points to obtain the matched and corrected result.

4. The method for generating remote sensing image data according to claim 3, wherein the step of fusing overlapped regions in the multi-focus oblique image after each group of image registration to obtain a plurality of groups of spliced images after image registration and fusion comprises:

and performing fusion processing on overlapping areas in the near-focus image and the far-focus image contained in each group of multi-focus oblique images by using a Laplacian pyramid fusion algorithm.

5. The method for generating remote sensing image data according to claim 4, wherein the step of performing fusion processing on the overlapping regions existing in the near-focus image and the far-focus image included in each group of multi-focus oblique images by using the Laplacian pyramid fusion algorithm includes:

respectively performing Gaussian pyramid decomposition and Laplacian pyramid decomposition on the near-focus image and the far-focus image in the same group of multi-focus oblique images to obtain the Gaussian pyramid and the Laplacian pyramid of each of the two images; the Gaussian pyramid has N+1 layers and the Laplacian pyramid has N layers;

building an (N+1)-layer Gaussian pyramid from a preset binary mask;

taking each layer of the (N+1)-layer Gaussian pyramid built from the binary mask as a weight, and adding the corresponding layers of the Laplacian pyramids of the near-focus image and the far-focus image in the same group of multi-focus oblique images to obtain a first pyramid;

adding the (N+1)-th Gaussian pyramid layers of the near-focus image and the far-focus image in the same group of multi-focus oblique images to obtain a first fused image;

and reconstructing the first pyramid from the first fused image to obtain a fused spliced image.

6. The method for generating remote sensing image data according to claim 2, wherein the step of performing three-dimensional reconstruction on all the stitched images to obtain dense point cloud comprises:

acquiring flight track data of the airplane, obtaining GPS (global positioning system) information for each second of the flight period from the track data, and matching the GPS information at the flight time with the shooting time of the near-focus image to obtain the longitude, latitude and altitude of the shooting equipment when the image was shot;

and performing three-dimensional reconstruction on the fused multiple groups of spliced images based on a three-dimensional reconstruction algorithm to obtain three-dimensional dense point cloud.

7. The method for generating remote sensing image data according to claim 6, wherein the step of performing three-dimensional reconstruction on the fused groups of spliced images based on a three-dimensional reconstruction algorithm to obtain a three-dimensional dense point cloud comprises:

extracting image feature points of each group of spliced images, matching the feature points between every two adjacent spliced images, removing repeated feature point matching pairs, and extracting common feature matching points;

connecting the common feature matching points to form connection tracks;

estimating the camera extrinsic parameters of the initial matching pair, and triangulating the connection tracks to obtain initial 3D points;

performing bundle adjustment optimization on the spliced images to obtain estimated camera parameters and scene geometry, yielding a sparse 3D point cloud;

and optimizing the position information and the EXIF direction data of the shooting equipment by using the ground control point, and interpolating the sparse 3D point cloud according to the optimized position information, the EXIF direction data and the DEM ground elevation data of the shooting equipment to generate the dense point cloud.

8. The method for generating remote-sensing image data according to claim 7, wherein after the step of stitching the corrected ortho images into the remote-sensing image data of the target region, the method further comprises:

and calculating the spatial resolution of the remote sensing image data according to the flying height of the airplane, the lens focal length of the shooting equipment, the pixel size and the sensor size.

9. A remote sensing image data generation system, comprising: the system comprises a shooting device arranged on an airplane, a processor and a storage medium in communication connection with the processor, wherein the storage medium is suitable for storing a plurality of instructions; the processor is adapted to call instructions in the storage medium to perform a method of generating remote sensing image data according to any of the preceding claims 1-8.

10. A computer readable storage medium, storing one or more programs which are executable by one or more processors to implement the steps of the method for generating remote sensing image data according to any one of claims 1 to 8.

Technical Field

The invention relates to the technical field of geographic mapping, in particular to a method, a system and equipment for generating remote sensing image data.

Background

Remote sensing technology originated in the 1960s. It collects surface radiation and electromagnetic wave information of a target object at long range through various sensing instruments, then processes and images that information, meeting the requirements for detecting and identifying target scenes; it is an important acquisition mode for geographic information.

The task of collecting geographic information usually requires fixing a sensor on a platform such as a balloon, airplane, satellite, spacecraft or space laboratory, and then transmitting and processing the data into usable form. Traditional remote sensing technologies suffer from high cost, long cycles, poor timeliness, and many factors affecting data quality.

Therefore, the prior art is subject to further improvement.

Disclosure of Invention

In view of the defects in the prior art, the invention aims to provide a method, a system and equipment for generating remote sensing data, overcoming the defects of existing remote sensing technology: high geographic information acquisition cost, long data transmission and processing cycles, poor timeliness, and many factors affecting data quality.

The technical scheme adopted by the invention for solving the technical problem is as follows:

in a first aspect, the present embodiment discloses a method for generating remote sensing image data, including the steps of:

shooting a plurality of groups of multi-focus oblique images of a target area with shooting equipment on an airplane;

carrying out image registration on the collected groups of multi-focus oblique images, and fusing the overlapped areas in each registered group of images to obtain multiple groups of spliced images after registration and fusion;

performing three-dimensional reconstruction on all the spliced images to obtain dense point cloud;

generating a digital surface model from the dense point cloud;

correcting the fused multiple groups of spliced images based on the digital surface model and the position information of each shooting device to obtain corrected orthoimages;

and splicing the corrected orthoimages into the remote sensing image data of the target area.

Optionally, the step of shooting multiple sets of multi-focus oblique images in the target area by the shooting device on the airplane includes:

and aiming at the same shooting target in the target area, changing the optical axis angle, focal length and focus point of the shooting equipment to acquire a near-focus image and a far-focus image respectively, and taking the acquired near-focus and far-focus images as one multi-focus oblique image group.

Optionally, the step of performing image registration on the multiple sets of multi-focus oblique images includes:

and identifying the feature points of a group of images by using the SIFT algorithm, generating a feature vector set for each of the two images, matching the feature points in the two feature vector sets, and deleting erroneous matching points to obtain the matched and corrected result.

Optionally, the step of fusing the overlapped regions in the multi-focus oblique images after each group of images are registered to obtain multiple groups of spliced images after image registration and fusion includes:

and performing fusion processing on overlapping areas in the near-focus image and the far-focus image contained in each group of multi-focus oblique images by using a Laplacian pyramid fusion algorithm.

Optionally, the step of performing fusion processing on the overlapping regions existing in the near-focus image and the far-focus image contained in each group of multi-focus oblique images by using the laplacian pyramid fusion algorithm includes:

respectively performing Gaussian pyramid decomposition and Laplacian pyramid decomposition on the near-focus image and the far-focus image in the same group of multi-focus oblique images to obtain the Gaussian pyramid and the Laplacian pyramid of each of the two images; the Gaussian pyramid has N+1 layers and the Laplacian pyramid has N layers;

building an (N+1)-layer Gaussian pyramid from a preset binary mask;

taking each layer of the (N+1)-layer Gaussian pyramid built from the binary mask as a weight, and adding the corresponding layers of the Laplacian pyramids of the near-focus image and the far-focus image in the same group of multi-focus oblique images to obtain a first pyramid;

adding the (N+1)-th Gaussian pyramid layers of the near-focus image and the far-focus image in the same group of multi-focus oblique images to obtain a first fused image;

and reconstructing the first pyramid from the first fused image to obtain multiple groups of fused spliced images.

Optionally, the step of performing three-dimensional reconstruction on all the stitched images to obtain a dense point cloud includes:

acquiring flight track data of the airplane, obtaining GPS (global positioning system) information for each second of the flight period from the track data, and matching the GPS information at the flight time with the shooting time of the near-focus image to obtain the longitude, latitude and altitude of the shooting equipment when the image was shot;

and performing three-dimensional reconstruction on the fused multiple groups of spliced images based on a three-dimensional reconstruction algorithm to obtain three-dimensional dense point cloud.

Optionally, the step of performing three-dimensional reconstruction on the fused multiple groups of stitched images based on the three-dimensional reconstruction algorithm to obtain a three-dimensional dense point cloud includes:

extracting image feature points of each group of spliced images, matching the feature points between every two adjacent spliced images, removing repeated feature point matching pairs, and extracting common feature matching points;

connecting the common feature matching points to form connection tracks;

estimating the camera extrinsic parameters of the initial matching pair, and triangulating the connection tracks to obtain initial 3D points;

performing bundle adjustment optimization on the spliced images to obtain estimated camera parameters and scene geometry, yielding a sparse 3D point cloud;

and optimizing the position information and EXIF direction data of the shooting equipment by using ground control points, and interpolating the sparse 3D point cloud according to the optimized position information, the EXIF direction data and DEM ground elevation data to generate the dense point cloud.

Optionally, after the step of splicing the corrected ortho images into the remote sensing image data of the target region, the method further includes:

and obtaining the spatial resolution corresponding to the remote sensing image data according to the flying height of the airplane, the lens focal length of the shooting equipment, the pixel size and the sensor size.

In a second aspect, the present embodiment further discloses a remote sensing image data generating system, including: the system comprises a shooting device arranged on an airplane, a processor and a storage medium in communication connection with the processor, wherein the storage medium is suitable for storing a plurality of instructions; the processor is suitable for calling instructions in the storage medium to execute the method for generating the remote sensing image data.

In a third aspect, the present embodiment further discloses a computer-readable storage medium, where the computer-readable storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the steps of the method for generating remote sensing image data.

In the method, system and equipment for generating remote sensing image data provided by the invention, a shooting device on an airplane shoots multiple groups of multi-focus oblique images of a target area; the collected groups of multi-focus oblique images are registered and their overlapping areas fused to obtain multiple groups of spliced images; all the spliced images undergo three-dimensional reconstruction to obtain a dense point cloud; the dense point cloud is corrected, and a digital surface model is generated from the corrected dense point cloud; the fused groups of spliced images are corrected based on the digital surface model and the position of each shooting device to obtain corrected orthoimages; and the corrected orthoimages are spliced into the remote sensing image data of the target area. The disclosed method is low in cost, good in timeliness and high in data resolution, realizing low-cost collection and processing of image data into high-resolution remote sensing imagery.

Drawings

Fig. 1 is a flowchart illustrating steps of a method for generating remote sensing image data according to an embodiment of the present invention;

FIG. 2 is a schematic diagram illustrating an image acquisition process according to an embodiment of the present invention;

FIG. 3 is a schematic diagram illustrating a calculation principle of resolution of a remote sensing image according to an embodiment of the present invention;

fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

In the prior art, remote sensing generally relies on sensors to collect surface radiation and electromagnetic wave information of a target object, so the data-collection instruments are expensive. The large amount of data collected by the sensors takes a long time and considerable manpower to process, so the timeliness of the resulting remote sensing data is poor, and the requirements of easy data collection and data timeliness cannot be met.

Based on the above problems in the prior art, the inventors observed that with the development of the global economy, commercial flights worldwide are numerous and revisit intervals are short, and with the popularization of portable cameras, airline passengers can often take high-resolution images of the earth's surface from a high-altitude perspective. If such image data can be collected and processed into usable remote sensing data, the shortcomings of traditional remote sensing technology can be overcome to a certain extent, saving enormous manpower and material resources.

This embodiment discloses a remote sensing image data generation method: an airplane passenger uses a camera to shoot a continuous series of multi-focus oblique image groups in flight; each group is registered with the SIFT algorithm, and the overlapping parts of each registered group are fused with the Laplacian pyramid fusion algorithm; all spliced and fused image groups are three-dimensionally reconstructed with an SfM algorithm to generate a dense point cloud; the dense point cloud is interpolated and de-noised to generate a digital surface model; the image groups are corrected into orthoimages based on the digital surface model and the position of each camera; and the orthoimages are spliced to obtain the remote sensing image data of the target area.

The invention will be further explained by the description of the embodiments with reference to the drawings.

The embodiment discloses a method for generating remote sensing image data, as shown in fig. 1, comprising the steps of:

step S1, a shooting device on the airplane shoots a plurality of sets of multi-focus oblique images in the target area.

Because camera equipment is now commonplace, passengers on an airplane often carry mobile terminals with cameras or high-resolution cameras, so they can photograph the ground through the airplane window; the captured images contain ground information and, being taken obliquely downward from the airplane, are oblique images. To acquire more accurate ground information, the images shot in this step include images of the same shooting target at different focus points, i.e. a near-focus image and a far-focus image are shot respectively. Images acquired of the same shooting target form one group, the groups shot of different shooting targets form the multiple groups of multi-focus oblique images, and the different shooting targets together cover the whole target area.

Further, the step of shooting the multiple sets of multi-focus oblique images in the target area by the shooting device on the airplane comprises the following steps:

and aiming at the same shooting target in the target area, changing the optical axis angle, the focal length and the focal point of the shooting equipment, respectively acquiring a near focusing image and a far focusing image, and taking the acquired near focusing image and the acquired far focusing image as a multi-focus inclined image group.

The shooting device in this embodiment may be a smartphone or a consumer camera. Referring to fig. 2, when taking pictures, a passenger shoots two images of the same shooting target in the target area, a near-focus image IMGA(i) and a far-focus image IMGB(i), by changing the angle, focal length and focus point of the camera, and treats them as an image group IMGC(i), where i denotes the group index; the i groups of pictures must together cover the target area.

In one embodiment, the near-focus image IMGA(i) is taken with the optical axis at an angle of about 3° + FOV/2 from the ground plumb line, where 3° is the camera tilt angle and FOV is the camera field of view, with the focus point at the lower quarter of the image. The far-focus image IMGB(i) is taken with the optical axis at about 3° + FOV from the plumb line, ensuring that the overlapping part of the near- and far-focus images occupies about half of each image; the focus point is again at the lower quarter of the image, and the lens focal length is adjusted appropriately.
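As a quick illustration of this shooting geometry, the two optical-axis angles follow directly from the field of view; the FOV value below is an assumed example, not a figure from the embodiment.

```python
# Optical-axis angles (measured from the ground plumb line) for the two
# shots in one multi-focus group, per the geometry described above.
# The 60-degree FOV is an assumed example value.
def shot_angles(fov_deg, tilt_deg=3.0):
    near = tilt_deg + fov_deg / 2   # near-focus image IMGA(i)
    far = tilt_deg + fov_deg        # far-focus image IMGB(i)
    return near, far

near, far = shot_angles(fov_deg=60.0)
print(near, far)  # 33.0 63.0
```

Note that the far-focus axis is offset from the near-focus axis by exactly FOV/2, which is what makes the two fields of view overlap by about half.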

And step S2, carrying out image registration on the collected multiple groups of multi-focus oblique images, and fusing overlapped areas in the multi-focus oblique images after each group of images are registered to obtain multiple groups of spliced images after image registration and fusion.

The multiple groups of multi-focus oblique images captured in step S1 first undergo image registration, which aligns different images of the same scene spatially; the registered images then undergo image fusion, which smooths the overlapping regions of two or more registered images so that the transition is natural.

Specifically, the multi-focus oblique image stitching comprises two steps: image registration and image fusion.

(1) Image registration

The step of image registration of the plurality of groups of multi-focus oblique images comprises:

and identifying the feature points of a group of images with an image recognition algorithm, obtaining for each of the two images a feature vector set containing the feature vectors of its feature points, matching the feature points in the two feature vector sets, and deleting erroneous matching points to obtain the matched and corrected result.

Specifically, the SIFT algorithm is used to identify the feature points of each group of multi-focus oblique images and generate the feature vector sets of the two images; the two feature vector sets are matched with the best-bin-first (BBF) algorithm, and the RANSAC algorithm then eliminates mismatched points for matching correction.

(2) Image fusion

The step of fusing the overlapped regions in the multi-focus oblique images after each group of images are registered to obtain a plurality of groups of spliced images after the images are registered and fused comprises the following steps:

and performing fusion processing on overlapping areas in the near-focus image and the far-focus image contained in each group of multi-focus oblique images by using a Laplacian pyramid fusion algorithm.

Specifically, the fusion processing step includes:

respectively performing Gaussian pyramid decomposition and Laplacian pyramid decomposition on the near-focus image and the far-focus image in the same group of multi-focus oblique images to obtain the Gaussian pyramid and the Laplacian pyramid of each of the two images; the Gaussian pyramid has N+1 layers and the Laplacian pyramid has N layers.

Building an (N+1)-layer Gaussian pyramid from a preset binary mask;

taking each layer of the binary-mask Gaussian pyramid as a weight, adding the corresponding layers of the Laplacian pyramids of the near-focus and far-focus images in the same group of multi-focus oblique images to obtain a first pyramid, and adding the (N+1)-th layers of the Gaussian pyramids of the near-focus and far-focus images to obtain a first fused image; wherein N is a positive integer;

and reconstructing the first pyramid from the first fused image to obtain multiple groups of fused spliced images.

The overlapped area of IMGA(i) and IMGB(i) in each group is smoothed with the Laplacian pyramid fusion algorithm (LPB algorithm), in the following steps:

Firstly, the two images in each group (the same shooting target captured at different focus points) are each decomposed into a Laplacian pyramid, with the number of layers N as a parameter.

The Gaussian pyramid decomposition is the basis of the Laplacian pyramid decomposition, and the i-th layer of the Laplacian pyramid is defined as:

    L_i = G_i − UP(G_{i+1}) ⊗ g_{5×5}

where L_i denotes the i-th Laplacian layer and G_i the i-th Gaussian layer; the UP() operation maps the pixel at position (x, y) in the source image (both source and target here are Gaussian layers) to position (2x+1, 2y+1) in the target image, i.e. it up-samples the source image; ⊗ denotes convolution; and g_{5×5} is a 5×5 Gaussian kernel. The formula means that the i-th Laplacian layer L_i equals the i-th Gaussian layer G_i minus the result of up-sampling and Gaussian-blurring the (i+1)-th Gaussian layer G_{i+1}.

Secondly, a binary mask representing the fusion position, i.e. the overlapping part of the image group, is generated and passed in.

Thirdly, a Gaussian pyramid with N+1 layers is built for the binary mask.

Fourthly, using the layers of the binary-mask Gaussian pyramid as weights, the Laplacian pyramids of IMGA(i) and IMGB(i) are added to obtain a new first pyramid, and the (N+1)-th Gaussian pyramid layers of the two images are added to obtain a first fused image IMG(i)_1.

Fifthly, after the new image information of the first pyramid is obtained, the first pyramid is reconstructed to obtain the final spliced image.

Specifically, the first fused image IMG(i)_1 is up-sampled and added to the top layer of the first pyramid (i.e. its N-th layer) to obtain a second fused image IMG(i)_2; IMG(i)_2 is up-sampled and the result added to layer N−1 of the first pyramid to obtain IMG(i)_3; the process repeats until IMG(i)_N is added to layer 1 of the first pyramid to obtain IMG(i)_{N+1}. The image IMG(i)_{N+1} obtained here is the final fusion result of the two images, i.e. the spliced image.

And step S3, performing three-dimensional reconstruction on all the spliced images to obtain dense point clouds.

And establishing a three-dimensional point cloud of the target area based on the spliced image obtained in the step, namely the dense point cloud.

Specifically, the step of performing three-dimensional reconstruction on all the spliced images to obtain the dense point cloud includes:

extracting image feature points of each group of spliced images, matching the feature points between every two adjacent spliced images, removing repeated feature point matching pairs, and extracting common feature matching points;

connecting the common feature matching points to form connection tracks;

estimating the camera extrinsic parameters of the initial matching pair, and triangulating the connection tracks to obtain initial 3D points;

performing bundle adjustment optimization on the spliced images to obtain estimated camera parameters and scene geometry, yielding a sparse 3D point cloud;

and optimizing the position information and the EXIF direction data of the shooting equipment by using the ground control point, and interpolating the sparse 3D point cloud according to the optimized position information, the EXIF direction data and the DEM ground elevation data of the shooting equipment to generate the dense point cloud.

In one embodiment, after each multi-focus oblique image group is fused, the position information and camera intrinsic parameters of the image group are taken from the near-focus image IMGA(i) of that group, and the image positions and camera intrinsics are initialized accordingly.

The method comprises the following specific steps:

The flight track data of the flight is downloaded from a flight tracking service website, GPS information for each second of the flight period is acquired by linear interpolation, and this information is matched with the shooting time information of the image IMGA(i) to obtain the longitude, latitude and altitude of the camera when the IMGC(i) group of images was shot.
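The per-second track interpolation can be sketched as follows (timestamps expressed as seconds since a common epoch; all names are hypothetical):

```python
import numpy as np

def camera_position(photo_time, track_times, lats, lons, alts):
    """Linearly interpolate a per-second flight track (e.g. downloaded
    from a flight tracking service) to the EXIF capture time of
    IMGA(i), giving the camera's longitude, latitude and altitude."""
    lat = np.interp(photo_time, track_times, lats)
    lon = np.interp(photo_time, track_times, lons)
    alt = np.interp(photo_time, track_times, alts)
    return lat, lon, alt
```

Interpolating each coordinate independently is adequate over one-second track intervals; great-circle interpolation would only matter for much sparser fixes.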

The fused image groups are processed based on the Structure-from-Motion (SfM) algorithm from the field of computer vision, which performs three-dimensional reconstruction of time-series two-dimensional images based on the multi-view geometry principle. The algorithm mainly comprises the following processes: extracting and matching image feature points; estimating camera parameters; and generating a 3D point cloud.

Specifically, image feature point extraction and matching proceeds as follows. First, matching points between every two adjacent spliced images are found to match the feature points between adjacent images, repeated feature point matching pairs are removed, and the common feature matching points are extracted. Second, the camera position corresponding to each spliced image is estimated and a sparse point cloud model is constructed: the common feature matching points are connected to form connecting tracks; the external parameters of the initial matching pair are estimated and the connecting tracks are triangulated to obtain initialized 3D points; and bundle adjustment is performed on the spliced images to obtain estimated camera parameters and scene geometry information, yielding the sparse 3D point cloud. Then several ground control points are selected and used to optimize the camera position and direction data to guarantee geographic accuracy, and to improve the accuracy of the computed internal and external camera parameters. Finally, the sparse 3D point cloud is interpolated according to the optimized camera positions and the added DEM ground elevation data to construct the dense point cloud.

In one implementation, points with obvious features, such as road intersections, are manually selected as ground control points; the coordinates of the corresponding points are obtained from Google Earth; the ground elevation of each control point is extracted from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) 30-meter-resolution Global Digital Elevation Model (GDEM); and the geographic information of the control points is input to complete the correction of the point cloud.

And step S4, generating a digital surface model according to the dense point cloud.

The dense point cloud obtained by the SfM algorithm is interpolated and noise points are manually deleted; a Digital Surface Model (DSM) is then generated in raster image form, and each group of fused images is corrected into an orthoimage based on the DSM, the camera positions and the control point positions.
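A minimal sketch of gridding the dense point cloud into a raster DSM (keeping the highest point per cell; a real pipeline would additionally interpolate empty cells and filter noise points, and the function name is hypothetical):

```python
import numpy as np

def rasterize_dsm(points, cell_size):
    """Grid scattered (x, y, z) points from the dense point cloud into
    a raster DSM, keeping the highest z per cell: a surface model
    retains tree tops and rooftops, unlike a terrain model."""
    x0, y0 = points[:, 0].min(), points[:, 1].min()
    cols = ((points[:, 0] - x0) // cell_size).astype(int)
    rows = ((points[:, 1] - y0) // cell_size).astype(int)
    dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if np.isnan(dsm[r, c]) or z > dsm[r, c]:
            dsm[r, c] = z
    return dsm
```

Cells with no points remain NaN and would be filled by interpolation before orthorectification.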

And step S5, correcting the fused multi-group spliced images based on the digital surface model and the position information of each shooting device to obtain corrected orthoimages.

The multiple groups of images are rectified into orthoimages based on the DSM and the respective camera positions. To reduce distortion, the GDEM is used in place of the DEM generated from the dense point cloud.

And step S6, splicing the corrected orthoimages into remote sensing image data of the target area.

Since each corrected orthoimage covers only part of the scene, the multiple orthoimages need to be spliced together to obtain the remote sensing image data of the entire target area.

Further, after this step, the method includes calculating the spatial resolution of the remote sensing image data. Specifically, the step of calculating the spatial resolution includes:

and obtaining the spatial resolution corresponding to the remote sensing image data according to the flying height of the airplane, the lens focal length of the shooting equipment, the pixel size and the sensor size.

Referring to fig. 3, the spatial resolution may be calculated according to the Ground Sample Distance (GSD) formula, which is as follows:

GSD = H × a / f (2)

In equation (2), the flying height H is approximated by the distance from the camera to the center of the ground scene; the other 3 parameters (the sensor size, the lens focal length f and the pixel size a) are obtained from the image EXIF header file.

The spatial resolution of an oblique image changes with distance; the average of the maximum and minimum resolutions calculated for the source images IMGA(i) and IMGB(i) is taken as the spatial resolution.

CSD(Ai) = [GSDmax(Ai) + GSDmin(Ai)] / 2 (3)

CSD(Bi) = [GSDmax(Bi) + GSDmin(Bi)] / 2 (4)

In equations (3) and (4), CSD(Ai) and CSD(Bi) represent the spatial resolutions of IMGA(i) and IMGB(i), respectively. The spatial resolution of the orthoimage is taken as the average of the two.
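The GSD and CSD calculations can be sketched as follows (a sketch only; the unit conventions of flying height in meters, focal length in millimeters and pixel size in micrometers are assumptions, not stated in the text):

```python
def gsd(flying_height_m, focal_length_mm, pixel_size_um):
    """Ground sample distance per equation (2): GSD = H * a / f,
    with units converted so the result is meters per pixel."""
    return flying_height_m * (pixel_size_um * 1e-6) / (focal_length_mm * 1e-3)

def oblique_resolution(gsd_max, gsd_min):
    """CSD of an oblique image, equations (3)-(4): the mean of the
    maximum and minimum GSD over the image."""
    return (gsd_max + gsd_min) / 2.0

# The orthoimage resolution is then the average of CSD(Ai) and CSD(Bi).
```

For example, at 10,000 m with a 50 mm lens and 5 µm pixels the GSD is 1 m per pixel.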

The embodiment also discloses a remote sensing image data generation system, which includes: the system comprises a shooting device arranged on an airplane, a processor and a storage medium in communication connection with the processor, wherein the storage medium is suitable for storing a plurality of instructions; the processor is suitable for calling instructions in the storage medium to execute the method for generating the remote sensing image data.

Specifically, as shown in fig. 4, the remote sensing image data generating system includes a plurality of shooting devices, at least one processor (processor) 20 and a memory (memory) 22, and may further include a display screen 21, a communication interface (Communications Interface) 23 and a bus 24. The processor 20, the display screen 21, the memory 22 and the communication interface 23 can communicate with each other through the bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. The processor 20 may call logic instructions in the memory 22 to perform the methods in the embodiments described above.

Furthermore, the logic instructions in the memory 22 may be implemented in software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.

The memory 22, as a computer-readable storage medium, may be configured to store software programs and computer-executable programs, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 executes functional applications and data processing, i.e., implements the methods in the above-described embodiments, by running the software programs, instructions or modules stored in the memory 22.

The memory 22 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the terminal device. Further, the memory 22 may include a high-speed random access memory and may also include a non-volatile memory, for example a variety of media that can store program code, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk; transient storage media may also be used.

The embodiment also discloses a computer readable storage medium, wherein the computer readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to realize the steps of the method for generating the remote sensing image data.

The specific processes loaded and executed by the instruction processors in the storage medium and the terminal are described in detail in the method, and are not described in detail herein.

The invention provides a remote sensing image data generation method, system and device: multiple groups of multi-focus oblique images of a target area are shot by shooting devices on an airplane; image registration is performed on the collected groups of multi-focus oblique images and the overlapping areas are fused to obtain multiple groups of spliced images; three-dimensional reconstruction is performed on all the spliced images to obtain a dense point cloud; the dense point cloud is corrected, and a digital surface model is generated based on the corrected dense point cloud; the fused groups of spliced images are corrected based on the digital surface model and the position information of each shooting device to obtain corrected orthoimages; and the corrected orthoimages are spliced into the remote sensing image data of the target area. The method disclosed in this embodiment has low cost, good timeliness and high data resolution, realizing low-cost collection and processing of image data to obtain high-resolution remote sensing image data.

Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
