Visual positioning method and device

Document No. 565702 · Published 2021-05-18

Note: This invention, Visual positioning method and device (视觉定位方法和装置), was designed and created by 冯文森, 张欢, 曹军, 唐忠伟 and 李江伟 on 2020-02-27. Abstract: This application relates to the field of Artificial Intelligence (AI) and in particular provides a visual positioning method and device. The visual positioning method of the embodiments of the application comprises: acquiring a collected first image; determining a first pose according to the first image and an aerial photography model; judging whether a ground model corresponding to the first pose exists in an air-ground model; and, when such a ground model exists, determining a second pose according to the ground model. The air-ground model comprises the aerial photography model and a ground model mapped into the aerial photography model, the coordinate system of the ground model is the same as that of the aerial photography model, and the positioning accuracy of the second pose is higher than that of the first pose. The embodiments of the application can improve the success rate and accuracy of visual positioning.

1. A visual positioning method, comprising:

acquiring a first collected image;

determining a first pose according to the first image and an aerial photography model;

judging whether a ground model corresponding to the first pose exists in an air-ground model;

when the ground model corresponding to the first pose exists, determining a second pose according to the ground model;

wherein the air-ground model comprises the aerial photography model and the ground model mapped into the aerial photography model, a coordinate system of the ground model is the same as that of the aerial photography model, and the positioning accuracy of the second pose is higher than that of the first pose.

2. The method of claim 1, wherein determining a first pose from the first image and an aerial model comprises:

determining an initial pose set according to position information and magnetometer angle deflection information of the terminal device corresponding to the first image;

acquiring the skyline and building line-plane semantic information of the first image according to the first image;

determining N initial poses in the initial pose set according to the skyline of the first image and the aerial photography model;

determining the first pose according to the building line-plane semantic information, the N initial poses and the aerial photography model;

wherein N is an integer greater than 1.

3. The method of claim 2, further comprising:

acquiring at least one collected second image, wherein the viewing angles of the first image and the at least one second image are different;

determining N optimized initial poses according to the N initial poses, the skyline of the first image, the skyline of the at least one second image, and the relative poses between the first image and the at least one second image;

wherein determining the first pose according to the building line-plane semantic information, the N initial poses, and the aerial photography model comprises:

determining the first pose according to the building line-plane semantic information, the N optimized initial poses and the aerial photography model.

4. The method of claim 3, further comprising:

determining N optimized initial poses according to the N initial poses and the relative poses between the first image and the at least one second image.

5. The method of any of claims 2 to 4, wherein the set of initial poses comprises a plurality of sets of initial poses, each set of initial poses comprising initial position information and initial magnetometer angle deflection information, the initial position information belonging to a first threshold range determined from the position information of the terminal device, the initial magnetometer angle deflection information belonging to a second threshold range determined from the magnetometer angle deflection information of the terminal device.

6. The method of claim 5, wherein the center value of the first threshold range is position information of the terminal device, and wherein the center value of the second threshold range is magnetometer angle deflection information of the terminal device.

7. The method of claim 5 or 6, wherein determining N initial poses in the set of initial poses from the skyline of the first image and the aerial model comprises:

rendering a skyline from each group of initial poses and the aerial photography model, to acquire the skyline corresponding to each group of initial poses;

calculating the matching degree between the skyline corresponding to each group of initial poses and the skyline of the first image, to determine the matching degree of each group of initial poses;

and determining N initial poses in the initial pose set according to the matching degree of each group of initial poses, wherein the N initial poses are the first N initial poses in the initial pose set when sorted by matching degree in descending order.

8. The method according to any one of claims 1 to 7, further comprising:

constructing the air-ground model based on a plurality of third images used for constructing the ground model, and on the aerial photography model.

9. The method of claim 8, wherein constructing the air-ground model based on the plurality of third images used for constructing the ground model and on the aerial photography model comprises:

determining the poses of the plurality of third images in the aerial photography model according to the aerial photography model;

determining the air-ground model according to the poses of the plurality of third images in the aerial photography model and the poses of the plurality of third images in the ground model.

10. The method of claim 9, wherein determining the air-ground model from the poses of the plurality of third images in the aerial photography model and the poses of the plurality of third images in the ground model comprises:

determining a plurality of coordinate transformation relations according to the poses of the plurality of third images in the aerial photography model and the poses of the plurality of third images in the ground model;

determining semantic reprojection errors of the third images in the aerial photography model according to the plurality of coordinate transformation relations, and selecting an optimal coordinate transformation relation from the plurality of coordinate transformation relations as the coordinate transformation relation of the air-ground model;

wherein the optimal coordinate transformation relation is the coordinate transformation relation that minimizes the semantic reprojection error.

11. The method according to any one of claims 1 to 10, further comprising:

determining virtual object description information according to the first pose or the second pose;

and sending the virtual object description information to a terminal device, wherein the virtual object description information is used for displaying a corresponding virtual object on the terminal device.

12. A visual positioning method, comprising:

acquiring a first image and displaying the first image on a user interface, wherein the first image comprises a shot skyline;

sending the first image to a server;

receiving first virtual object description information sent by the server, wherein the first virtual object description information is determined according to a first pose, and the first pose is determined according to the skyline and building line-plane semantic information of the first image and an aerial photography model;

and displaying the virtual object corresponding to the first virtual object description information on the user interface in an overlapping manner.

13. The method of claim 12, wherein prior to acquiring the first image, the method further comprises:

displaying first prompt information on the user interface, wherein the first prompt information is used for prompting a user to shoot the skyline.

14. The method according to claim 12 or 13, characterized in that the method further comprises:

receiving an indication message sent by the server, wherein the indication message is used for indicating that a ground model corresponding to the first pose exists in an air-ground model, the ground model is used for determining a second pose, the air-ground model comprises an aerial photography model and the ground model mapped into the aerial photography model, and a coordinate system of the ground model is the same as that of the aerial photography model;

and displaying second prompt information on the user interface according to the indication message, wherein the second prompt information is used for prompting the user of the selectable operation modes.

15. The method of claim 14, further comprising:

receiving a repositioning instruction input by a user through the user interface or on a hardware button, and sending a positioning optimization request message to the server in response to the repositioning instruction, wherein the positioning optimization request message is used for requesting to calculate the second pose;

receiving second virtual object description information sent by the server, wherein the second virtual object description information is determined according to a second pose, the second pose is determined according to a ground model corresponding to the first pose, and the positioning accuracy of the second pose is higher than that of the first pose.

16. A visual positioning device, comprising:

a processing module, used for acquiring, through a transceiver module, a first image collected by a terminal device;

the processing module is further used for determining a first pose according to the first image and an aerial photography model;

the processing module is further used for judging whether a ground model corresponding to the first pose exists in an air-ground model, and, when the ground model corresponding to the first pose exists, determining a second pose according to the ground model;

wherein the air-ground model comprises the aerial photography model and the ground model mapped into the aerial photography model, a coordinate system of the ground model is the same as that of the aerial photography model, and the positioning accuracy of the second pose is higher than that of the first pose.

17. The apparatus of claim 16, wherein the processing module is configured to:

determine an initial pose set according to position information and magnetometer angle deflection information of the terminal device corresponding to the first image;

acquire the skyline and building line-plane semantic information of the first image according to the first image;

determine N initial poses in the initial pose set according to the skyline of the first image and the aerial photography model;

and determine the first pose according to the building line-plane semantic information, the N initial poses and the aerial photography model;

wherein N is an integer greater than 1.

18. The apparatus according to claim 17, wherein the processing module is further configured to obtain, through the transceiver module, at least one second image acquired by a terminal device, where a viewing angle of the first image is different from that of the at least one second image;

the processing module is further used for determining N optimized initial poses according to the N initial poses, the skyline of the first image and the skyline of the at least one second image, and the relative poses between the first image and the at least one second image;

and determining the first pose according to the building line-plane semantic information, the N optimized initial poses and the aerial photography model.

19. The apparatus of claim 18, wherein the processing module is further configured to determine N optimized initial poses according to the N initial poses and the relative poses between the first image and the at least one second image.

20. The apparatus of any of claims 17 to 19, wherein the set of initial poses comprises a plurality of sets of initial poses, each set of initial poses comprising initial position information and initial magnetometer angle deflection information, the initial position information belonging to a first threshold range determined from the position information of the terminal device, the initial magnetometer angle deflection information belonging to a second threshold range determined from the magnetometer angle deflection information of the terminal device.

21. The apparatus of claim 20, wherein the center value of the first threshold range is position information of the terminal device, and wherein the center value of the second threshold range is magnetometer angle deflection information of the terminal device.

22. The apparatus of claim 20 or 21, wherein the processing module is configured to: render a skyline from each group of initial poses and the aerial photography model, to acquire the skyline corresponding to each group of initial poses; calculate the matching degree between the skyline corresponding to each group of initial poses and the skyline of the first image, to determine the matching degree of each group of initial poses; and determine N initial poses in the initial pose set according to the matching degree of each group of initial poses, wherein the N initial poses are the first N initial poses in the initial pose set when sorted by matching degree in descending order.

23. The apparatus of any one of claims 16 to 22, wherein the processing module is further configured to construct the air-ground model based on a plurality of third images used for constructing the ground model, and on the aerial photography model.

24. The apparatus of claim 23, wherein the processing module is configured to: determine the poses of the plurality of third images in the aerial photography model according to the aerial photography model; and determine the air-ground model according to the poses of the plurality of third images in the aerial photography model and the poses of the plurality of third images in the ground model.

25. The apparatus of claim 24, wherein the processing module is configured to: determine a plurality of coordinate transformation relations according to the poses of the plurality of third images in the aerial photography model and the poses of the plurality of third images in the ground model; and determine semantic reprojection errors of the third images in the aerial photography model according to the plurality of coordinate transformation relations, and select an optimal coordinate transformation relation from the plurality of coordinate transformation relations as the coordinate transformation relation of the air-ground model;

wherein the optimal coordinate transformation relation is the coordinate transformation relation that minimizes the semantic reprojection error.

26. The apparatus of any one of claims 16 to 25, wherein the processing module is further configured to: determine virtual object description information according to the first pose or the second pose;

and send the virtual object description information to a terminal device through the transceiver module, wherein the virtual object description information is used for displaying a corresponding virtual object on the terminal device.

27. A visual positioning device, comprising:

a processing module, used for acquiring a first image and displaying the first image on a user interface, wherein the first image comprises a shot skyline;

the processing module is further used for sending the first image to a server through a transceiver module;

the transceiver module is further configured to receive first virtual object description information sent by the server, where the first virtual object description information is determined according to a first pose, and the first pose is determined according to the skyline and building line-plane semantic information of the first image and an aerial photography model;

the processing module is further configured to display a virtual object corresponding to the first virtual object description information in an overlapping manner on the user interface.

28. The apparatus of claim 27, wherein the processing module is further configured to: before the first image is collected, display first prompt information on the user interface, where the first prompt information is used for prompting a user to shoot the skyline.

29. The apparatus of claim 27 or 28, wherein the transceiver module is further configured to: receive an indication message sent by the server, where the indication message is used for indicating that a ground model corresponding to the first pose exists in an air-ground model, the ground model is used for determining a second pose, the air-ground model comprises an aerial photography model and the ground model mapped into the aerial photography model, and a coordinate system of the ground model is the same as that of the aerial photography model;

the processing module is further configured to display second prompt information on the user interface according to the indication message, where the second prompt information is used to prompt a user of an operation mode that can be selected.

30. The apparatus of claim 29, wherein the processing module is further configured to: receive a repositioning instruction input by a user through the user interface or a hardware button, and send a positioning optimization request message to the server through the transceiver module in response to the repositioning instruction, where the positioning optimization request message is used for requesting to calculate the second pose;

the transceiver module is further configured to receive second virtual object description information sent by the server, where the second virtual object description information is determined according to a second pose, the second pose is determined according to the ground model corresponding to the first pose, and the positioning accuracy of the second pose is higher than that of the first pose.

Technical Field

The present disclosure relates to intelligent sensing technologies, and in particular, to a visual positioning method and apparatus.

Background

Visual localization uses images or video captured by a camera to pinpoint the position and pose of that camera in the real world. It has been a hot topic in computer vision in recent years, and is of great significance in fields such as augmented reality, interactive virtual reality, robot visual navigation, public scene monitoring, and intelligent transportation.

Visual positioning techniques include methods based on a drone (unmanned aerial vehicle) base map or a satellite map. The drone/satellite base map (Aerial Model) is obtained mainly by oblique photography of a scene with a drone followed by Structure from Motion (SfM) three-dimensional reconstruction of the collected data, or by white-model reconstruction of the scene from satellite imagery. A visual positioning method based on such a base map locates images or videos shot by a camera against the Aerial Model and obtains the 6 degrees of freedom (DoF) pose of the camera within it. This technique can be applied to visual positioning in large-scale scenes.

Visual positioning based on a drone base map or a satellite map, however, suffers from a low positioning success rate and low positioning accuracy.

Disclosure of Invention

The application provides a visual positioning method and device, which are used for avoiding resource waste and improving positioning success rate and positioning precision.

In a first aspect, an embodiment of the present application provides a visual positioning method, which may include: acquiring a collected first image; determining a first pose according to the first image and an aerial photography model; judging whether a ground model corresponding to the first pose exists in an air-ground model; and, when the ground model corresponding to the first pose exists, determining a second pose according to the ground model. The air-ground model comprises the aerial photography model and a ground model mapped into the aerial photography model, the coordinate system of the ground model is the same as that of the aerial photography model, and the positioning accuracy of the second pose is higher than that of the first pose.

In this implementation, the server determines the first pose according to the first image and the aerial photography model, and judges whether a ground model corresponding to the first pose exists in the air-ground model. When such a ground model exists, the second pose is determined according to the ground model. Determining the first pose from the aerial photography model provides fast, efficient coarse positioning over a wide area, meeting county-, district- and city-level visual positioning needs; refining it against the ground model then yields fine positioning. Together these form a layered visual positioning scheme that improves both the accuracy and the success rate of visual positioning.
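
To make the layered flow concrete, here is a minimal Python sketch of the coarse-to-fine control flow just described. The three callables (`geo_localize`, `find_ground_submodel`, `ground_localize`) are hypothetical stand-ins for the aerial-model localizer, the air-ground-model lookup and the ground-model localizer; none of these names appear in the application itself.

```python
from typing import Any, Callable

def visual_localize(image: Any,
                    geo_localize: Callable,          # image -> coarse first pose
                    find_ground_submodel: Callable,  # pose -> ground model or None
                    ground_localize: Callable):      # (image, model, init) -> fine pose
    """Layered visual positioning: coarse pose from the aerial photography
    model first, then a finer pose from a ground model when one covers
    the coarse location."""
    first_pose = geo_localize(image)                 # city-scale coarse positioning
    ground_model = find_ground_submodel(first_pose)  # is this area finely mapped?
    if ground_model is None:
        return first_pose                            # coarse result is the answer
    return ground_localize(image, ground_model, first_pose)
```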

In one possible design, determining the first pose from the first image and the aerial photography model may include: determining an initial pose set according to position information and magnetometer angle deflection information of the terminal device corresponding to the first image; acquiring the skyline and building line-plane semantic information of the first image according to the first image; determining N initial poses in the initial pose set according to the skyline of the first image and the aerial photography model; and determining the first pose according to the building line-plane semantic information, the N initial poses and the aerial photography model, where N is an integer greater than 1.

This initial pose may also be referred to as a candidate pose.

In one possible design, the method may further include: acquiring at least one collected second image, where the fields of view of the first image and the at least one second image intersect; for example, the first image and the at least one second image have different viewing angles. N optimized initial poses are determined according to the N initial poses, the skyline of the first image, the skyline of the at least one second image, and the relative poses between the first image and the at least one second image. Determining the first pose according to the building line-plane semantic information, the N initial poses and the aerial photography model then includes: determining the first pose according to the building line-plane semantic information, the N optimized initial poses and the aerial photography model.

The relative pose between the first image and the at least one second image may be given by a Simultaneous Localization and Mapping (SLAM) algorithm.

In one possible design, the method may further include: determining the N optimized initial poses according to the N initial poses and the relative poses between the first image and the at least one second image.

In one possible design, the initial pose set includes a plurality of sets of initial poses, each set of initial poses includes initial position information and initial magnetometer angle deflection information, the initial position information belongs to a first threshold range, the first threshold range is determined according to the position information of the terminal device, the initial magnetometer angle deflection information belongs to a second threshold range, and the second threshold range is determined according to the magnetometer angle deflection information of the terminal device.

In one possible design, the central value of the first threshold range is position information of the terminal device, and the central value of the second threshold range is magnetometer angle deflection information of the terminal device.
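
As one illustration of how such an initial pose set might be enumerated, the sketch below builds a grid of (x, y, yaw) candidates whose center values are the device's reported position and magnetometer yaw, as in the threshold-range design above. All radii and step sizes are invented for illustration; the application does not specify them, and the remaining degrees of freedom (height, pitch, roll) can come from other sensors.

```python
import itertools
import numpy as np

def build_initial_pose_set(gps_xy, mag_yaw_deg,
                           pos_radius=30.0, pos_step=5.0,     # meters (illustrative)
                           yaw_radius=45.0, yaw_step=5.0):    # degrees (illustrative)
    """Enumerate candidate (x, y, yaw) triples on a grid centered on the
    device's GPS position (first threshold range) and magnetometer yaw
    (second threshold range)."""
    xs = np.arange(gps_xy[0] - pos_radius, gps_xy[0] + pos_radius + 1e-9, pos_step)
    ys = np.arange(gps_xy[1] - pos_radius, gps_xy[1] + pos_radius + 1e-9, pos_step)
    yaws = np.arange(mag_yaw_deg - yaw_radius, mag_yaw_deg + yaw_radius + 1e-9, yaw_step)
    return [(x, y, yaw) for x, y, yaw in itertools.product(xs, ys, yaws)]
```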

In one possible design, determining N initial poses in the initial pose set from the skyline of the first image and the aerial photography model includes: rendering a skyline from each group of initial poses and the aerial photography model, to acquire the skyline corresponding to each group of initial poses; calculating the matching degree between the skyline corresponding to each group of initial poses and the skyline of the first image, to determine the matching degree of each group of initial poses; and determining N initial poses in the initial pose set according to the matching degree of each group of initial poses, where the N initial poses are the first N poses in the initial pose set when sorted by matching degree in descending order.
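
A minimal sketch of this skyline matching and top-N selection, assuming each skyline is represented as a 1-D array of per-column heights and that the per-pose skylines have already been rendered from the aerial photography model. The matching-degree formula is one simple choice, not the one mandated by the text.

```python
import numpy as np

def skyline_match_degree(rendered, observed):
    """Matching degree between two skylines, each a 1-D array of per-column
    skyline heights (pixels). Score lies in (0, 1] and decreases with the
    mean absolute height difference -- an illustrative metric."""
    rendered = np.asarray(rendered, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 1.0 / (1.0 + np.abs(rendered - observed).mean())

def top_n_initial_poses(pose_set, rendered_skylines, observed_skyline, n):
    """Keep the N candidates whose rendered skyline best matches the skyline
    extracted from the first image (descending match degree)."""
    scored = [(skyline_match_degree(sk, observed_skyline), pose)
              for pose, sk in zip(pose_set, rendered_skylines)]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [pose for _, pose in scored[:n]]
```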

In one possible design, the method may further include: constructing the air-ground model based on a plurality of third images used for constructing the ground model, and on the aerial photography model.

The third image may include a skyline.

In one possible design, constructing the air-ground model based on the plurality of third images used for constructing the ground model and on the aerial photography model may include: determining the poses of the plurality of third images in the aerial photography model according to the aerial photography model; and determining the air-ground model according to the poses of the plurality of third images in the aerial photography model and the poses of the plurality of third images in the ground model.

In one possible design, determining the air-ground model from the poses of the plurality of third images in the aerial photography model and the poses of the plurality of third images in the ground model includes: determining a plurality of coordinate transformation relations according to the two sets of poses; determining the semantic reprojection errors of the third images in the aerial photography model under each coordinate transformation relation; and selecting the optimal coordinate transformation relation, i.e. the one that minimizes the semantic reprojection error, as the coordinate transformation relation of the air-ground model.
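
The selection of the optimal coordinate transformation can be illustrated as follows, with the semantic reprojection error simplified to a sparse point reprojection error for brevity. Each candidate transformation is assumed to be a similarity (scale, rotation, translation), and `project` is a hypothetical camera projection callable; neither detail is fixed by the text.

```python
import numpy as np

def reprojection_error(transform, ground_points, observed_px, project):
    """Mean pixel error after mapping ground-model 3-D points into the aerial
    model with `transform` and projecting them with `project`."""
    s, R, t = transform  # similarity: scale (float), rotation (3x3), translation (3,)
    mapped = s * (ground_points @ R.T) + t
    return np.linalg.norm(project(mapped) - observed_px, axis=1).mean()

def best_transform(candidates, ground_points, observed_px, project):
    """Pick the candidate coordinate transformation with the smallest
    (simplified) semantic reprojection error."""
    return min(candidates,
               key=lambda T: reprojection_error(T, ground_points, observed_px, project))
```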

In one possible design, the method may further include: acquiring a plurality of third images and gravity information corresponding to each third image; and constructing the ground model according to the third images and the corresponding gravity information. The gravity information is used to obtain the roll angle and pitch angle of the camera coordinate system, and may be provided by SLAM.
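
As an aside on why gravity information yields roll and pitch: given the gravity direction measured in the camera/body frame (e.g., from SLAM or the IMU), the two angles follow from standard trigonometry. The sketch below assumes an x-forward, y-right, z-down body-frame convention, which is one common choice; real devices differ, and the application does not prescribe one.

```python
import math

def roll_pitch_from_gravity(gx, gy, gz):
    """Recover roll and pitch from a gravity vector (gx, gy, gz) measured in
    the body frame (x forward, y right, z down; level => g ~ (0, 0, +g))."""
    roll = math.atan2(gy, gz)
    pitch = math.atan2(-gx, math.hypot(gy, gz))
    return roll, pitch
```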

Constructing the air-ground model based on the plurality of third images and the aerial photography model may then include: constructing the air-ground model according to the ground model and the aerial photography model.

In one possible design, the method may further include: determining virtual object description information according to the first pose or the second pose; and sending the virtual object description information to the terminal device, where the virtual object description information is used for displaying the corresponding virtual object on the terminal device.

In a second aspect, embodiments of the present application provide a visual positioning method, which may include: acquiring a first image and displaying it on a user interface, the first image including a captured skyline; sending the first image to a server; receiving first virtual object description information sent by the server, where the first virtual object description information is determined according to a first pose and the first pose is determined according to the skyline and building line-plane semantic information of the first image and an aerial photography model; and displaying the virtual object corresponding to the first virtual object description information, overlaid on the user interface.

In this implementation, the terminal device sends a first image to the server, receives first virtual object description information from the server, and displays the corresponding virtual object on the user interface. The first virtual object description information is determined according to a first pose, which in turn is determined according to the skyline and building line-plane semantic information of the first image and the aerial photography model. Because the positioning accuracy of this first pose is higher than that of prior-art visual positioning methods, the virtual object displayed based on it is more precise and accurate.

In one possible design, before acquiring the first image, the method may further include: displaying first prompt information on the user interface, where the first prompt information is used for prompting the user to shoot the skyline.

In one possible design, the method further includes: receiving an indication message sent by the server, where the indication message indicates that a ground model corresponding to the first pose exists in the air-ground model, the ground model being used for determining a second pose; the air-ground model comprises the aerial photography model and a ground model mapped into the aerial photography model, and the coordinate system of the ground model is the same as that of the aerial photography model. Second prompt information is then displayed on the user interface according to the indication message, prompting the user about the selectable operation modes.

In this implementation, when a ground model corresponding to the first pose exists, the terminal device can display prompt information about the ground model, so that the user can choose whether to compute the second pose, i.e. whether to perform finer visual positioning, meeting the needs of different users.

In one possible design, the method further includes: receiving a repositioning instruction input by the user through the user interface or a hardware button, and sending a positioning optimization request message to the server in response to the repositioning instruction, where the positioning optimization request message is used for requesting to calculate the second pose; and receiving second virtual object description information sent by the server, where the second virtual object description information is determined according to the second pose, the second pose is determined according to the ground model corresponding to the first pose, and the positioning accuracy of the second pose is higher than that of the first pose.

In a third aspect, an embodiment of the present application provides an air-ground model modeling method, which may include: obtaining a plurality of third images used for constructing the ground model; determining first poses of the plurality of third images in the aerial photography model; and aligning the aerial photography model and the ground model according to the first poses of the third images in the aerial photography model and second poses of the third images in the ground model, to acquire the air-ground model. The air-ground model comprises the aerial photography model and the ground model mapped into the aerial photography model, and the coordinate system of the ground model is the same as that of the aerial photography model.

The third image may include a skyline.

In one possible design, aligning the aerial photography model and the ground model according to the poses of the plurality of third images in each model, and acquiring the air-ground model, includes: determining a plurality of coordinate transformation relations from the two sets of poses; determining the semantic reprojection errors of the third images in the aerial photography model under each coordinate transformation relation, and selecting the optimal coordinate transformation relation, i.e. the one that minimizes the semantic reprojection error, as the coordinate transformation relation of the air-ground model, which is used to align the aerial photography model and the ground model; and mapping the ground model into the aerial photography model according to this coordinate transformation relation, to obtain the air-ground model.
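
Once the optimal transformation is selected, mapping the ground model into the aerial photography model's coordinate system is a single similarity transform applied to every point, as in this sketch (assuming the transform is stored as scale `s`, rotation matrix `R`, translation `t`; the storage format is an assumption):

```python
import numpy as np

def map_ground_to_aerial(points, s, R, t):
    """Map ground-model 3-D points (N x 3) into the aerial photography
    model's coordinate system: p_aerial = s * R @ p_ground + t."""
    return s * (np.asarray(points) @ np.asarray(R).T) + np.asarray(t)
```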

In one possible design, the method may further include: acquiring a plurality of third images and gravity information corresponding to each third image; and constructing the ground model according to the third images and the gravity information corresponding to each third image.

In a fourth aspect, an embodiment of the present application provides a visual positioning apparatus, which may be used as a server or an internal chip of the server, and is configured to perform the visual positioning method in the first aspect or any possible implementation manner of the first aspect. In particular, the visual positioning apparatus may comprise means or unit, e.g. a transceiver means or unit, a processing means or unit, for performing the visual positioning method of the first aspect or any possible implementation manner of the first aspect.

In a fifth aspect, the present application provides a visual positioning apparatus, which may be a server or an internal chip of a server, and which includes a memory and a processor, where the memory is configured to store instructions and the processor is configured to execute the instructions stored in the memory; execution of those instructions causes the processor to perform the visual positioning method in the first aspect or any possible implementation manner of the first aspect.

In a sixth aspect, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored, where the computer program is executed by a processor to implement the method in the first aspect or any possible implementation manner of the first aspect.

In a seventh aspect, an embodiment of the present application provides a visual positioning apparatus, which may be used as a terminal device, and is configured to execute the visual positioning method in the second aspect or any possible implementation manner of the second aspect. In particular, the visual positioning apparatus may comprise a module or unit, e.g. a transceiver module or unit, a processing module or unit, for performing the visual positioning method of the second aspect or any possible implementation manner of the second aspect.

In an eighth aspect, the present application provides a visual positioning apparatus, which may be used as a terminal device, and includes a memory and a processor, where the memory is used to store instructions, and the processor is used to execute the instructions stored in the memory, and the execution of the instructions stored in the memory causes the processor to execute the visual positioning method in the second aspect or any possible implementation manner of the second aspect.

In a ninth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method of the second aspect or any possible implementation manner of the second aspect.

In a tenth aspect, embodiments of the present application provide a visual positioning apparatus, which may be a server or an internal chip of the server, and is configured to perform the method for modeling an air-ground model in any possible implementation manner of the third aspect or the third aspect. In particular, the visual localization apparatus may comprise means or unit, e.g. an acquisition means or unit, a processing means or unit, for performing the method of modeling an air-ground model in the third aspect or any possible implementation manner of the third aspect.

In an eleventh aspect, the present application provides a visual positioning apparatus, which may be a server or an internal chip of a server, and which includes a memory and a processor, where the memory is configured to store instructions and the processor is configured to execute the instructions stored in the memory; execution of those instructions causes the processor to perform the method for modeling an air-ground model in the third aspect or any possible implementation manner of the third aspect.

In a twelfth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method in the third aspect or any possible implementation manner of the third aspect.

In a thirteenth aspect, the present application provides a computer program product, which includes a computer program for executing the method of the first aspect or any possible implementation manner of the first aspect, or for executing the method of the second aspect or any possible implementation manner of the second aspect, or for executing the method of the third aspect or any possible implementation manner of the third aspect, when the computer program is executed by a computer or a processor.

In a fourteenth aspect, an embodiment of the present application provides a visual positioning method, which may include: acquiring a collected first image and a collected second image; determining an initial pose set according to position information and magnetometer angle deflection information of the terminal device corresponding to the first image; acquiring the skyline and building line-plane semantic information of the first image according to the first image; acquiring the skyline and building line-plane semantic information of the second image according to the second image; acquiring the relative poses between the first image and the second image based on SLAM; determining N optimized candidate poses in the initial pose set according to the skyline of the first image, the skyline of the second image, the relative poses, and the aerial photography model; and determining the first pose of the first image according to the building line-plane semantic information, the N optimized candidate poses and the aerial photography model, where N is an integer greater than 1.

In one possible design, the first image and the second image have different viewing angles. The skyline may comprise a vegetation skyline. The building line-plane semantic information may include top edge information of the building.

In one possible design, the set of initial poses includes a plurality of sets of initial poses, each set of initial poses includes initial position information and initial magnetometer angle deflection information, the initial position information falls within a first threshold range, the first threshold range is determined according to the position information of the terminal device, the initial magnetometer angle deflection information falls within a second threshold range, the second threshold range is determined according to the magnetometer angle deflection information of the terminal device.

In one possible design, the center value of the first threshold range is position information of the terminal device, and the center value of the second threshold range is magnetometer angle deflection information of the terminal device.

In one possible design, determining N optimized candidate poses in the initial pose set from the skyline of the first image, the skyline of the second image, the relative poses, and the aerial photography model comprises: rendering a skyline from each group of initial poses and the aerial photography model, to acquire the skyline corresponding to each group of initial poses; calculating the matching degree between the skyline corresponding to each group of initial poses and the skyline of the first image; determining a weight for each group of initial poses according to that matching degree, the skyline of the second image, and the relative poses; and determining N optimized candidate poses in the initial pose set according to the weight of each group of initial poses, where the N optimized candidate poses are the first N poses in the initial pose set when sorted by weight in descending order.
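
One plausible reading of this weighting, sketched below: each candidate is scored by its skyline match in the first image and, after composing the SLAM relative pose, by the skyline match in the second image. The fusion by product and all helper callables (`render_skyline`, `match`, `compose`) are assumptions for illustration, not details fixed by the text.

```python
def candidate_weight(candidate_pose, skyline_first, skyline_second,
                     relative_pose, render_skyline, match, compose):
    """Weight one candidate pose by skyline agreement in both views.

    render_skyline: pose -> skyline rendered from the aerial model
    match:          (rendered, observed) -> matching degree in (0, 1]
    compose:        (pose, relative_pose) -> the second image's pose
    All three are hypothetical helpers injected by the caller."""
    w_first = match(render_skyline(candidate_pose), skyline_first)
    pose_second = compose(candidate_pose, relative_pose)
    w_second = match(render_skyline(pose_second), skyline_second)
    return w_first * w_second  # product is one plausible fusion rule

def top_n_by_weight(candidates, weights, n):
    """Keep the N candidates with the largest weights (descending order)."""
    ranked = sorted(zip(weights, candidates), key=lambda t: t[0], reverse=True)
    return [c for _, c in ranked[:n]]
```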

In one possible design, determining the first pose of the first image from the building line-plane semantic information, the N optimized candidate poses, and the aerial photography model may include: calculating, for each of the N optimized candidate poses, the semantic reprojection error from the building line-plane semantic information of the first image and the aerial photography model, and selecting the candidate pose with the smallest semantic reprojection error as the first pose of the first image.

In one possible design, the method may further include: judging whether a ground model corresponding to the first pose exists in the air-ground model; and, when the ground model corresponding to the first pose exists, determining a second pose according to the ground model. The air-ground model comprises the aerial photography model and a ground model mapped into the aerial photography model, the coordinate system of the ground model is the same as that of the aerial photography model, and the positioning accuracy of the second pose is higher than that of the first pose.

In one possible design, the method may further include: when the ground model corresponding to the first pose exists, determining first virtual object description information according to the first pose; and sending the first virtual object description information to the terminal device, where it is used for displaying the corresponding virtual object on the terminal device.

In one possible design, the method may further include: when the ground model corresponding to the first pose exists, determining second virtual object description information according to the second pose; and sending the second virtual object description information to the terminal device, where it is used for displaying the corresponding virtual object on the terminal device.

In a fifteenth aspect, an embodiment of the present application provides an air-ground model modeling method, which may include: acquiring a plurality of third images and gravity information corresponding to each third image; constructing a ground model according to the third images; and constructing an air-ground model according to the aerial photography model and the ground model. At least one of the plurality of third images comprises a skyline; the gravity information may be obtained by SLAM and is used for obtaining the roll angle and pitch angle of the camera coordinate system.

The model may also be referred to as a map, e.g., an air-ground map, an aerial map, etc.

In one possible design, constructing the air-ground model from the aerial photography model and the ground model includes: determining, according to the aerial photography model, the poses in the aerial photography model of those third images that contain a skyline; and determining the air-ground model according to the poses of those skyline-containing third images in the aerial photography model and their poses in the ground model.

In one possible design, determining the air-ground model according to the poses of the skyline-containing third images in the aerial photography model and their poses in the ground model comprises: determining a plurality of coordinate transformation relations according to the poses of the skyline-containing third images in the aerial photography model and their poses in the ground model;

determining the building line-plane semantic reprojection errors of the third images in the aerial photography model under each coordinate transformation relation, and selecting the optimal coordinate transformation relation, i.e. the one that minimizes the building line-plane semantic reprojection error; and

converting the coordinate system of the ground model into the coordinate system of the aerial photography model according to the optimal coordinate transformation relation, to acquire the air-ground model. The air-ground model comprises the aerial photography model and the ground model whose coordinate system has been mapped into the aerial photography model; the coordinate system of the ground model is then the same as that of the aerial photography model.

The visual positioning method and device provided by the embodiments of the application perform visual positioning based on the skyline and building line-plane semantic information of the first image and/or the ground model in the air-ground model, and can improve the success rate and accuracy of visual positioning.

Drawings

FIG. 1 is a schematic view of an aerial photography model provided in an embodiment of the present application;

FIG. 2 is a schematic diagram of a ground model, an aerial photography model and an air-ground model provided by an embodiment of the present application;

fig. 3 is a schematic diagram of an application scenario provided in an embodiment of the present application;

fig. 4A is a schematic diagram of a user interface displayed on a screen of a terminal device according to an embodiment of the present application;

fig. 4B is a schematic diagram of a user interface displayed on a screen of a terminal device according to an embodiment of the present application;

fig. 4C is a schematic diagram of a user interface displayed on a screen of a terminal device according to an embodiment of the present application;

FIG. 5 is a flowchart of a visual positioning method according to an embodiment of the present application;

FIG. 6 is a flowchart of an improved Geo-localization method based on an aerial photography model according to an embodiment of the present disclosure;

Fig. 7A is a semantic segmentation effect diagram provided in the embodiment of the present application;

FIG. 7B is a diagram illustrating another semantic segmentation effect provided by an embodiment of the present application;

FIG. 8 is a flow chart of another visual positioning method provided by embodiments of the present application;

FIG. 9 is a schematic diagram of a user interface provided by an embodiment of the present application;

fig. 10 is a flowchart of a method for modeling an air-ground model according to an embodiment of the present application;

FIG. 11 is a schematic view of a user interface provided by an embodiment of the present application;

FIG. 12 is a schematic diagram of modeling an air-ground model provided by an embodiment of the present application;

fig. 13 is a schematic structural diagram of a visual positioning apparatus according to an embodiment of the present application;

FIG. 14 is a schematic view of another embodiment of a visual positioning apparatus;

FIG. 15 is a schematic view of another embodiment of a visual positioning apparatus;

fig. 16 is a schematic structural diagram of another visual positioning apparatus according to an embodiment of the present application.

Detailed Description

The terms "first", "second", and the like, referred to in the embodiments of the present application, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance, nor order. Furthermore, the terms "comprises" and "comprising," as well as any variations thereof, are intended to cover a non-exclusive inclusion, such as a list of steps or elements. A method, system, article, or apparatus is not necessarily limited to those steps or elements explicitly listed, but may include other steps or elements not explicitly listed or inherent to such process, system, article, or apparatus.

It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.

First, several terms referred to in the embodiments of the present application will be explained:

Visual Localization: locating the pose of the camera coordinate system of a terminal device within a real-world coordinate system, so as to seamlessly merge the real world with the virtual world.

Query image: the image collected by the terminal device, i.e. the current image frame used for visual positioning.

Aerial Model: also known as the drone/satellite base map. This aerial photography model can be obtained mainly in two ways: 1) shooting the scene obliquely with a drone and performing Structure from Motion (SfM) three-dimensional reconstruction on the collected data, as shown in fig. 1(a) and fig. 2(b); or 2) reconstructing a white model of the scene from satellite imagery, as shown in fig. 1(b).

Ground Model: also called the map based on terminal-device mapping. The terminal device is used to collect data of a scene, and SfM three-dimensional reconstruction is performed on the collected data to obtain the ground model, which may be, for example, as shown in fig. 2(a).

Air-Ground Model (Aerial-Ground Model): also called an air-ground map. The Aerial Model and the Ground Model are aligned by a similarity transformation, unifying the two models into one global coordinate system, as shown in fig. 2(c) and fig. 2(d), where fig. 2(c) is the point cloud of the air-ground model and fig. 2(d) shows reconstructed meshes based on that point cloud.

Aerial Model based visual localization (Geo-localization): locating the 6-DoF pose of the camera coordinate system of the terminal device in the Aerial Model.

Ground Model based visual localization (Ground-localization): locating the 6-DoF pose of the camera coordinate system of the terminal device in the Ground Model.

6 degrees of freedom (DoF) Pose: the (x, y, z) coordinates together with angular deflections around three axes, namely yaw, pitch, and roll.
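
For reference, a 6-DoF pose as defined here can be held in a small container; the rotation-matrix helper below uses the common Z-Y-X (yaw-pitch-roll) convention, which is one choice among several and is not specified by the text.

```python
from dataclasses import dataclass
import math

@dataclass
class Pose6DoF:
    """A 6-DoF pose: translation (x, y, z) plus yaw/pitch/roll in radians."""
    x: float
    y: float
    z: float
    yaw: float    # rotation about the vertical axis
    pitch: float
    roll: float

    def rotation_matrix(self):
        """Z-Y-X (yaw-pitch-roll) rotation matrix, R = Rz(yaw) Ry(pitch) Rx(roll)."""
        cy, sy = math.cos(self.yaw), math.sin(self.yaw)
        cp, sp = math.cos(self.pitch), math.sin(self.pitch)
        cr, sr = math.cos(self.roll), math.sin(self.roll)
        return [
            [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
            [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
            [-sp,     cp * sr,                cp * cr],
        ]
```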

The embodiments of the present application relate to terminal devices. The terminal device may be a mobile phone, a tablet personal computer, a media player, a smart television, a laptop computer, a Personal Digital Assistant (PDA), a personal computer, a smart watch, a wearable device (e.g., Augmented Reality (AR) glasses), a vehicle-mounted device, or an Internet of Things (IoT) device; the embodiments of the present application are not limited thereto.

Fig. 3 is a schematic diagram of an application scenario provided in an embodiment of the present application. As shown in fig. 3, the application scenario may include a terminal device 11 and a server 12, which may communicate with each other. The server 12 may provide a visual positioning service to the terminal device and, based on that service, push virtual object description information to the terminal device 11 so that the terminal device can present the corresponding virtual object, which may be a virtual road sign, a virtual character, and the like. The embodiment of the application provides a visual positioning method that improves the success rate and accuracy of visual positioning, so that the corresponding virtual object description information can be pushed to the terminal device accurately.

The visual positioning method can be applied to fields that need to locate the position and posture of the camera of a terminal device, such as AR navigation, AR human-computer interaction, assisted driving, and autonomous driving. For example, in large-scale scene visual navigation, the user is guided to a destination through interactive modes such as augmented reality. The user can see information such as the recommended walking direction and the distance to the destination on the screen of the terminal device in real time; in fig. 4A, the virtual object is the walking direction toward conference room J2-1-1B16 displayed on the screen, i.e. the walking direction is shown to the user through augmented reality. As another example, as shown in fig. 4B and 4C, AR game interaction can fix AR content at a specific geographic location; through the visual positioning method of the embodiments of the application, the terminal device can display the corresponding virtual object on its screen (e.g., the virtual character in fig. 4B or the virtual animation in fig. 4C), and the user can interact with the virtual object by tapping/sliding on the screen, guiding it to interact with the real world.

It should be noted that the terminal device 11 is generally provided with a camera, through which it can shoot a scene. The server 12 is illustrated as a single server, but the present application is not limited thereto; it may, for example, be a server cluster including a plurality of servers.

Fig. 5 is a flowchart of a visual positioning method provided in an embodiment of the present application. The method of this embodiment involves a terminal device and a server and, as shown in fig. 5, may include:

Step 101, a terminal device collects a first image.

The terminal device acquires a first image through the camera, where the first image may be a query image as described above.

Taking the terminal device as an example of a smartphone, the smartphone can start the shooting function according to the triggering of an application program and acquire the first image. For example, the first image may be acquired periodically, for example, every 2 seconds or every 30 seconds, or the first image may be acquired when a preset acquisition condition is met, where the preset acquisition condition may be that the GPS data of the smartphone falls within a preset range. Each first image acquired by the terminal device can go through the following steps to implement visual positioning.

Step 102, the terminal device sends the first image to the server.

The server receives a first image sent by the terminal equipment.

And 103, determining a first position by the server according to the first image and the aerial photography model.

The mode of determining the first pose in the embodiment of the application may be referred to as improved aerial-model-based visual positioning (Geo-localization), which can effectively combine the skyline and building line-plane semantic information of the first image to determine the first pose, thereby improving the positioning success rate and the positioning accuracy.

For example, the server may determine N initial poses according to the skyline of the first image, and determine the first pose according to the building line-plane semantic information of the first image, the N initial poses, and the aerial photography model. For example, traversing the N initial poses, calculating semantic reprojection errors of the N initial poses, and determining a first pose according to the semantic reprojection errors. The semantic reprojection errors of the N initial poses may be calculated by rendering the edge and the surface of the building according to the N initial poses and the aerial photography model, respectively, to obtain a rendered semantic segmentation map, and calculating a matching error between the rendered semantic segmentation map and building line-surface semantic information (e.g., the semantic segmentation map) of the first image, where the matching error is the semantic reprojection error. N is an integer greater than 1.
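For example, the matching error between a rendered semantic segmentation map and the building line-plane segmentation of the first image may be computed as a pixel mismatch ratio. The following NumPy sketch illustrates one possible form; the label encoding and the mismatch-ratio metric are assumptions for illustration, since the exact metric is not restricted here:

```python
import numpy as np

def semantic_reprojection_error(rendered_seg, query_seg, labels=(1, 2, 3)):
    # rendered_seg / query_seg: (H, W) integer label maps; `labels` lists the
    # building classes compared (e.g. horizontal edge, vertical edge, plane)
    mask = np.isin(rendered_seg, labels) | np.isin(query_seg, labels)
    if not mask.any():
        return 1.0  # nothing to compare: treat as the worst error
    return float(((rendered_seg != query_seg) & mask).sum() / mask.sum())

def pick_best_pose(candidate_poses, render_fn, query_seg):
    # traverse the N initial poses and keep the minimum-error one
    errors = [semantic_reprojection_error(render_fn(p), query_seg)
              for p in candidate_poses]
    return candidate_poses[int(np.argmin(errors))]
```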

According to the implementation mode, an initial pose set is determined according to the position information of the terminal device corresponding to the first image and the magnetometer angle deflection information. And acquiring the line-surface semantic information of the skyline and the building of the first image according to the first image. And determining N initial poses in the initial pose set according to the skyline of the first image and the aerial photography model. And determining the first pose according to the building line-surface semantic information, the N initial poses and the aerial photography model. Reference is made to the description of the embodiment shown in fig. 6 for a detailed implementation thereof.

In some embodiments, the server may further receive at least one second image acquired by the terminal device, and the server may optimize the N initial poses according to the at least one second image, determine the optimized N initial poses, and determine the first pose according to the building line-plane semantic information of the first image and the optimized N initial poses. Namely, the pose solution of the first image is assisted by combining the multi-frame images. The at least one second image intersects the field of view of the first image.

Optionally, there may also be no intersection between the at least one second image and the captured field of view of the first image. In other words, the at least one second image and the first image have different viewing angles.

And step 104, the server determines first virtual object description information according to the first attitude, and sends the first virtual object description information to the terminal equipment.

For example, the server may determine, from the first pose, first virtual object description information for displaying a corresponding virtual object on the terminal device, for example, a walking guide icon as shown in fig. 4A, which is displayed in an actual scene of the real world, i.e., on a street as shown in fig. 4A.

And 105, the terminal device displays the virtual object corresponding to the first virtual object description information on a user interface.

The terminal device displays the virtual object corresponding to the first virtual object description information on a user interface, wherein an actual scene of a real world is displayed in the user interface, and the virtual object can be displayed on the user interface in an augmented reality mode.

In this embodiment, a terminal device sends a first image to a server, the server determines a first pose according to skyline and building line-plane semantic information of the first image and an aerial photography model, the server determines first virtual object description information according to the first pose, the server sends the first virtual object description information to the terminal device, and the terminal device displays a virtual object corresponding to the first virtual object description information on a user interface, so as to determine the first pose based on the skyline and building line-plane semantic information of the first image, and thus success rate and accuracy of visual positioning can be improved.

Furthermore, the embodiment can effectively combine the respective advantages of aerial-map-based visual positioning and fine mapping based on images captured by mobile phones, and effectively relieve the contradiction between acquisition cost and positioning precision in large scenes through air-ground map construction and hierarchical visual positioning.

A specific implementation of the above step 103 is explained below by using the embodiment shown in fig. 6.

Fig. 6 is a flowchart of an improved Geo-localization (Geo-localization) method based on an aerial photography model according to an embodiment of the present disclosure, where an execution subject of the embodiment may be a server or an internal chip of the server, as shown in fig. 6, the method of the embodiment may include:

step 201, determining an initial pose set according to the position information of the terminal device corresponding to the first image and the magnetometer angle deflection information.

The position information of the terminal device corresponding to the first image may be Global Positioning System (GPS) information, and the magnetometer angle deflection information may be a yaw (yaw) angle. The position information and the magnetometer angle deflection information may be position information and magnetometer angle deflection information when the terminal device acquires the first image, and may be acquired through a wireless communication module and a magnetometer of the terminal device.

The set of initial poses may include a plurality of sets of initial poses, each set of initial poses may include initial position information and initial magnetometer angle deflection information, the initial position information falling within a first threshold range, the first threshold range being determined from position information of the terminal device, the initial magnetometer angle deflection information falling within a second threshold range, the second threshold range being determined from magnetometer angle deflection information of the terminal device.

For example, the terminal device may construct a position candidate set (T) and a yaw (yaw) angle candidate set (Y) respectively according to the position information and magnetometer angle deflection information of the terminal device corresponding to the first image, where the position candidate set (T) includes a plurality of initial position information, the yaw (yaw) angle candidate set (Y) includes a plurality of yaw (yaw) angles, and one initial position information in T and one yaw (yaw) angle in Y may form a set of initial poses, so that a plurality of sets of initial poses may be formed.

One way to implement the construction of the location candidate set (T) is to select location points as initial location information in the location candidate set (T) at intervals of a first preset interval within an area range, where the area range may be a range in which the location information (x, y) of the terminal device corresponding to the first image is used as a center of a circle and a radius is a first threshold. Namely, the central value of the first threshold range is the position information of the terminal device. For example, the first threshold may be 30 meters, 35 meters, etc. The first preset interval may be 1 meter.

One way of constructing the set of yaw (yaw) angle candidates (Y) is to select an angle as the yaw (yaw) angle in the set of yaw (yaw) angle candidates (Y) at intervals of a second preset interval within an angle range, which may be a range of plus or minus a second threshold value of the yaw (yaw) angle of the terminal device corresponding to the first image. That is, the central value of the second threshold range is magnetometer angle deflection information of the terminal device. For example, the second threshold may be 90 degrees, 85 degrees, etc. The second preset interval may be 0.1 degrees.

The above-mentioned implementation manner for constructing the position candidate set (T) and the yaw (yaw) angle candidate set (Y) is an illustration, and the embodiments of the present application are not limited thereto.
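As a reference, a minimal sketch of constructing the two candidate sets may be as follows, assuming the example values above (a 30-meter radius with 1-meter spacing for T, and plus or minus 90 degrees with a 0.1-degree step for Y); note that the full Cartesian product of T and Y can be large and may be pruned or searched coarsely in practice:

```python
import numpy as np

def build_candidate_sets(x, y, yaw, radius=30.0, step_t=1.0,
                         yaw_half_range=90.0, step_yaw=0.1):
    # position candidates T: grid points within `radius` of the GPS fix (x, y)
    offsets = np.arange(-radius, radius + step_t, step_t)
    T = [(x + dx, y + dy) for dx in offsets for dy in offsets
         if dx * dx + dy * dy <= radius * radius]
    # yaw candidates Y: magnetometer yaw plus/minus the second threshold
    Y = [yaw + d for d in np.arange(-yaw_half_range,
                                    yaw_half_range + step_yaw, step_yaw)]
    # each (position, yaw) pair forms one set of initial pose parameters
    return T, Y
```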

Step 202, acquiring the skyline and building line-plane semantic information of the first image according to the first image.

In this step, semantic segmentation of different classes may be performed on the first image, and the skyline of the first image may be extracted. The different categories may include vegetation, buildings, sky, and the like. Semantic segmentation of the horizontal and vertical line-planes of buildings can also be performed on the first image to obtain the building line-plane semantic information of the first image. The building horizontal and vertical line-planes include the edges (horizontal and vertical) and the planes of buildings.

For example, taking fig. 7A as an example, a first image (e.g., the leftmost image of fig. 7A) is input to a first semantic segmentation network, which is configured to distinguish buildings, sky, vegetation, ground, and the like, a semantic segmentation effect map (e.g., the middle image of fig. 7A) is output, and a skyline (e.g., the rightmost image of fig. 7A) of the first image is obtained by extracting a skyline based on the semantic segmentation effect map.

The first semantic segmentation network may be any neural network, such as a convolutional neural network or the like.

The first semantic segmentation network may be obtained after training using training data, i.e., the training data is used to train the first semantic segmentation network to distinguish buildings, sky, vegetation, ground, and the like. The semantic segmentation task is a dense pixel-level classification task. The training strategy adopted during training is the standard cross-entropy loss, which measures the difference between the predicted value and the label value, and the prediction effect of the network is improved by minimizing the loss:

L = -\frac{1}{N} \sum_{i \in N} \log \frac{e^{p_i}}{\sum_j e^{p_j}}

where N denotes all pixels, p_i denotes the predicted value that a pixel belongs to the same category as its label (ground truth), and p_j denotes the predicted value of each category for that pixel. The semantic segmentation network of the embodiment of the application calculates the loss in two parts: the first part is the cross entropy L_{final} between the final output and the label map, i.e., L in the above equation; the second part is the regularization loss L_{weight}. The embodiment of the application relieves overfitting by reducing the feature weights, i.e., penalizing unimportant features; with regularization, the feature weights also become part of the loss function of the model. Thus, the overall loss of the semantic segmentation network is as follows:

L_{total} = L_{final} + \gamma L_{weight}

where \gamma is a hyperparameter used to control the relative importance of the regularization loss. For example, the value of \gamma is set to 1.

The first semantic segmentation network is trained by iteratively adjusting the semantic segmentation network model to minimize the overall loss of the semantic segmentation network.
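For example, a minimal NumPy sketch of the two-part loss described above may read as follows; the array shapes and the use of an L2-style penalty for L_{weight} are assumptions for illustration:

```python
import numpy as np

def cross_entropy_loss(logits, labels):
    # logits: (N, C) per-pixel class scores; labels: (N,) ground-truth class ids
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_softmax[np.arange(labels.size), labels].mean()  # L_final

def total_loss(logits, labels, weights, gamma=1.0):
    # L_total = L_final + gamma * L_weight, with L_weight as an L2 penalty here
    l_weight = sum(float((w ** 2).sum()) for w in weights)
    return cross_entropy_loss(logits, labels) + gamma * l_weight
```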

Illustratively, taking fig. 7B as an example, a first image (e.g., the leftmost image of fig. 7B) is input to a second semantic segmentation network for distinguishing building horizontal lines, vertical lines, building planes, etc. in the image, and building line-plane semantic information, e.g., a building semantic segmentation map (e.g., the rightmost image of fig. 7B), is output.

The second semantic segmentation network may be any neural network, such as a convolutional neural network or the like.

The second semantic segmentation network may be obtained after training using training data, i.e., the training data is used to train the second semantic segmentation network to distinguish building horizontal lines, vertical lines, building surfaces, and the like. The specific training mode may adopt a training mode similar to the first semantic segmentation network, and details are not repeated here.

It should be noted that, after the skyline of the first image is acquired as described above, the skyline may be adjusted, for example, rotated toward the gravity direction by a certain angle. This angle may be given by a Simultaneous Localization and Mapping (SLAM) algorithm and is used to represent the relative relationship between the camera coordinate system of the terminal device and the gravity direction.

And step 203, determining N initial poses in the initial pose set according to the skyline and the aerial photography model of the first image.

For example, the elements in the position candidate set (T) and the yaw (yaw) angle candidate set (Y) may be traversed to form a plurality of sets of initial poses in the initial pose set, each set of initial poses may include initial position information and initial magnetometer angle deflection information. And aiming at each group of initial poses, rendering skylines according to an aerial photography model, acquiring skylines corresponding to each group of initial poses, respectively calculating the matching degree of the skylines corresponding to each group of initial poses and the skylines of the first image, determining the matching degree of each group of initial poses, and determining N initial poses in an initial pose set according to the matching degree of each group of initial poses, wherein the N initial poses can be N initial poses with higher matching degree in the initial pose set.

The specific implementation manner of calculating the matching degree between the skyline corresponding to each group of initial poses and the skyline of the first image may be as follows: and matching the skyline corresponding to the rendered initial pose with the skyline of the first image in a sliding window mode (L2 distance or other distance measurement), and determining the matching degree.

According to the embodiment of the application, N initial poses are adopted to participate in the visual positioning, so that the positioning success rate of the visual positioning can be improved.

Illustratively, for a set of initial poses ((x_1, y_1), yaw_1), the skyline is rendered based on the aerial photography model to acquire the skyline corresponding to the initial pose; the skyline corresponding to the initial pose is then matched with the skyline of the first image in a sliding-window mode, and the matching degree of the initial pose is determined.
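Treating a skyline as a one-dimensional curve of sky-boundary heights per image column, the sliding-window matching may be sketched as follows (the height-curve representation and the sign convention for the score are assumptions for illustration):

```python
import numpy as np

def skyline_matching_degree(rendered, query):
    # rendered / query: 1-D arrays of skyline heights per image column;
    # the rendered skyline is assumed at least as wide as the query skyline
    best = np.inf
    for offset in range(rendered.size - query.size + 1):
        window = rendered[offset:offset + query.size]
        best = min(best, float(np.sqrt(((window - query) ** 2).mean())))  # L2
    return -best  # higher value = better match
```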

And 204, determining the optimized N initial poses according to the N initial poses, the skyline of the first image, the skyline of the at least one second image and the relative pose between the first image and the at least one second image.

After step 203 is performed, step 205 may be performed directly to determine a first pose from the building lineside semantic information, the N initial poses, and the aerial photography model. Step 204 is an optional step, and after step 203 is executed, multi-frame joint optimization may be performed on the N initial poses determined in step 203 to obtain N optimized initial poses.

One way of achieving multi-frame joint optimization may be: the N initial poses of step 203 are optimized in conjunction with at least one second image. The explanation of the at least one second image may refer to the explanation of step 103 in the embodiment shown in fig. 5, and is not repeated here. The present embodiment is exemplified by two second images, with I_0 representing the first image and I_1 and I_2 representing the two second images. For example, the N initial poses of I_0 are optimized based on the skylines of the three images and the relative poses between the three images given by SLAM.

The relative pose relationships of the three images can be given by SLAM and are recorded as T_{0→1} and T_{0→2}, which represent the pose conversion relationships of I_1 and I_2 relative to I_0. I_1 and I_2 can thus be used to assist the pose solution of I_0.

Illustratively, in order to determine the first pose of the first image more accurately, the three images are taken with a certain intersection of the visual fields, so that the overall visual field formed by the three images is larger. The optimization method may specifically be:

For the n-th initial pose P_0^n of I_0 obtained in step 203, the poses of I_1 and I_2 in the aerial photography model are calculated based on T_{0→1} and T_{0→2}. Skylines are rendered according to the poses of I_1 and I_2 to obtain the rendered skyline of I_1 and the rendered skyline of I_2. The matching degree between the rendered skyline of I_1 and the skyline of I_1 is calculated, and likewise the matching degree between the rendered skyline of I_2 and the skyline of I_2. The matching degrees of the skylines of I_0, I_1 and I_2 are added together and recorded as S_0^n, which is used to measure the accuracy of the estimate P_0^n. For example, if the S_0^3 corresponding to n = 3 is the highest, the corresponding initial pose P_0^3 is superior to the other initial poses.

The same processing as described above is performed for I_1 and I_2. For example, the N initial poses P_1^n of I_1 can be obtained by applying the manner of step 201 to step 203 to I_1; T_{1→0} and T_{1→2} represent the pose conversion relationships of I_0 and I_2 relative to I_1 and can be given by SLAM. Based on T_{1→0} and T_{1→2}, the poses of I_0 and I_2 in the aerial photography model are calculated; skylines are rendered according to the poses of I_0 and I_2 to obtain the rendered skylines of I_0 and I_2; the matching degrees between the rendered skylines of I_0 and I_2 and the skylines of I_0 and I_2 are calculated; finally, the matching degrees of the skylines of I_0, I_1 and I_2 are added together and recorded as S_1^n, which is used to measure the accuracy of the estimate P_1^n. For example, if the S_1^3 corresponding to n = 3 is the highest, the corresponding initial pose P_1^3 is superior to the other initial poses.

According to the numerical ordering of the summed matching degrees S, N poses are selected from the candidates of I_0, I_1 and I_2, and the optimized N initial poses P_{opt-ini} of I_0 are obtained according to the coordinate system conversion relations given by SLAM. Here, P_{opt-ini} represents an optimized initial pose.
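A sketch of the joint score S for one candidate pose of I_0 may be as follows; the 4x4 homogeneous-matrix pose representation and the composition order are assumptions, and render_skyline / skyline_match stand for the rendering and matching routines described above:

```python
import numpy as np

def joint_skyline_score(pose0, rel_poses, skylines, render_skyline, skyline_match):
    # pose0: 4x4 pose of I_0 in the aerial model; rel_poses: [T_0->1, T_0->2, ...]
    # skylines: extracted skylines [skyline(I_0), skyline(I_1), skyline(I_2), ...]
    score = skyline_match(render_skyline(pose0), skylines[0])
    for T, sky in zip(rel_poses, skylines[1:]):
        pose_k = T @ pose0  # pose of the k-th frame, assuming left-composition
        score += skyline_match(render_skyline(pose_k), sky)
    return score  # S; the candidate with the highest S is kept
```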

And step 205, determining a first pose according to the line-surface semantic information of the building, the optimized N initial poses and the aerial photography model.

The optimized N initial poses obtained in the previous step are traversed. For each of them, the building line-plane semantic information is rendered based on the aerial photography model, and the semantic reprojection error of the pose is calculated from the rendered building line-plane semantic information and the building line-plane semantic information obtained in step 202. The pose with the minimum semantic reprojection error is selected to obtain the 3-DoF pose of the first image I_0. Combining the relation between the camera coordinate system and the gravity direction given by SLAM, the information of the other three degrees of freedom of the first image is obtained, yielding the 6-DoF pose of the first image I_0 (the 6-DoF pose with respect to the world coordinate system), i.e., the first pose described above.

Alternatively, a more precise pose can be solved by PnL (Perspective-n-Line), i.e., the 6-DoF pose of the first image I_0 is optimized using the following steps, and the optimized pose is taken as the first pose.

For example, the steps of optimizing the 6-DoF pose of the first image I_0 to determine the first pose may be:

a. according to the current pose (the 6-DoF pose of the first image I_0 above), 3D line segment information (e.g., the horizontal and vertical line segments of buildings) at the current view angle is extracted from the aerial photography model;

b. the 3D line segment information is input into the PnL algorithm, and an optimized pose is output;

c. according to the optimized pose, a semantic segmentation map is rendered on the corresponding aerial photography model through reprojection, and the matching error between this map and the semantic segmentation map corresponding to the image is calculated; step b and step c are repeated until the matching error converges, so as to obtain a finer 6-DoF pose;

d. a plurality of poses are randomly sampled near the solved pose, steps a to c are repeated, and the pose is updated if the newly calculated pose is better (taking the semantic reprojection matching error between the aerial photography model and the image photographed by the mobile phone as the measurement standard), so as to prevent the optimized pose of the above steps from falling into a local optimum as much as possible.
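The iterative part (steps a to c) may be sketched as follows; extract_3d_lines, pnl_solve and reprojection_error are hypothetical placeholders for the routines described above, not APIs defined by this application:

```python
def refine_pose(pose, model, query_seg, extract_3d_lines, pnl_solve,
                reprojection_error, max_iters=20, tol=1e-3):
    # iterate steps a-c until the semantic matching error converges
    err = reprojection_error(pose, model, query_seg)
    for _ in range(max_iters):
        lines_3d = extract_3d_lines(model, pose)                  # step a
        new_pose = pnl_solve(lines_3d, query_seg)                 # step b
        new_err = reprojection_error(new_pose, model, query_seg)  # step c
        if abs(err - new_err) < tol:
            break
        pose, err = new_pose, new_err
    return pose, err
```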

And step 206, determining the optimized first pose according to the first pose and the relative pose between the first image and the at least one second image.

Illustratively, according to the relative poses (also called inter-frame poses) between the first image and the at least one second image given by SLAM, a pose graph (PoseGraph) optimization is solved to obtain the best 6-DoF pose estimate of the first image I_0, i.e., the optimized first pose.

In this embodiment, N initial poses are determined according to the skyline of the first image, and the first pose is obtained by optimization on the basis of the N initial poses according to the building line-plane semantic information of the first image, so that the skyline of the first image and the building line-plane semantic information can be effectively combined, and the positioning success rate and the positioning accuracy of visual positioning are improved.

In the visual positioning process, the pose can be optimized and the positioning accuracy can be improved by combining the skyline of at least one second image and the line-surface semantic information of the building.

Fig. 8 is a flowchart of another visual positioning method provided in an embodiment of the present application. The method of the present embodiment relates to a terminal device and a server. On the basis of the embodiment shown in fig. 5, the present embodiment further optimizes the first pose of the first image by combining the air-ground model, so as to implement more accurate visual positioning. As shown in fig. 8, the method of the present embodiment may include:

step 301, the terminal device collects a first image.

Step 302, the terminal device sends the first image to the server.

Step 303, the server determines a first position according to the first image and the aerial photography model.

Step 304, the server determines first virtual object description information according to the first attitude, and sends the first virtual object description information to the terminal device.

And 305, displaying the virtual object corresponding to the first virtual object description information on a user interface by the terminal device.

For the explanation of step 301 to step 305, refer to step 101 to step 105 in the embodiment shown in fig. 5, which is not described herein again.

Step 306, the server judges whether a ground model corresponding to the first pose exists in the air-ground model. If yes, go to step 307.

The air-ground model comprises an aerial photography model and a ground model mapped into the aerial photography model, and the coordinate system of the ground model in the air-ground model is the same as that of the aerial photography model. The specific construction method of the air-ground model can be referred to the following specific explanation of the embodiment shown in fig. 10.

And 307, when the ground model corresponding to the first position exists, determining a second position according to the ground model.

And the positioning precision of the second pose is higher than that of the first pose.

Refined visual positioning can be carried out through the ground model corresponding to the first pose so as to determine the second pose. The refined visual positioning may comprise processing procedures such as image retrieval, feature point extraction, and feature point matching.
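For example, one common form of such refined positioning is to match query features against the 3D points of the ground model and solve a PnP problem. The following OpenCV-based sketch illustrates this under the assumption that descriptors of the ground-model points have been precomputed offline; it is one possible realization, not the only one:

```python
import cv2
import numpy as np

def refine_pose_with_ground_model(query_img, model_desc, model_pts3d, K):
    # model_desc: descriptors of ground-model 3D points (precomputed offline)
    # model_pts3d: (M, 3) point coordinates in the air-ground coordinate system
    # K: 3x3 camera intrinsic matrix of the terminal device
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(query_img, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(desc, model_desc, k=2)
    good = [m for m, n in matches if m.distance < 0.8 * n.distance]  # ratio test

    img_pts = np.float32([kps[m.queryIdx].pt for m in good])
    obj_pts = np.float32([model_pts3d[m.trainIdx] for m in good])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return (rvec, tvec) if ok else None  # the second pose, if solvable
```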

And step 308, the server determines second virtual object description information according to the second posture, and sends the second virtual object description information to the terminal equipment.

For example, the server may determine, according to the second pose, second virtual object description information for displaying a corresponding virtual object on the terminal device, for example, a guide icon of a cafe as shown in fig. 4A, which is displayed in an actual scene of the real world, i.e., on a building as shown in fig. 4A.

The second virtual object description information is determined based on the refined second pose as compared to the first virtual object description information, and the virtual object corresponding to the second virtual object description information may be a more detailed virtual object, for example, the virtual object corresponding to the first virtual object description information may be a road guide icon, and the virtual object corresponding to the second virtual object description information may be a guide icon of a shop in a street.

Step 309, the terminal device displays the virtual object corresponding to the second virtual object description information on the user interface.

The terminal device displays a virtual object corresponding to the first virtual object description information and a virtual object corresponding to the second virtual object description information on a user interface, an actual scene of a real world is displayed in the user interface, and the virtual object can be displayed on the user interface in an augmented reality mode.

In this embodiment, the server determines whether a ground model corresponding to the first pose exists in the air-ground model; when the ground model corresponding to the first pose exists, the second pose is determined according to the ground model; the server determines second virtual object description information according to the second pose and sends the second virtual object description information to the terminal device; and the terminal device displays a virtual object corresponding to the second virtual object description information on a user interface. This can improve the success rate and accuracy of visual positioning, and enables the server to accurately push virtual object description information to the terminal device.

The following describes, by way of specific example, the visual positioning method of the embodiment shown in fig. 8 with reference to fig. 9.

Fig. 9 is a schematic view of a user interface provided in an embodiment of the present application. As shown in fig. 9, user interface 901-user interface 904 are included.

As shown in the user interface 901, the terminal device may capture a first image, which is presented in the user interface 901.

Optionally, a first prompt message for prompting the user to shoot the skyline may be further displayed in the user interface 901, for example, the first prompt message may be "guarantee shooting of the skyline".

The first image in the user interface 901 includes the skyline, so that the visual positioning requirement can be satisfied. The terminal device may send the first image to the server via step 302 described above. The server may determine the first pose through steps 303 to 304, and send the terminal device the first virtual object description information corresponding to the first pose. The terminal device may display a user interface 902 according to the first virtual object description information, and a virtual object, for example, a cloud, corresponding to the first virtual object description information is presented in the user interface 902.

The server may further determine whether a ground model corresponding to the first pose exists in the air-ground model according to step 306. When the ground model corresponding to the first pose exists, the server may send an indication message to the terminal device, where the indication message is used to indicate that the ground model corresponding to the first pose exists in the air-ground model. The terminal device may display, according to the indication message, second prompt information on the user interface, where the second prompt information is used to prompt the user about selectable operation modes. For example, referring to the user interface 903, the second prompt information is "whether further positioning is needed" together with the operation icons "yes" and "no".

The user can click the "yes" operation icon, and the terminal device sends a positioning optimization request message to the server according to the operation of the user, where the positioning optimization request message is used to request calculation of the second pose. The server determines the second pose through steps 307 and 308 and sends the second virtual object description information to the terminal device, and the terminal device presents the virtual object corresponding to the second virtual object description information on the user interface, for example, the user interface 904, in which the virtual object corresponding to the first virtual object description information (e.g., the cloud) and the virtual objects corresponding to the second virtual object description information (e.g., the sun and the lightning) are presented.

The server according to the embodiments of the present application performs operations on two sides. One side is online visual positioning calculation, including the solution of the first pose and the second pose, as described in the above embodiments. The other side is offline air-ground map construction, which can be seen in fig. 10 below. The offline air-ground map construction mainly refers to: the server obtains a plurality of images uploaded by the terminal device for building a ground model; determines the first poses of the images in the aerial photography model through the improved Geo-localization visual positioning algorithm; meanwhile, performs a Structure from Motion (SFM) operation on the images to build the ground model and obtain the poses of the images in the ground model; and then aligns the aerial photography model and the ground model through semantic reprojection errors by using the first poses of the images in the aerial photography model and the corresponding poses in the ground model, so as to obtain the air-ground model.

Fig. 10 is a flowchart of a method for modeling an air-ground model according to an embodiment of the present application, and as shown in fig. 10, the method may include:

step 401, obtaining a plurality of images for constructing a ground model.

Illustratively, in a local area, a user acquires images by using a terminal device and uploads them to the server, and the server performs 3D modeling based on the SFM algorithm to obtain a ground model; that is, the images are used to construct the ground model. The acquired images need to capture the skyline. During image acquisition, the user may be prompted about the shooting requirements of the images through prompt information on a user interface as shown in fig. 11.

For example, in fig. 12, the upper image in the first column of each row is an example of an image for constructing the ground model. The point cloud of the ground model constructed based on such images may be as shown in the second column.

And 402, determining the poses of a plurality of images for constructing the ground model in the aerial photography model.

The aerial photography model may be obtained by first performing image acquisition on the application scene by using an unmanned aerial vehicle/satellite, and then performing 2.5D model construction based on oblique photography. For example, in fig. 12, the lower image in the first column of each row is an example of an image captured by a drone/satellite for constructing the aerial photography model; the point cloud of the constructed aerial photography model may be as shown in the third column.

The embodiment can determine the pose of the image in the aerial photography model through an improved Geo-localization (Geo-localization) method based on the aerial photography model as shown in fig. 6, and specifically, replace the first image in the method shown in fig. 6 with each image in step 401 of the embodiment, so as to determine the pose of each image in the aerial photography model.

And step 403, aligning the aerial photography model and the ground model according to the poses of the images in the aerial photography model and the poses of the images in the ground model to obtain the air-ground model.

The air-ground model comprises an aerial photography model and a ground model mapped into the aerial photography model, and the coordinate system of the ground model in the air-ground model is the same as that of the aerial photography model.

For example, the point cloud of the constructed air-ground model may be as shown in the fourth column of each row in fig. 12, i.e., the point cloud of the ground model and the point cloud of the aerial photography model are fused. Based on the point cloud of the air-ground model, a reconstructed mesh as shown in the fifth column of each row in fig. 12 may be obtained.

In one implementation, a plurality of coordinate transformation relationships are determined based on the poses of the plurality of images in the aerial model and the poses of the plurality of images in the ground model. And determining semantic reprojection errors of the images in the aerial photography model according to the multiple coordinate conversion relations, selecting the optimal coordinate conversion relation from the multiple coordinate conversion relations as the coordinate conversion relation of the air-ground model, wherein the coordinate conversion relation of the air-ground model is used for aligning the aerial photography model and the ground model. And mapping the ground model into the aerial photography model according to the coordinate conversion relation of the air-ground model to obtain the air-ground model. The optimal coordinate conversion relation is the coordinate conversion relation which enables the semantic re-projection error to be minimum. In other words, the ground model is registered in the aerial photography model in the above mode, and the air-ground model is obtained.

The specific implementation of aligning the aerial photography model and the ground model may be:

assume that a set of images for constructing a ground model based on SFM isI.e. a total of M images are involved in the reconstruction. The pose of each image in the ground model is recorded as

Through improved Geo-localization, obtainPose in aerial photography model

By similarity transformation based onAndthe coordinate system conversion relation between the ground model and the aerial photography model can be obtained for each image

Go throughFor example, based onComputingSemantic reprojection errors in the aerial photography model, wherein the semantic reprojection errors refer to errors of horizontal and vertical planes of a building, and the semantic reprojection errors are accumulated to obtainGo throughObtaining different semantic reprojection errors

According toTo obtainRe-projecting and rendering a semantic segmentation map on the corresponding 2.5D aerial photography model according to the pose in the aerial photography model after conversion, and rendering the semantic segmentation map and the rendered semantic segmentation mapDetermining the projection error.

GetCorresponding to the minimum and the meanNamely the coordinate system transformation relation of the optimal air-ground model.
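A sketch of this selection may be as follows; similarity_from_pose_pair and semantic_reproj_error stand for the similarity-transformation and reprojection-error routines described above, and the 4x4 matrix representation of poses and transforms is an assumption:

```python
import numpy as np

def align_air_ground(poses_ground, poses_aerial, seg_maps,
                     similarity_from_pose_pair, semantic_reproj_error):
    # one candidate transform T_i per image, from its pose pair (P_i^g, P_i^a)
    candidates = [similarity_from_pose_pair(pg, pa)
                  for pg, pa in zip(poses_ground, poses_aerial)]
    errors = []
    for T in candidates:
        # accumulate the semantic reprojection error of every image under T_i
        E = sum(semantic_reproj_error(T @ pg, seg)
                for pg, seg in zip(poses_ground, seg_maps))
        errors.append(E)
    best = int(np.argmin(errors))  # T_i with minimal accumulated error E_i
    return candidates[best], errors[best]
```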

In the embodiment, the ground model is mapped into the aerial photography model to construct the air-ground model, the air-ground model is a hierarchical model, the information of a large-scale scene of the aerial photography model and the refined information of the ground model are fused, so that the visual positioning method using the air-ground model can perform quick, efficient and applicable coarse positioning on the large-scale scene to meet the visual positioning requirements of county and district level and city level, and the refined visual positioning is performed based on the result of the coarse positioning, thereby realizing the hierarchical visual positioning and improving the accuracy of the visual positioning.

The embodiment of the present application further provides a visual positioning apparatus, which is used for executing the method steps executed by the server or the processor of the server in the above method embodiments. As shown in fig. 13, the visual positioning device may include: a transceiver module 131 and a processing module 132.

The processing module 132 is configured to obtain the first image acquired by the terminal device through the transceiver module 131.

The processing module 132 is further configured to determine a first pose based on the first image and the aerial model. Judging whether a ground model corresponding to the first attitude exists in the open space model or not; and when the ground model corresponding to the first position exists, determining a second position according to the ground model.

The air-ground model comprises an aerial photography model and a ground model mapped into the aerial photography model, the coordinate system of the ground model in the air-ground model is the same as that of the aerial photography model, and the positioning accuracy of the second pose is higher than that of the first pose.

In some embodiments, the processing module 132 is configured to: and determining an initial pose set according to the position information of the terminal equipment corresponding to the first image and the magnetometer angle deflection information. And acquiring the line-surface semantic information of the skyline and the building of the first image according to the first image. And determining N initial poses in the initial pose set according to the skyline of the first image and the aerial photography model. And determining the first pose according to the building line-surface semantic information, the N initial poses and the aerial photography model. Wherein N is an integer greater than 1.

In some embodiments, the processing module 132 is further configured to acquire, through the transceiver module 131, at least one second image acquired by the terminal device, where the shooting fields of the first image and the at least one second image have an intersection. The processing module 132 is further configured to determine N optimized initial poses according to the N initial poses, the skyline of the first image and the skyline of the at least one second image, and the relative poses between the first image and the at least one second image. And determining the first pose according to the building line and plane semantic information, the optimized N initial poses and the aerial photography model.

In some embodiments, the processing module 132 is further configured to: and determining the optimized N initial poses according to the N initial poses and the relative pose between the first image and the at least one second image.

In some embodiments, the set of initial poses includes a plurality of sets of initial poses, each set of initial poses including initial position information and initial magnetometer angle deflection information, the initial position information falling within a first threshold range, the first threshold range being determined from the position information of the terminal device, the initial magnetometer angle deflection information falling within a second threshold range, the second threshold range being determined from the magnetometer angle deflection information of the terminal device.

In some embodiments, the center value of the first threshold range is the location information of the terminal device and the center value of the second threshold range is the magnetometer angle deflection information of the terminal device.

In some embodiments, the processing module 132 is configured to: rendering skylines according to each group of initial poses and the aerial photography model respectively, and acquiring skylines corresponding to each group of initial poses; respectively calculating the matching degree of the skyline corresponding to each group of initial poses and the skyline of the first image, and determining the matching degree of each group of initial poses; and determining N initial poses in the initial pose set according to the matching degree of each group of initial poses, wherein the N initial poses are the first N initial poses with the matching degree in the initial pose set in a descending order.

In some embodiments, the processing module 132 is further configured to: and constructing an air-ground model based on the plurality of third images for constructing the ground model and the aerial photography model.

In some embodiments, the processing module 132 is configured to: determining the poses of the plurality of third images in the aerial photography model according to the aerial photography model; and determining the air-ground model according to the poses of the plurality of third images in the aerial photographing model and the poses of the plurality of third images in the ground model.

In some embodiments, the processing module 132 is configured to: determine multiple coordinate conversion relations according to the poses of the plurality of third images in the aerial photography model and the poses of the plurality of third images in the ground model; and determine the semantic reprojection errors of the third images in the aerial photography model according to the multiple coordinate conversion relations, and select the optimal coordinate conversion relation from the multiple coordinate conversion relations as the coordinate conversion relation of the air-ground model. The optimal coordinate conversion relation is the coordinate conversion relation that minimizes the semantic reprojection error.

In some embodiments, the processing module 132 is further configured to: and determining the description information of the first virtual object according to the first position. The first virtual object description information is sent to the terminal device through the transceiving module 131, and the first virtual object description information is used for displaying a corresponding virtual object on the terminal device.

In some embodiments, the processing module 132 is further configured to: and determining second virtual object description information according to the second position. The second virtual object description information is sent to the terminal device through the transceiving module 131, and the second virtual object description information is used for displaying the corresponding virtual object on the terminal device.

The visual positioning device provided in the embodiment of the present application can be used to execute the visual positioning method, and the content and effect thereof can refer to the method part, which is not described in detail in the embodiment of the present application.

Embodiments of the present application further provide a visual positioning apparatus, as shown in fig. 14, the visual positioning apparatus includes a processor 1401 and a transmission interface 1402, where the transmission interface 1402 is used to acquire a first acquired image.

The transmission interface 1402 may include a transmission interface and a reception interface. Illustratively, the transmission interface 1402 may be any type of interface according to any proprietary or standardized interface protocol, for example, a High Definition Multimedia Interface (HDMI), a Mobile Industry Processor Interface (MIPI), a MIPI-standardized Display Serial Interface (DSI), a Video Electronics Standards Association (VESA) standardized Embedded DisplayPort (eDP), a DisplayPort (DP), or a V-By-One interface (a digital interface standard developed for image transmission), as well as various wired or wireless interfaces, optical interfaces, and the like.

The processor 1401 is configured to call program instructions stored in the memory to execute the visual positioning method according to the above method embodiments; for the content and effect thereof, reference may be made to the method part, which is not described in detail in this embodiment of the application. Optionally, the apparatus further comprises a memory 1403. The processor 1401 may be a single-core processor or a multi-core processor group, the transmission interface 1402 is an interface for receiving or sending data, and the data processed by the visual positioning apparatus may include audio data, video data, or image data. Illustratively, the visual positioning apparatus may be a processor chip.

Further embodiments of the present application also provide a computer storage medium, which may include computer instructions, and when the computer instructions are executed on an electronic device, the electronic device may be caused to perform the steps performed by the server in the above method embodiments.

Further embodiments of the present application also provide a computer program product, which when run on a computer causes the computer to perform the steps performed by the server in the above-mentioned method embodiments.

Other embodiments of the present application further provide an apparatus, where the apparatus has a function of implementing a server behavior in the foregoing method embodiments. The functions can be realized by hardware, and the functions can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the above functions, for example, an acquisition unit or module, a determination unit or module.

The embodiment of the present application further provides a visual positioning apparatus, which is used for executing the method steps executed by the terminal device or the processor of the terminal device in the above method embodiments. As shown in fig. 15, the visual positioning device may include: a processing module 151 and a transceiver module 152.

And a processing module 151, configured to acquire a first image and display the first image on a user interface, where the first image includes a photographed skyline.

The processing module 151 is further configured to send the first image to the server through the transceiver module 152.

The transceiver module 152 is further configured to receive first virtual object description information sent by the server, where the first virtual object description information is determined according to a first pose, and the first pose is determined according to the skyline and building line-plane semantic information of the first image and the aerial photography model.

The processing module 151 is further configured to display a virtual object corresponding to the first virtual object description information in an overlapping manner on the user interface.

In some embodiments, the processing module 151 is further configured to: before the first image is collected, first prompt information is displayed on a user interface, and the first prompt information is used for prompting a user to shoot a skyline.

In some embodiments, the transceiver module 152 is further configured to: and receiving an indication message sent by the server, wherein the indication message is used for indicating that a ground model corresponding to the first position exists in the space model, the ground model is used for determining the second position, the space model comprises an aerial photography model and a ground model mapped into the aerial photography model, and the coordinate system of the ground model is the same as that of the aerial photography model. The processing module 151 is further configured to display a second prompting message on the user interface according to the indication message, where the second prompting message is used to prompt the user of the operation modes that can be selected.

In some embodiments, the processing module 151 is further configured to: receiving a repositioning instruction input by a user through a user interface or on a hardware button, and sending a positioning optimization request message to the server through the transceiver module 152 in response to the repositioning instruction, wherein the positioning optimization request message is used for requesting to calculate the second pose. The transceiver module 152 is further configured to receive second virtual object description information sent by the server, where the second virtual object description information is determined according to a second pose determined according to the ground model corresponding to the first pose, and the positioning accuracy of the second pose is higher than that of the first pose.

The visual positioning device provided in the embodiment of the present application can be used to execute the visual positioning method, and the content and effect thereof can refer to the method part, which is not described in detail in the embodiment of the present application.

Fig. 16 is a schematic structural diagram of a visual processing apparatus according to an embodiment of the present application. As shown in fig. 16, the vision processing apparatus 1600 may be the terminal device involved in the above embodiments. The visual processing device 1600 includes a processor 1601 and a transceiver 1602.

Optionally, the vision processing apparatus 1600 further includes a memory 1603. The processor 1601, the transceiver 1602 and the memory 1603 may communicate with each other via an internal connection path to transmit a control signal and/or a data signal.

Among them, the memory 1603 is used for storing computer programs. The processor 1601 is configured to execute the computer program stored in the memory 1603, thereby implementing each function in the above-described apparatus embodiments.

Alternatively, the memory 1603 may be integrated in the processor 1601 or separate from the processor 1601.

Optionally, the vision processing apparatus 1600 can also include an antenna 1604 for transmitting signals output by the transceiver 1602. Alternatively, the transceiver 1602 receives signals through an antenna.

Optionally, the vision processing apparatus 1600 may also include a power supply 1605 for providing power to various devices or circuits in the terminal device.

In addition, in order to further improve the functions of the terminal device, the vision processing apparatus 1600 may further include one or more of an input unit 1606, a display unit 1607 (which may also be considered as an output unit), an audio circuit 1608, a camera 1609, a sensor 1610, and the like. The audio circuitry may also include a speaker 16081, a microphone 16082, etc., which are not described in detail.

Further embodiments of the present application further provide a computer storage medium, which may include computer instructions, and when the computer instructions are executed on an electronic device, the electronic device may be configured to perform the steps performed by the terminal device in the foregoing method embodiments.

Further embodiments of the present application further provide a computer program product, which when run on a computer, causes the computer to perform the steps performed by the terminal device in the above method embodiments.

Other embodiments of the present application further provide an apparatus, where the apparatus has a function of implementing a behavior of a terminal device in the foregoing method embodiments. The functions can be realized by hardware, and the functions can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the above functions, for example, an acquisition unit or module, a transmission unit or module, and a display unit or module.

The embodiment of the present application further provides an air-ground model modeling apparatus, which is used for executing the method steps of the embodiment shown in fig. 10. The air space model modeling apparatus may include: the device comprises an acquisition module and a processing module.

The acquisition module is used for acquiring a plurality of images for constructing the ground model.

And the processing module is used for determining the poses of the plurality of images for constructing the ground model in the aerial photography model.

And the processing module is further used for aligning the aerial photography model and the ground model according to the poses of the images in the aerial photography model and the poses of the images in the ground model so as to obtain an air-ground model.

The air-ground model comprises an aerial photography model and a ground model mapped into the aerial photography model, and the coordinate system of the ground model in the air-ground model is the same as that of the aerial photography model.

In some embodiments, the processing module is to: determining various coordinate transformation relations according to the poses of the images in the aerial photography model and the poses of the images in the ground model; determining semantic reprojection errors of the images in the aerial photography model according to the multiple coordinate conversion relations, selecting the optimal coordinate conversion relation from the multiple coordinate conversion relations as the coordinate conversion relation of the air-ground model, wherein the coordinate conversion relation of the air-ground model is used for aligning the aerial photography model and the ground model; mapping the ground model into an aerial photography model according to the coordinate conversion relation of the air-ground model to obtain the air-ground model; the optimal coordinate conversion relation is the coordinate conversion relation which enables the semantic re-projection error to be minimum.

The visual positioning apparatus provided in the embodiment of the present application may be used to perform the method steps shown in fig. 10, and the content and effect of the method steps may refer to the method part, which is not described in detail in the embodiment of the present application.

After the air-ground model modeling device obtains the air-ground model, the air-ground model can be configured into a corresponding server, and the server provides a visual positioning function service for the terminal equipment.

The processor mentioned in the above embodiments may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The processor may be a general purpose processor, a Digital Signal Processor (DSP), an application-specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in the embodiments of the present application may be directly implemented by a hardware encoding processor, or implemented by a combination of hardware and software modules in the encoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.

The memory referred to in the various embodiments above may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example but not limitation, many forms of RAM are available, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), and a direct rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, without being limited to, these and any other suitable types of memory.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative: the division of the units is only one logical function division, and other divisions may be used in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual couplings or direct couplings or communication connections may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application essentially, or the part thereof contributing to the prior art, may be embodied in the form of a software product that is stored in a storage medium and includes instructions for causing a computer device (a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
