Real-time automatic 3D modeling method based on photographs

Document No.: 1744937    Publication date: 2019-11-26

Note: This technology, "Real-time automatic 3D modeling method based on photographs", was created by Zhao Ming, Xiang Zhongzheng and Cai Pei on 2019-08-23. Abstract: The invention discloses a real-time automatic 3D modeling method based on photographs, comprising: S1) fixing a mobile device and a camera on the same shooting bracket; S2) acquiring multiple camera or mobile-device images while the bracket is moved, obtaining the position and orientation of each shooting point, and forming a route map in a unified coordinate system; S3) performing 3D modeling of the camera photos of each shooting point in real time on the mobile device; S4) placing the 3D models of all shooting points in the same three-dimensional coordinate system according to the positions and orientations obtained in S2, and stitching the joints between the 3D models of multiple shooting points to form an overall 3D model covering all shooting points; S5) automatically generating a panoramic roaming effect on the mobile device. The invention supports a variety of capture devices, automatically and accurately records the relative position of each shooting point and the direction of the camera lens, and automatically generates a 3D model that can be roamed from the inside as well as a 2D floor plan.

1. A real-time automatic 3D modeling method based on photographs, characterized by comprising the following steps:

S1) fixing a mobile device with a camera function and a camera on the same shooting bracket;

S2) acquiring multiple images from the camera or the mobile device while the bracket is moved, and obtaining the position and orientation of each shooting point in combination with the sensors of the camera or the mobile device, so as to form a route map in a unified coordinate system;

S3) performing 3D modeling of the camera photos of each shooting point in real time on the mobile device;

S4) on the mobile device, placing the 3D models of all shooting points in the same three-dimensional coordinate system according to the positions and orientations obtained in S2, and stitching the joints between the 3D models of the multiple shooting points to form an overall 3D model covering all shooting points;

S5) automatically generating a panoramic roaming effect on the mobile device.

2. The real-time automatic 3D modeling method based on photographs according to claim 1, characterized in that step S2 uses a positioning system based on the mobile device: using images from the mobile device, feature point matching is performed on photos of nearby shooting points to find the relative displacement between shooting points, a route map containing all shooting points in the same coordinate system is formed, and the position and orientation of each shooting point are provided.

3. The real-time automatic 3D modeling method based on photographs according to claim 1, characterized in that step S2 uses a positioning system based on the camera: using images from the camera, feature point matching is performed on photos of nearby shooting points to find the relative displacement between shooting points, a route map containing all shooting points in the same coordinate system is formed, and the position and orientation of each shooting point are provided.

4. The real-time automatic 3D modeling method based on photographs according to claim 2 or claim 3, characterized in that step S2 further comprises obtaining information including movement acceleration, speed and direction from the sensors of the mobile device or the camera, and using it to correct the route map.

5. The real-time automatic 3D modeling method based on photographs according to claim 2, characterized in that step S2 further comprises obtaining the angle between the camera lens and the orientation of the mobile phone, by: running both the phone-based positioning system and the camera-based positioning system during an initialization phase and moving the bracket a certain distance, whereupon each system provides a motion vector and the angle between the two vectors is the angle between the camera lens and the phone; or manually rotating the camera preview image or a captured photo to specify the angle aligned with the phone's orientation; or matching the preview images or captured photos of the phone and the camera with an image recognition algorithm to find the angle; or using an additional marker, for example a scale added on the bracket that forms a fixed angle with the phone's mounting direction, and then recognizing the marker in the camera preview image or photo to calculate the angle between the camera lens and the phone's orientation.

6. The real-time automatic 3D modeling method based on photographs according to claim 1, characterized in that step S3 comprises:

S31) identifying, based on deep learning, the image regions of the floor, ceiling, walls and roof in the photo;

S32) dividing the identified image into blocks based on image processing, each block being approximately treated as a plane, the blocks of the floor and ceiling lying on horizontal planes and the blocks of the walls lying on vertical planes, and solving the equation of each plane to generate a 3D model; for two planes that intersect in the photo, their intersection line is used as a constraint so that the error between the computed intersection line and the actually observed intersection line is minimized.

7. The real-time automatic 3D modeling method based on photographs according to claim 1, characterized in that step S3 further comprises: for indoor photos, identifying the room corners in the photo with computer vision, and connecting the corners to form a simple model of the room.

8. The real-time automatic 3D modeling method based on photographs according to claim 1, characterized in that step S3 further comprises generating a 2D floor plan as follows:

S33) projecting each face of the 3D model downward onto the 2D top-view plane, and merging the projections into a polygon;

S34) correcting and simplifying the resulting polygon, including keeping only the vertices of the polygon on the 2D floor plan and removing small concavities and convexities;

S35) correcting and simplifying the resulting polygon, including, for indoor scenes, detecting the straight lines in the picture with computer vision to determine the direction of the walls, and snapping edges that are nearly parallel or perpendicular to the wall direction onto the corresponding directions;

S36) identifying the positions of doors, including using a deep learning method to identify the positions and sizes of doors on the panorama, or determining the position of a door from the intersection of the camera's movement track during the whole shooting process with the room outline.

9. The real-time automatic 3D modeling method based on photographs according to claim 1, characterized in that step S4 comprises:

S41) obtaining a transformation matrix from the position and orientation of each shooting point, and converting the local coordinates of the 3D model and 2D floor plan of each single shooting point into global world coordinates, so as to obtain the overall 3D model and 2D floor plan of all shooting points;

S42) performing a unified correction on the 2D floor plans of the multiple shooting points, including statistically correcting the wall line directions of all shooting points so that the wall lines of all rooms that lie within a certain deviation range become parallel;

S43) performing unified processing on the 2D floor plans of the multiple shooting points, automatically removing overlapping parts and filling holes.

10. The real-time automatic 3D modeling method based on photographs according to claim 9, characterized by further comprising allowing the shooting results to be manually reviewed and edited, and providing review and editing tools for the corresponding 2D floor plans and 3D models.

Technical field

The present invention relates to 3D modeling methods, and more particularly to a real-time automatic 3D modeling method based on photographs.

Background art

The present invention concerns a photo-based scheme for real-time modeling of three-dimensional space. The method can be used to generate 3D models and 2D floor plans of a single space or of multiple spaces, and comprises a complete workflow system consisting of shooting hardware, shooting software, editing software and cloud display software.

Traditionally, photo-based modeling has used the following two methods, both of which have obvious shortcomings:

Traditional method a): directly generating the 3D model with a camera that can capture depth information. This method depends on more complex hardware, which raises equipment cost, generally requires a professional photographer to operate, and is therefore hard to popularize.

Traditional method b): shooting two photos at nearby locations around one shooting point, preferably separated by centimetres to decimetres and matched and positioned continuously, and then modeling with MVS (Multi-View Stereo, see https://github.com/cdcseacave/openMVS). Its advantage is that the whole process is automatic and needs no manual intervention, but its disadvantages are also obvious:

Disadvantage 1: the computation is heavy and cannot be run in real time on a mobile device; the photos are usually uploaded to a server (cloud/PC) with stronger computing power to run the modeling algorithm.

Disadvantage 2: there is no concrete guidance on how far apart the shots should be; if the shots are too close, the process becomes tedious and time-consuming, and if the user relies only on pairwise visibility and intuition, modeling may fail without the system being able to warn the user during shooting.

To overcome these disadvantages, the present invention uses an innovative approach: deep learning and image processing are used to model each single shooting point, which can run in real time within the limited computing power of a mobile device; to improve real-time performance, only the room outline is modeled, and objects such as furniture and decorations are not reconstructed; a real-time positioning system is built so that the models of multiple shooting points can be placed in the same coordinate system according to their positions and orientations; the individual models of the multiple shooting points are then optimized and their joints handled properly to generate the overall 3D model, the 2D floor plan and the panoramic roaming effect; and manual review and editing of the shooting results is allowed.

The present invention supports a wide range of shooting methods, including but not limited to a mobile phone with a fisheye lens, a panoramic camera, a camera with a fisheye lens, an ordinary mobile phone and an ordinary digital camera, and it is low in cost.

Ordinary photo (definition): a photo taken with an ordinary digital camera (including ordinary SLR, mirrorless and point-and-shoot cameras), a panoramic camera, a camera with a fisheye lens, an ordinary mobile phone, or a phone or camera equipped with a fisheye lens. Unlike binocular vision, ordinary photos cannot recover three-dimensional information from two photos taken at the same shooting point. Ordinary photos are hereafter simply referred to as photos.

When a panoramic camera is used, the output is usually a panorama (for the format and definition of panoramas, see https://baike.baidu.com/item/360度全景图/8060867). Some computer vision and image algorithms, such as straight-line detection, require the panorama to be converted into an undistorted picture. In the following, the terms photo and picture include panoramic photos and the undistorted pictures converted from them.
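For illustration, one possible way to perform this conversion is to reproject the equirectangular panorama into a pinhole (perspective) view. The sketch below uses NumPy and OpenCV; the function name, default field of view and axis conventions are assumptions made only for this example. A view produced this way can then be passed to an ordinary straight-line detector.

```python
import numpy as np
import cv2  # OpenCV is assumed to be available

def pano_to_perspective(pano, yaw_deg=0.0, pitch_deg=0.0,
                        fov_deg=90.0, out_w=1024, out_h=768):
    """Render an undistorted perspective view from an equirectangular panorama."""
    fov = np.radians(fov_deg)
    f = 0.5 * out_w / np.tan(0.5 * fov)            # pinhole focal length in pixels

    # Build a viewing ray for every output pixel (x right, y down, z forward)
    x = np.arange(out_w) - 0.5 * out_w
    y = np.arange(out_h) - 0.5 * out_h
    xv, yv = np.meshgrid(x, y)
    rays = np.stack([xv, yv, np.full_like(xv, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays to the requested viewing direction (pitch about x, yaw about y)
    pitch, yaw = np.radians(pitch_deg), np.radians(yaw_deg)
    rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch),  np.cos(pitch)]])
    ry = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    rays = rays @ (ry @ rx).T

    # Convert the rays to longitude/latitude and sample the panorama
    lon = np.arctan2(rays[..., 0], rays[..., 2])
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))
    h, w = pano.shape[:2]
    u = ((lon / (2 * np.pi) + 0.5) * w).astype(np.float32)
    v = ((lat / np.pi + 0.5) * h).astype(np.float32)
    return cv2.remap(pano, u, v, cv2.INTER_LINEAR, borderMode=cv2.BORDER_WRAP)
```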

Summary of the invention

The technical problem to be solved by the invention is to provide a real-time automatic 3D modeling method based on photographs that can support a variety of capture devices, automatically and accurately record the relative position of each shooting point and the direction of the camera lens, and automatically generate a 3D model that can be roamed from the inside and a 2D floor plan.

The technical solution adopted by the present invention to solve the above technical problem is to provide a real-time automatic 3D modeling method based on photographs, comprising the following steps: S1) fixing a mobile device with a camera function and a camera on the same shooting bracket; S2) acquiring multiple images from the camera or the mobile device while the bracket is moved, and obtaining the position and orientation of each shooting point in combination with the sensors of the camera or the mobile device, so as to form a route map in a unified coordinate system; S3) performing 3D modeling of the camera photos of each shooting point in real time on the mobile device; S4) on the mobile device, placing the 3D models of all shooting points in the same three-dimensional coordinate system according to the positions and orientations obtained in S2, and stitching the joints between the 3D models of the multiple shooting points to form an overall 3D model covering all shooting points; S5) automatically generating a panoramic roaming effect on the mobile device.

Further, step S2 uses a positioning system based on the mobile device: using images from the mobile device, feature point matching is performed on photos of nearby shooting points to find the relative displacement between shooting points, a route map containing all shooting points in the same coordinate system is formed, and the position and orientation of each shooting point are provided.

Further, step S2 uses a positioning system based on the camera: using images from the camera, feature point matching is performed on photos of nearby shooting points to find the relative displacement between shooting points, a route map containing all shooting points in the same coordinate system is formed, and the position and orientation of each shooting point are provided.

Further, step S2 further comprises obtaining information including movement acceleration, speed and direction from the sensors of the mobile device or the camera, and using it to correct the route map.

Further, step S2 further comprises obtaining the angle between the camera lens and the orientation of the mobile phone: during an initialization phase, the phone-based positioning system and the camera-based positioning system are run simultaneously and the bracket is moved a certain distance, whereupon each system provides a motion vector and the angle between the two vectors is the angle between the camera lens and the phone; or the camera preview image or a captured photo is manually rotated to specify the angle aligned with the phone's orientation; or the preview images or captured photos of the phone and the camera are matched by an image recognition algorithm to find the angle; or an additional marker is used, for example a scale added on the bracket that forms a fixed angle with the phone's mounting direction, and the marker is recognized in the camera preview image or photo to calculate the angle between the camera lens and the phone's orientation.

Further, step S3 comprises: S31) identifying, based on deep learning, the image regions of the floor, ceiling and walls in the photo; S32) dividing the identified image into blocks based on image processing, each block being approximately treated as a plane, the blocks of the floor and ceiling lying on horizontal planes and the blocks of the walls lying on vertical planes, and solving the equation of each plane to generate a 3D model; for two planes that intersect in the photo, their intersection line is used as a constraint so that the error between the computed intersection line and the actually observed intersection line is minimized.

Further, step S3 further comprises: for indoor photos, identifying the room corners in the photo with computer vision, and connecting the corners to form a simple model of the room.

Further, step S3 further comprises generating a 2D floor plan as follows: S33) projecting each face of the 3D model onto the 2D top-view plane, and merging the projections into a polygon; S34) correcting and simplifying the resulting polygon, including keeping only the vertices of the polygon on the 2D floor plan and removing small concavities and convexities; S35) correcting and simplifying the resulting polygon, including, for indoor scenes, detecting the straight lines in the picture with computer vision to determine the direction of the walls, and snapping edges that are nearly parallel or perpendicular to the wall direction onto the corresponding directions; S36) identifying the positions of doors, including using a deep learning method to identify the positions and sizes of doors on the panorama, or determining the position of a door from the intersection of the camera's movement track during the whole shooting process with the room outline.

Further, step S4 comprises: S41) obtaining a transformation matrix from the position and orientation of each shooting point, and converting the local coordinates of the 3D model and 2D floor plan of each single shooting point into global world coordinates, so as to obtain the overall 3D model and 2D floor plan of all shooting points; S42) performing a unified correction on the 2D floor plans of the multiple shooting points, including statistically correcting the wall line directions of all shooting points so that the wall lines of all rooms within a certain deviation range become parallel; S43) performing unified processing on the 2D floor plans of the multiple shooting points, automatically removing overlapping parts and filling holes.

Further, the method further comprises allowing the shooting results to be manually reviewed and edited, and providing review and editing tools for the corresponding 2D floor plans and 3D models.

Compared with the prior art, the present invention has the following beneficial effects: the real-time automatic 3D modeling method based on photographs provided by the invention supports a variety of capture devices, runs in real time within the limited computing power of a mobile device, automatically and accurately records the relative position of each shooting point and the direction of the camera lens, and automatically generates a 3D model that can be roamed from the inside and a 2D floor plan. In addition, the invention achieves a high modeling success rate; each room needs only one photo, which is efficient and gives a good user experience; real-time performance is good, with modeling performed during shooting; what you see is what you get, so the user can choose shooting points by referring to the real-time modeling result and avoid missing shots; and the models exclude furniture, which helps to generate a correct floor plan.

Brief description of the drawings

Fig. 1 is a schematic flow diagram of the real-time automatic 3D modeling method based on photographs according to the present invention.

Detailed description of the embodiments

The invention will be further described with reference to the accompanying drawings and examples.

Fig. 1 is a schematic flow diagram of the real-time automatic 3D modeling method based on photographs according to the present invention.

Referring to Fig. 1, the real-time automatic 3D modeling method based on photographs provided by the invention comprises the following steps:

S1) fixing a mobile device with a camera function (including a mobile phone, tablet computer, etc.) and a camera (including panoramic, fisheye and ordinary digital cameras) on the same shooting bracket (including a tripod).

S2) acquiring multiple images from the camera or the mobile device while the bracket is moved, and obtaining the position and orientation of each shooting point through image processing algorithms combined with the sensors of the camera or the mobile device, so as to form a route map in a unified coordinate system.

S3) performing 3D modeling of the camera photos of each shooting point on the mobile device, in real time, using deep learning algorithms or other methods, to obtain the 3D model and 2D floor plan of each shooting point.

S4) on the mobile device, placing the 3D models of all shooting points in the same three-dimensional coordinate system according to the positions and orientations obtained in S2, stitching the joints between the 3D models of the multiple shooting points to form the 3D model and 2D floor plan of the multiple shooting points, applying a unified correction to the directions of all room walls, and optimizing overlaps and holes. In a normal house, most rooms consist of parallel walls, but walls that should be parallel in the room model generated from a single shooting point will show some deviation (they will not be exactly parallel); by considering the wall directions of multiple rooms, a main direction is found and the directions of all room walls are adjusted to it.

S5) automatically generating a panoramic roaming effect on the mobile device.

One. Hardware system

The present invention requires the mobile phone and the camera to be fixed on the same bracket (including a tripod).

Two. System initialization

The present invention records the shooting position of each shooting point and the orientation of the camera using one of the following two methods:

Method one) a positioning system based on the mobile phone: the phone's images (photos, video or preview frames) are used, feature point matching is performed on photos of nearby shooting points to find the displacement between shooting points, and the sensors of the mobile device (including the gyroscope, accelerometer, compass, etc.) are preferably used for correction; a route map is generated in this way, and the position and orientation of each shooting point are provided;

Method two) a positioning system based on the camera: the camera's images (photos, video or preview frames) are used, feature point matching is performed on photos of nearby shooting points to find the displacement between shooting points, preferably at intervals of centimetres to decimetres and with continuous matching and positioning, and the camera's sensors (including the gyroscope, accelerometer, compass, etc.) are preferably used for correction; a route map is generated in this way, and the position and orientation of each shooting point are provided.

Comparison of the two methods: method one is based on the mobile phone; because the phone has many sensors, it can generally provide more accurate absolute coordinate information and can measure the absolute distance between shooting points, but it requires an additional initialization procedure before use.

Method two can only provide relative coordinates of the shooting positions, because cameras often lack a full set of sensors, but it requires no additional initialization to align the coordinate axes of the route map and the single-shooting-point 3D models; in addition, if the shooting path contains loops, the coordinate error of method two is smaller.
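The feature-point matching that both positioning systems rely on can be sketched as follows with OpenCV. The sketch estimates the relative rotation and the translation direction between two nearby shooting points from ordinary photos, assuming the camera intrinsic matrix K is known; the function name and parameter values are illustrative, and the translation is recovered only up to scale, which is one reason the sensor corrections described above remain useful.

```python
import numpy as np
import cv2  # OpenCV is assumed to be available

def relative_pose(img1, img2, K):
    """Relative rotation R and unit-scale translation direction t between two
    nearby shooting points, estimated from ORB feature matches."""
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC rejects mismatched feature pairs
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```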

When method one is used, the coordinates provided by the phone are in a coordinate system based on the phone itself (generally one axis points perpendicular to the ground and the other two point front-back and left-right), while the coordinate system of the 3D model generated from the panoramic photo is based on the camera, and the axes of the two do not coincide. To solve this problem, the system must be initialized, either manually or automatically:

Manual: the user uses an additional measuring tool, or a scale added to equipment such as the bracket, and manually enters the angle between the camera lens and the phone's orientation;

Automatic: during the initialization phase, method one and method two are run simultaneously and the device is moved a certain distance, preferably 1 to 3 metres; each system then provides a motion vector, and the angle between the two vectors is the angle between the camera lens and the phone.
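A minimal sketch of this automatic initialization, assuming both systems report their motion vectors in (or projected onto) a common horizontal plane; the function name and the sample vectors are illustrative only.

```python
import numpy as np

def lens_phone_angle(v_phone, v_camera):
    """Signed angle (degrees) between the horizontal motion vector reported by the
    phone-based positioning system and the one reported by the camera-based system,
    used as the angle between the camera lens and the phone orientation."""
    v1 = np.asarray(v_phone, dtype=float)[:2]
    v2 = np.asarray(v_camera, dtype=float)[:2]
    cos_a = np.dot(v1, v2)
    sin_a = v1[0] * v2[1] - v1[1] * v2[0]   # 2D cross product keeps the sign
    return np.degrees(np.arctan2(sin_a, cos_a))

# example: the camera frame reports the same move rotated by roughly 30 degrees
print(lens_phone_angle([0.0, 1.0], [0.5, 0.866]))   # ~ -30
```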

Three. Determination of the position and orientation of shooting points

Once the above system is running, the position and orientation of the photographer can be provided in real time.

Four. Generation of the single-shooting-point 3D model

As noted in the Background, the two traditional photo-based modeling methods have obvious shortcomings: method a) relies on depth cameras, which means more complex hardware, higher equipment cost and the need for professional photographers; method b) shoots pairs of nearby photos and models them with MVS, which is automatic but computationally too heavy for real-time modeling on a mobile device, gives no concrete guidance on shooting intervals, and cannot warn the user during shooting when modeling is about to fail.

To overcome these disadvantages, the present invention uses an innovative approach: in order to improve real-time performance and achieve a what-you-see-is-what-you-get effect, usually only the room outline (the wall positions) is modeled, without reconstructing models of attached objects in the room such as furniture and decorations. Specifically:

i. Based on deep learning, the floor, ceiling, walls, roof and other parts of the photo are identified; the planes on which these parts lie are either fully determined (floor, ceiling) or have a normal lying in the horizontal plane (walls).

ii. Based on image processing, the image is divided into blocks, each of which can be approximately treated as a plane. For floor blocks the plane equation is known: assuming the y axis points vertically upward, the floor plane is y + 1 = 0. For wall blocks the plane equation is Ax + Cz + D = 0, for ceiling blocks it is y + D = 0, and for other blocks it is Ax + By + Cz + D = 0. Generating the 3D model is the process of solving these plane equations. For two planes that intersect in the picture there is an intersection line in the picture; taking this line as a constraint, solving the equations becomes a minimization problem in which, for the two intersecting planes, the error between the computed intersection line and the actually observed intersection line is minimized (a simple sketch of this plane fitting is given after item iii below);

iii. Other methods can also be used to model the scene. For example, indoors, computer vision can be used to identify the room corners in the image, and the corners can be connected to form a simple model of the room.
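The plane solving referred to in item ii can be sketched, in a heavily simplified form, as fitting one vertical wall plane Ax + Cz + D = 0 to the observed wall/floor intersection: assuming the camera height above the floor is known and the boundary pixels found by the segmentation in item i have been converted to unit viewing rays, the rays are intersected with the floor plane and a line is fitted to the resulting points by least squares. The joint minimization over all planes and all observed intersection lines is omitted, and the names and the default camera height are assumptions.

```python
import numpy as np

def fit_wall_plane(boundary_rays, camera_height=1.4):
    """Fit a vertical wall plane A*x + C*z + D = 0 (y axis pointing up) from unit
    camera rays through pixels on the observed wall/floor boundary."""
    rays = np.asarray(boundary_rays, dtype=float)
    # Intersect each ray p = t * ray with the floor plane y = -camera_height
    t = -camera_height / rays[:, 1]
    pts = rays * t[:, None]                      # 3D points on the wall/floor line

    # Total least-squares line fit in the horizontal (x, z) plane
    xz = np.column_stack([pts[:, 0], pts[:, 2]])
    centroid = xz.mean(axis=0)
    _, _, vt = np.linalg.svd(xz - centroid)
    a, c = vt[-1]                                # unit normal of the fitted line
    d = -(vt[-1] @ centroid)
    return a, c, d
```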

iv. 2D floor plan generation: after the 3D model of each shooting point has been obtained, a floor plan can further be generated; this is often required, especially in indoor applications. The method is:

1. project each face of the 3D model onto the 2D top-view plane;

2. merge these projections into one large polygon;

3. correct and simplify the resulting polygon (steps 1 to 3 are sketched in code after this list), for example:

a) the resulting polygon often has a large number of vertices; it can be simplified so that only the vertices of the polygon are kept on the 2D floor plan and small concavities and convexities are removed;

b) for indoor scenes, straight lines in the picture can be detected with computer vision to determine the direction of the walls, and edges that are nearly parallel or perpendicular to the wall direction are snapped onto the corresponding directions.

4. door identification: for indoor scenes, doors need to be marked on the 2D floor plan, and either of the following two methods can be used:

a) directly use a deep learning method to identify the positions and sizes of doors on the panorama;

b) since the positioning system based on the phone or the camera provides not only the position and orientation of each shooting point but also the movement track of the camera during the whole shooting process, the intersections of this track with the outline of the room itself must be the positions of doors.
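Steps 1 to 3 above can be sketched with the Shapely library as follows; face vertices are assumed to be (x, y, z) triples with y vertical, and the simplification tolerance is an assumed value in metres. Vertical wall faces project to zero-area footprints and are skipped, so the outline comes mainly from the floor and ceiling faces.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def floor_plan_outline(faces, simplify_tol=0.05):
    """Project the faces of a single shooting point's 3D model onto the horizontal
    plane, merge the footprints into one polygon and simplify it."""
    footprints = []
    for face in faces:                        # face: list of (x, y, z) vertices
        if len(face) < 3:
            continue
        poly = Polygon([(x, z) for x, y, z in face])   # drop the vertical axis
        if poly.is_valid and poly.area > 1e-6:
            footprints.append(poly)

    merged = unary_union(footprints)
    if merged.geom_type == "MultiPolygon":    # keep the largest connected piece
        merged = max(merged.geoms, key=lambda p: p.area)
    # Simplification removes the many tiny vertices and small concavities
    return merged.simplify(simplify_tol, preserve_topology=True)
```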

Five. Generation of the 3D models and 2D floor plans of multiple shooting points

a) Step four produces the 3D model of each shooting point, whose coordinates are relative to that shooting point. To combine these models and generate the complete 3D model and 2D floor plan, the following is done: first, since the position and orientation of each shooting point are known, a transformation matrix can be obtained that converts the local coordinates of each single model into global world coordinates.
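A sketch of this transformation, assuming the pose of each shooting point is given as a position and a heading angle (rotation about the vertical axis) in the coordinate system of the route map; the names and axis conventions are assumptions for the example.

```python
import numpy as np

def local_to_world(points, position, heading_deg):
    """Transform model vertices from a shooting point's local frame into the
    global frame, given the shooting point's position (x, y, z) and heading."""
    theta = np.radians(heading_deg)
    # 4x4 homogeneous transform: rotation about the vertical (y) axis, then translation
    T = np.array([
        [ np.cos(theta), 0.0, np.sin(theta), position[0]],
        [ 0.0,           1.0, 0.0,           position[1]],
        [-np.sin(theta), 0.0, np.cos(theta), position[2]],
        [ 0.0,           0.0, 0.0,           1.0],
    ])
    pts = np.asarray(points, dtype=float)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    return (pts_h @ T.T)[:, :3]
```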

b) On this basis, the models and floor plans can be further corrected.

i. The model of each single shooting point has already been corrected using straight lines, but errors usually remain. Once multiple points have been captured, a statistical method can be used to apply a unified correction to all shooting points, for example using methods such as RANSAC (Random Sample Consensus) to find the most reasonable correction lines, so that the wall lines of all rooms within a certain deviation range become parallel and small misalignment angles are avoided;

ii. Because of modeling errors, putting together the 3D models and 2D floor plans of multiple shooting points may produce overlaps, holes and similar defects; on the 2D floor plan, overlapping parts can be removed and holes filled automatically.
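One simple statistical realization of the unified correction in item i, assuming a mostly rectilinear floor plan: estimate a single dominant wall direction from the wall segments of all rooms (longer walls weighted more) and snap walls that are nearly parallel or perpendicular to it onto that direction. A RANSAC search over candidate directions would be a drop-in alternative; the function names and the tolerance are assumptions.

```python
import numpy as np

def dominant_wall_direction(segments):
    """Dominant wall direction in degrees, folded into [0, 90), from wall segments
    given as ((x1, y1), (x2, y2)) pairs in the shared world frame."""
    angles, weights = [], []
    for (x1, y1), (x2, y2) in segments:
        angles.append(np.arctan2(y2 - y1, x2 - x1) % (np.pi / 2))  # fold to 0..90 deg
        weights.append(np.hypot(x2 - x1, y2 - y1))                 # long walls count more
    a = np.asarray(angles) * 4.0              # map the pi/2 period onto 2*pi
    w = np.asarray(weights)
    mean = np.arctan2((w * np.sin(a)).sum(), (w * np.cos(a)).sum()) / 4.0
    return np.degrees(mean % (np.pi / 2))

def snap_angle(angle_deg, dominant_deg, tolerance_deg=10.0):
    """Snap a wall angle to the dominant direction or its perpendicular when it is
    within the tolerance, so nearly parallel walls become exactly parallel."""
    for target in (dominant_deg, dominant_deg + 90.0):
        if abs((angle_deg - target + 90.0) % 180.0 - 90.0) < tolerance_deg:
            return target % 180.0
    return angle_deg
```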

Six. Instant display

The whole process above can be carried out entirely and automatically on the mobile phone; once complete, display software on the phone can be used to view and roam the result immediately, and the result can be uploaded to the cloud and shared with others.

Seven. Manual editing

Since errors may occur at each stage (the positioning system, the single-shooting-point 3D modeling algorithm, and the optimization of the multi-shooting-point 3D models and 2D floor plans), and in order to obtain a more accurate model, the present invention allows the user to edit the shooting results by hand and provides review and editing tools.

Although the present invention has been disclosed above in terms of preferred embodiments, they are not intended to limit the invention. Any person skilled in the art may make minor modifications and improvements without departing from the spirit and scope of the present invention, so the scope of protection of the invention shall be defined by the claims.
