VR panorama construction display method, system and terminal based on three-dimensional model

Document No.: 1954875  Publication date: 2021-12-10

Reading note: This technique, "VR panorama construction display method, system and terminal based on three-dimensional model" (基于三维模型的VR全景图构造显示方法、系统及终端), was designed and created by 朱明�, 李渴, 赵见, 袁松, 李�杰, 徐益飞, 肖春红, 邱瑞成, 黎宇阳, 何其桧 and 牛秋晨 on 2021-11-09. Its main content: The invention discloses a VR panorama construction display method, system and terminal based on a three-dimensional model, relating to the technical field of image processing. The key points of the technical scheme are: creating an origin in the three-dimensional model; acquiring a plurality of mutually associated view angle directions to obtain a view angle sequence; intercepting original views from the three-dimensional model to obtain a first view set; pre-cutting the corresponding original views in the first view set according to the longitude and latitude coordinate information acquired from the view angle sequence to obtain a second view set; performing VR panorama splicing on the cut views in the second view set according to the corresponding longitude and latitude coordinate information to obtain a panorama; and uploading the panorama to a panorama platform, which then automatically generates a 360-degree VR panorama. The invention creatively combines the three-dimensional model with VR technology: for a target to be processed with high complexity and wide coverage, a series of image materials can be intercepted directly from the corresponding three-dimensional model and, combined with panorama splicing technology, panorama display can be realized rapidly.

1. The VR panorama construction display method based on the three-dimensional model is characterized by comprising the following steps of:

acquiring a three-dimensional model of a target to be processed, and creating at least one origin point in the three-dimensional model;

acquiring a plurality of mutually associated view angle directions, with the origin as the view angle base point and longitude and latitude coordinate changes as the adjusting directions, to obtain a view angle sequence;

intercepting at least one original view from the three-dimensional model according to each view direction in the view sequence to obtain a first view set;

pre-cutting the corresponding original views in the first view set according to the longitude and latitude coordinate information acquired from the view angle sequence to obtain a second view set;

performing VR panorama splicing on the cut views in the second view set according to the corresponding longitude and latitude coordinate information to obtain a panorama;

and uploading the panorama to a panorama platform, which then automatically generates a 360-degree VR panorama.

2. The three-dimensional model-based VR panorama construction display method of claim 1, wherein each of the origins is configured with a positioning tag; and if an origin is accidentally moved during the original-view interception process, triggering the positioning tag returns the origin from its current position to its initial position.

3. The three-dimensional model-based VR panorama construction display method of claim 1, wherein the obtaining of the view sequence specifically comprises:

acquiring a plurality of view angle directions with different longitude coordinates at the same latitude, intercepting a plurality of original views one by one in the order of the longitude-coordinate changes, and forming the plurality of original views into a latitude view group;

repeating the operation to obtain a plurality of latitude view groups at different latitudes;

generating spatial tags in one-to-one correspondence with the original views according to the longitude and latitude coordinates of each view angle direction;

and associating the spatial tags of adjacent original views in the longitude and latitude directions, so that the cut views can be quickly positioned through the associated tags during VR panorama splicing.

4. The three-dimensional model-based VR panorama construction display method of claim 3, wherein the variation interval of the view angle direction in the longitude direction and the latitude direction is in the range of 20-30 degrees, keeping the number of original views in each latitude view group greater than 12; and the number of original views intercepted from each origin is 80-120.

5. The three-dimensional model-based VR panorama construction display method of any one of claims 1-4, wherein the process of pre-cropping the original view to form a cropped view specifically comprises:

loading the corresponding original view into a three-dimensional coordinate space according to the view direction and the view base point;

respectively calculating to obtain a radius corresponding to an upper latitude cutting boundary and a radius corresponding to a lower latitude cutting boundary according to the latitude values of the visual angle directions corresponding to the original views;

determining an upper latitude cutting plane for pre-cutting in a three-dimensional coordinate space according to the radius corresponding to the upper latitude cutting boundary, and determining a lower latitude cutting plane for pre-cutting in the three-dimensional coordinate space according to the radius corresponding to the lower latitude cutting boundary;

and cutting the original view according to the upper latitude cutting plane and the lower latitude cutting plane to obtain a cut view.

6. The three-dimensional model-based VR panorama construction display method of claim 5, wherein a radius calculation formula corresponding to the upper latitude clipping boundary and a radius calculation formula corresponding to the lower latitude clipping boundary are specifically as follows:

wherein r1 represents the radius corresponding to the upper latitude clipping boundary in the original view; r2 represents the radius corresponding to the lower latitude clipping boundary in the original view; R represents the maximum spherical radius of the original view; θ represents the latitude value of the view angle direction corresponding to the intercepted original view; k1 represents the offset coefficient of the upper latitude clipping boundary; k2 represents the offset coefficient of the lower latitude clipping boundary; and δ represents the standard deviation degree of the latitude clipping boundary.

7. The three-dimensional model-based VR panorama construction display method of claim 6, wherein the offset coefficient of the upper latitudinal clipping boundary and the offset coefficient of the lower latitudinal clipping boundary are calculated by the following formula:

wherein θ1 and θ2 respectively represent the latitude values of the view angle directions corresponding to the original views at two adjacent latitude coordinates.

8. The three-dimensional model-based VR panorama construction display method of claim 1, further comprising performing a preliminary revision to the panorama, the preliminary revision comprising:

if a partial image is missing from the panorama and the coincidence degree between adjacent cut views is smaller than the standard coincidence degree, analyzing the spatial tags in the adjacent cut views corresponding to the missing image to obtain the spatial coordinates of the missing image, re-intercepting the image from the three-dimensional model according to those spatial coordinates, and then fusing and correcting;

and control point calibration: if pixel confusion exists during splicing and fusion of adjacent cut views, correcting the matched control points in the adjacent cut views, wherein each cut view is configured with 3-5 control points.

9. A VR panorama construction display system based on a three-dimensional model, characterized by comprising:

the model building module is used for obtaining a three-dimensional model of a target to be processed and creating at least one origin point in the three-dimensional model;

the view angle distribution module is used for acquiring a plurality of mutually associated view angle directions, with the origin as the view angle base point and longitude and latitude coordinate changes as the adjusting directions, to obtain a view angle sequence;

the image intercepting module is used for intercepting at least one original view from the three-dimensional model according to each view direction in the view sequence to obtain a first view set;

the image cutting module is used for pre-cutting the corresponding original view in the first view set according to the longitude and latitude coordinate information acquired by the view sequence to obtain a second view set;

the panorama splicing module is used for carrying out VR panorama splicing on the cut views in the second view set according to corresponding longitude and latitude coordinate information to obtain a panorama;

and the panorama display module is used for uploading the panorama to a panorama platform, which then automatically generates a 360-degree VR panorama.

10. A computer terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method for constructing and displaying the VR panorama based on the three-dimensional model according to any one of claims 1 to 8 when executing the program.

Technical Field

The invention relates to the technical field of image processing, and in particular to a VR panorama construction display method, system and terminal based on a three-dimensional model.

Background

A panoramic view aims to represent the surrounding environment as fully as possible through wide-angle presentation in the form of drawings, photographs, video, three-dimensional models, and the like. Panoramic technology captures image information of an entire scene with a professional camera such as a fisheye camera, or uses pictures rendered by modeling software, stitches the pictures with software, and plays them back with a dedicated player. In other words, flat pictures or computer-modeled pictures are turned into a 360-degree full view for virtual reality browsing, so that a two-dimensional plane image simulates a real three-dimensional space presented to the observer.

Traditional panoramic technology requires shooting the surroundings without blind angles using a professional fisheye camera and obtaining the panorama through a feature-point matching and combination algorithm. However, because panoramic display targets often have wide coverage and complex internal structure, and professional cameras such as fisheye cameras are expensive to use, panoramic display realized with traditional technology has high input cost, a long implementation period, and high implementation difficulty. For example, coverage in traffic engineering is extremely wide: shooting with professional cameras involves extensive deployment work and great deployment difficulty, and data transmission is slow in areas with a poor network environment. As another example, when displaying the internal environment of a building, the limited visual range means that with conventional panoramic technology a separate professional camera must be arranged even for a small, enclosed area.

In recent years, BIM technology has been widely applied in many fields, and combining BIM with VR technology has broad application prospects for panoramic display; however, traditional panoramic technology cannot be applied directly to a virtual three-dimensional model. Therefore, how to design a VR panorama construction display method, system and terminal based on a three-dimensional model is an urgent problem to be solved.

Disclosure of Invention

In order to overcome the defects of the prior art, the invention aims to provide a VR panorama construction display method, system and terminal based on a three-dimensional model.

The technical purpose of the invention is realized by the following technical scheme:

in a first aspect, a three-dimensional model-based VR panorama construction display method is provided, which includes the following steps:

acquiring a three-dimensional model of a target to be processed, and creating at least one origin point in the three-dimensional model;

acquiring a plurality of mutually associated view angle directions, with the origin as the view angle base point and longitude and latitude coordinate changes as the adjusting directions, to obtain a view angle sequence;

intercepting at least one original view from the three-dimensional model according to each view direction in the view sequence to obtain a first view set;

pre-cutting the corresponding original views in the first view set according to the longitude and latitude coordinate information acquired from the view angle sequence to obtain a second view set;

performing VR panorama splicing on the cut views in the second view set according to the corresponding longitude and latitude coordinate information to obtain a panorama;

and uploading the panorama to a panorama platform, which then automatically generates a 360-degree VR panorama.

Further, each origin is configured with a positioning tag; if an origin is accidentally moved during the original-view interception process, triggering the positioning tag returns the origin from its current position to its initial position.

Further, the process of acquiring the view sequence specifically includes:

acquiring a plurality of view angle directions with different longitude coordinates at the same latitude, intercepting a plurality of original views one by one in the order of the longitude-coordinate changes, and forming the plurality of original views into a latitude view group;

repeating the operation to obtain a plurality of latitude view groups at different latitudes;

generating spatial tags in one-to-one correspondence with the original views according to the longitude and latitude coordinates of each view angle direction;

and associating the spatial tags of adjacent original views in the longitude and latitude directions, so that the cut views can be quickly positioned through the associated tags during VR panorama splicing.

Further, the variation interval of the view angle direction in the longitude direction and the latitude direction ranges from 20 to 30 degrees, the number of original views in each latitude view group is kept greater than 12, and the number of original views intercepted from each origin is 80-120.

Further, the process of pre-cropping the original view to form the cropped view specifically includes:

loading the corresponding original view into a three-dimensional coordinate space according to the view direction and the view base point;

respectively calculating to obtain a radius corresponding to an upper latitude cutting boundary and a radius corresponding to a lower latitude cutting boundary according to the latitude values of the visual angle directions corresponding to the original views;

determining an upper latitude cutting plane for pre-cutting in a three-dimensional coordinate space according to the radius corresponding to the upper latitude cutting boundary, and determining a lower latitude cutting plane for pre-cutting in the three-dimensional coordinate space according to the radius corresponding to the lower latitude cutting boundary;

and cutting the original view according to the upper latitude cutting plane and the lower latitude cutting plane to obtain a cut view.

Further, the radius calculation formula corresponding to the upper latitude cutting boundary and the radius calculation formula corresponding to the lower latitude cutting boundary are specifically as follows:

wherein r1 represents the radius corresponding to the upper latitude clipping boundary in the original view; r2 represents the radius corresponding to the lower latitude clipping boundary in the original view; R represents the maximum spherical radius of the original view; θ represents the latitude value of the view angle direction corresponding to the intercepted original view; k1 represents the offset coefficient of the upper latitude clipping boundary; k2 represents the offset coefficient of the lower latitude clipping boundary; and δ represents the standard deviation degree of the latitude clipping boundary.

Further, the calculation formula of the offset coefficient of the upper latitude clipping boundary and the offset coefficient of the lower latitude clipping boundary is as follows:

wherein θ1 and θ2 respectively represent the latitude values of the view angle directions corresponding to the original views at two adjacent latitude coordinates.

Further, the method further comprises performing a preliminary correction on the panorama, wherein the preliminary correction comprises:

if a partial image is missing from the panorama and the coincidence degree between adjacent cut views is smaller than the standard coincidence degree, the spatial tags in the adjacent cut views corresponding to the missing image are analyzed to obtain the spatial coordinates of the missing image, the image is intercepted again from the three-dimensional model according to those spatial coordinates, and it is then fused and corrected;

and control point calibration: if pixel confusion exists during splicing and fusion of adjacent cut views, the matched control points in the adjacent cut views are corrected, with each cut view configured with 3-5 control points.

In a second aspect, a three-dimensional model-based VR panorama construction display system is provided, comprising:

the model building module is used for obtaining a three-dimensional model of a target to be processed and creating at least one origin point in the three-dimensional model;

the view angle distribution module is used for acquiring a plurality of mutually associated view angle directions, with the origin as the view angle base point and longitude and latitude coordinate changes as the adjusting directions, to obtain a view angle sequence;

the image intercepting module is used for intercepting at least one original view from the three-dimensional model according to each view direction in the view sequence to obtain a first view set;

the image cutting module is used for pre-cutting the corresponding original view in the first view set according to the longitude and latitude coordinate information acquired by the view sequence to obtain a second view set;

the panorama splicing module is used for carrying out VR panorama splicing on the cut views in the second view set according to corresponding longitude and latitude coordinate information to obtain a panorama;

and the panorama display module is used for uploading the panorama to a panorama platform, which then automatically generates a 360-degree VR panorama.

In a third aspect, a computer terminal is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the program, the processor implements the method for constructing and displaying the VR panorama based on the three-dimensional model according to any one of the first aspect.

Compared with the prior art, the invention has the following beneficial effects:

1. the invention creatively combines the three-dimensional model with VR technology: for a target to be processed with high complexity and wide coverage, a series of image materials can be intercepted directly from the corresponding three-dimensional model and, combined with panorama splicing technology, panorama display can be realized rapidly;

2. by creating origins in the three-dimensional model, the invention reduces errors in the intercepted image materials caused by overall movement of the three-dimensional model during interception, and the positioning tags enable quick resetting of an origin within the three-dimensional model;

3. by attaching a spatial tag to each image material and associating the tags, one-click positioning and splicing can be achieved during panorama stitching; when the panorama is corrected, secondary interception of image materials can be completed directly from the spatial tags, keeping the overall operation simple and convenient;

4. by pre-cutting the original views at different longitude and latitude coordinates using the calculated radii of the upper and lower latitude cutting boundaries, the invention makes the overlap between adjacent cut views uniformly distributed during fusion splicing and adaptively adjustable as the longitude and latitude coordinates change, so that the whole panorama can be spliced smoothly.

Drawings

The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:

FIG. 1 is a flow chart in an embodiment of the invention;

fig. 2 is a block diagram of a system in an embodiment of the invention.

Detailed Description

In order to make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to embodiments and the accompanying drawings. The exemplary embodiments and their descriptions are intended only to explain the invention, not to limit it.

Example 1: the VR panorama construction display method based on the three-dimensional model, as shown in FIG. 1, comprises the following steps:

s1: acquiring a three-dimensional model of a target to be processed, and creating at least one origin point in the three-dimensional model;

s2: acquiring a plurality of mutually related visual angle directions by taking an original point as a visual angle base point and taking longitude and latitude coordinate changes as adjusting directions to obtain a visual angle sequence;

s3: intercepting at least one original view from the three-dimensional model according to each view direction in the view sequence to obtain a first view set;

s4: pre-cutting corresponding original views in the first view set according to longitude and latitude coordinate information acquired by the view sequence to obtain a second view set;

s5: performing VR panorama splicing on the cut views in the second view set according to corresponding longitude and latitude coordinate information to obtain a panorama;

s6: and uploading the panorama to a panorama platform, and then automatically generating the VR panorama displayed by 360 degrees.

It should be noted that each origin corresponds to the center of a virtual theodolite sphere, and original views are acquired from the origin in every direction around it. The original views obtained at each origin can be formed into one panorama; multiple panoramas can be combined into a sand table, and during display the VR panorama can be switched freely among the origins of the sand table.

In step S2, each origin is configured with a positioning tag; if an origin is accidentally moved during the original-view interception process, triggering the positioning tag returns the origin from its current position to its initial position.

In step S2, the process of acquiring the view sequence specifically includes:

s201: acquiring a plurality of visual angle directions with different longitude coordinates under the same latitude, acquiring a plurality of original views one by one according to the change sequence of the longitude coordinates by the plurality of visual angle directions, and forming a latitude view group by the plurality of original views;

s202: repeating the operation to obtain a plurality of latitude view groups at different latitudes;

s203: generating space labels corresponding to the corresponding original views one by one according to the longitude coordinates and the latitude coordinates of each view angle direction;

s204: and associating the space tags in the adjacent original views in the longitude direction and the latitude direction, and quickly positioning the cut views through the associated space tags during VR panorama splicing.

In step S2, the variation interval of the view angle direction in the longitude direction and the latitude direction is in the range of 20°-30°, the number of original views in each latitude view group is kept greater than 12, and the number of original views intercepted from each origin is 80-120.
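To make the view-sequence construction of steps S201-S204 concrete, the following is a minimal Python sketch (not from the patent; the step sizes, tag format, and data layout are illustrative assumptions) that enumerates latitude bands and longitude steps around one origin and associates each view's spatial tag with its neighbors in the longitude and latitude directions:

```python
from itertools import product

def build_view_sequence(lat_step=30, lon_step=20):
    """Enumerate view directions (latitude, longitude) around one origin
    and attach a spatial tag to each, linking neighbors in the longitude
    and latitude directions (the pole views are handled separately)."""
    lats = range(-60, 61, lat_step)      # latitude bands between the poles
    lons = range(0, 360, lon_step)       # one full circle per band
    views = {}
    for lat, lon in product(lats, lons):
        views[(lat, lon)] = {"tag": f"lat{lat:+04d}_lon{lon:03d}",
                             "neighbors": []}
    for (lat, lon), v in views.items():
        east = (lat, (lon + lon_step) % 360)   # wraps around the band
        north = (lat + lat_step, lon)
        v["neighbors"].append(views[east]["tag"])
        if north in views:                     # top band has no northern neighbor
            v["neighbors"].append(views[north]["tag"])
    return views

seq = build_view_sequence()
# 5 bands x 18 views = 90 band views; together with the two pole shots this
# stays within the 80-120 views per origin stated in the text
```

With lat_step=30 and lon_step=20, each band holds 18 views (more than 12) and both intervals stay within the 20-30 degree range given above.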

For example, after the origin position is determined, the view angle is first switched to look straight down, i.e., the south-pole view at 90° south latitude as seen from the origin. A first image is captured and stored using the screen-capture snapshot function in software such as Infraworks; this first image can serve as the ground image.

After the ground image is determined, the path-setting function in Infraworks is used to raise the view angle by about 30° from the south-pole view, placing it at about 60° south latitude; this current view becomes the starting view of the 60° south latitude band, and the screen-capture snapshot function is used to capture and store a second image as the starting point of that band. The path-setting function is then used to rotate horizontally by about 30°, translating the view angle by about 30° while keeping the centroid unchanged, and another snapshot is created and stored. For example, if the starting point of the 60° south latitude band is at 60° east longitude and 60° south latitude, the view angle is translated to 90° east longitude and 60° south latitude. An image is intercepted every 30° of rotation in the same direction, and this is repeated around the full circle until the view returns to the starting point at 60° south latitude.

By analogy, the path-setting function in Infraworks is used to raise the view angle from 60° south latitude to 30° south latitude, and images are acquired and stored around one full circle. Image materials are then acquired and stored in turn at the equatorial view, the 30° north latitude view, and the 60° north latitude view. In this embodiment, the image resolution is the 1920 × 1080 high-definition format.

In addition, the sky image is the counterpart of the ground image, i.e., the view toward the north pole from the origin. It is slightly less important because the subjective view angle is generally not directed upward. It can be acquired by raising the view angle and intercepting around one circle as above, or it can be handled in a correspondingly simplified way with the eyedropper function in software such as Photoshop.

In step S4, adjacent image materials must overlap sufficiently to achieve a good splicing effect. Accordingly, the process of pre-cutting an original view to form a cut view specifically includes:

s401: loading the corresponding original view into a three-dimensional coordinate space according to the view direction and the view base point;

s402: respectively calculating to obtain a radius corresponding to an upper latitude cutting boundary and a radius corresponding to a lower latitude cutting boundary according to the latitude values of the visual angle directions corresponding to the original views;

s403: determining an upper latitude cutting plane for pre-cutting in a three-dimensional coordinate space according to the radius corresponding to the upper latitude cutting boundary, and determining a lower latitude cutting plane for pre-cutting in the three-dimensional coordinate space according to the radius corresponding to the lower latitude cutting boundary;

s404: and cutting the original view according to the upper latitude cutting plane and the lower latitude cutting plane to obtain a cut view.

It should be noted that because each group of latitude views is acquired along a circular path, each latitude view group can form a spherical band, and the upper and lower latitude cutting boundaries in this embodiment are essentially parallel to the two edges of that band.
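One way to picture how each band's longitude and latitude information drives placement during splicing is an equirectangular layout, in which longitude maps to the horizontal axis and latitude to the vertical axis. The sketch below is illustrative only (the canvas width and the equirectangular projection choice are assumptions, not taken from the patent):

```python
def equirect_position(lat_deg, lon_deg, pano_w=7200):
    """Map a view direction's latitude/longitude tag to the pixel
    position of its crop center on a 2:1 panorama canvas."""
    pano_h = pano_w // 2                      # final panorama is 2:1
    x = (lon_deg % 360) / 360.0 * pano_w      # longitude spans the width
    y = (90.0 - lat_deg) / 180.0 * pano_h     # north pole at the top row
    return int(x), int(y)
```

For instance, the equatorial band lands on the canvas's vertical midline, while the ground image (90° south latitude) maps to the bottom row.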

The radius calculation formula corresponding to the upper latitude cutting boundary and the radius calculation formula corresponding to the lower latitude cutting boundary are specifically as follows:

wherein r1 represents the radius corresponding to the upper latitude clipping boundary in the original view; r2 represents the radius corresponding to the lower latitude clipping boundary in the original view; R represents the maximum spherical radius of the original view; θ represents the latitude value of the view angle direction corresponding to the intercepted original view; k1 represents the offset coefficient of the upper latitude clipping boundary; k2 represents the offset coefficient of the lower latitude clipping boundary; and δ represents the standard deviation degree of the latitude clipping boundary.

The calculation formulas of the offset coefficient of the upper latitude cutting boundary and the offset coefficient of the lower latitude cutting boundary are as follows:

wherein θ1 and θ2 respectively represent the latitude values of the view angle directions corresponding to the original views at two adjacent latitude coordinates.

The three-dimensional model-based VR panorama construction display method further includes preliminary correction and peripheral repair of the panorama. Preliminary correction includes, but is not limited to, handling missing images and control-point calibration, and can be performed with software such as Infraworks and PTGui; peripheral repair can be performed with software such as Photoshop.

Missing images: if part of the panorama is missing and the coincidence degree between adjacent cropped views is lower than the standard coincidence degree, the spatial coordinates of the missing image are derived from the spatial labels of the adjacent cropped views, the image is re-captured from the three-dimensional model according to those coordinates, and the result is fused back in for correction.
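The missing-image check can be sketched as below. Here the coincidence degree is estimated by comparing the strip where the right edge of one cropped view should overlap the left edge of its neighbour; the strip width, the pixel-difference tolerance, and the standard coincidence degree of 0.6 are illustrative assumptions, not values from the patent:

```python
import numpy as np

def overlap_ratio(view_a, view_b, strip=16, tol=10):
    """Coincidence degree between adjacent cropped views: the fraction
    of near-identical pixels in the assumed overlap strip."""
    a = view_a[:, -strip:].astype(float)   # right strip of the left view
    b = view_b[:, :strip].astype(float)    # left strip of the right view
    return float(np.mean(np.abs(a - b) < tol))

def needs_recapture(view_a, view_b, standard=0.6):
    """True when the coincidence degree falls below the standard value,
    i.e. the missing part should be re-captured from the 3D model."""
    return overlap_ratio(view_a, view_b) < standard
```

When `needs_recapture` fires, the spatial labels of the two views would supply the coordinates for the secondary capture described above.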

Control point calibration: if pixels are disordered when adjacent cropped views are stitched and fused, the matched control points in the adjacent cropped views are corrected; each cropped view is provided with 3-5 control points.
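As a sketch of how 3-5 matched control points can drive the correction, the translation that best aligns one cropped view with its neighbour can be estimated by least squares. Restricting the correction to a pure translation is an assumption for illustration (PTGui and similar tools fit richer models):

```python
import numpy as np

def estimate_shift(points_src, points_dst):
    """Least-squares translation mapping the control points of one
    cropped view onto their matches in the adjacent view (each view
    carries 3-5 control points, as described above)."""
    src = np.asarray(points_src, dtype=float)
    dst = np.asarray(points_dst, dtype=float)
    if not 3 <= len(src) <= 5 or len(src) != len(dst):
        raise ValueError("expected 3-5 matched control point pairs")
    # For a pure translation, the least-squares solution is simply the
    # mean of the per-point displacements.
    return (dst - src).mean(axis=0)
```

Applying the returned shift to one view before fusion removes the pixel disorder at the seam, up to the translation-only assumption.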

Peripheral repair: if, after correction, the edge of the panorama is uneven and the outermost contour shows slight jagging, the periphery is repaired. For example, in Photoshop the Magic Wand tool can be used to blend the colors of the same layer uniformly, and the Eyedropper tool can be used to pick up the color at the jagged outer edge so that it blends evenly. Finally, the image is saved in JPG format at an aspect ratio of 2:1, completing the final panorama.
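The final 2:1 aspect ratio can be enforced programmatically before saving. The helper below computes the smallest enlarged canvas that satisfies the ratio (assuming an even target width); the function name and the enlarge-rather-than-crop choice are assumptions for illustration:

```python
def panorama_size_2_to_1(width, height):
    """Smallest canvas at or above (width, height) that satisfies the
    final 2:1 width-to-height ratio required before saving as JPG."""
    if width >= 2 * height:
        # Wide enough already: derive the height from the width.
        return width, width // 2
    # Otherwise widen the canvas to twice the existing height.
    return 2 * height, height
```

The resulting dimensions could then be applied with an image library (for example, Pillow's resize and JPEG save) to produce the finished panorama file.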

The completed panorama can be displayed on a number of panorama platforms; 720yun is taken here as a brief example. Uploading the completed panorama to the 720yun web platform automatically generates a 360-degree VR panorama. On this basis, further adjustment and optimization can enrich the VR panorama content and enhance the interactive experience. The hotspot function can add pictures from the model into the VR panorama as feature points, so that a specific target can be magnified and inspected. The special-effect function can simulate weather, making the VR panorama vivid. The sand-table function can upload an overall map as a sand table, where different points on the sand table are different viewing angles, each point contains a complete panorama, and the panoramas integrated on the sand table form one large VR panorama. Finally, the result is uploaded to the cloud to generate a work link or a work QR code, and the VR panorama can then be viewed from any networked client.

Example 2: a VR panorama construction and display system based on a three-dimensional model. In this embodiment, the VR panorama construction and display system can implement the VR panorama construction and display method described in Embodiment 1. As shown in Fig. 2, the system comprises a model construction module, a view-angle allocation module, an image capture module, an image cropping module, a panorama stitching module, and a panorama display module.

The model construction module is used to obtain a three-dimensional model of the target to be processed and create at least one origin in the three-dimensional model; the view-angle allocation module is used to acquire a plurality of mutually associated view-angle directions, with the origin as the view-angle base point and changes in longitude and latitude coordinates as the adjusting direction, to obtain a view-angle sequence; the image capture module is used to capture at least one original view from the three-dimensional model for each view-angle direction in the sequence, obtaining a first view set; the image cropping module is used to pre-crop the corresponding original views in the first view set according to the longitude and latitude coordinate information of the view-angle sequence, obtaining a second view set; the panorama stitching module is used to perform VR panorama stitching on the cropped views in the second view set according to the corresponding longitude and latitude coordinate information, obtaining a panorama; and the panorama display module is used to upload the panorama to a panorama platform, after which a 360-degree VR panorama is generated automatically.
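The view-angle allocation described above can be sketched as a simple enumeration: one circular path of longitudes per latitude band, anchored at the origin. The step sizes, class name, and degree ranges below are hypothetical, chosen only to illustrate the structure of the view-angle sequence:

```python
from dataclasses import dataclass

@dataclass
class ViewAngle:
    longitude: float  # degrees, 0 <= lon < 360
    latitude: float   # degrees, -90 <= lat <= 90

def build_view_sequence(lon_step=30.0, lat_step=30.0):
    """Enumerate mutually associated view-angle directions around the
    origin: step through latitude bands and, inside each band, step
    through longitudes, yielding one circular path per band."""
    sequence = []
    lat = -90.0
    while lat <= 90.0:
        lon = 0.0
        while lon < 360.0:
            sequence.append(ViewAngle(lon, lat))
            lon += lon_step
        lat += lat_step
    return sequence
```

A production implementation would capture only a single view at each pole, where all longitudes coincide; the sketch keeps the uniform grid for simplicity.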

Working principle: the invention creatively combines the three-dimensional model with VR technology; for a target to be processed that is highly complex and covers a wide area, a series of image materials can be captured directly from the corresponding three-dimensional model, and the panorama can be displayed quickly in combination with panorama stitching. By establishing an origin in the three-dimensional model, errors in the captured image materials caused by overall movement of the model during capture are reduced, and the origin in the three-dimensional model can be quickly reset via the positioning label. Spatial labels are attached to each image material and arranged according to their relevance, so one-click positioning and stitching is possible during panorama stitching; when correcting the panorama, secondary capture of image materials can be completed directly according to the spatial labels, keeping the overall operation simple and convenient. By pre-cropping the original views at different longitude and latitude coordinates using the calculated radii of the upper and lower latitude cutting boundaries, the coincidence degree between adjacent cropped views during fusion and stitching is distributed evenly and adapts to changes in the longitude and latitude coordinates, so that the whole panorama can be stitched smoothly.

As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
