Three-dimensional digital earth construction method for avionic display control


Designed and created by Sun Liang, 2021-08-02.

Abstract

The invention relates to a three-dimensional digital earth construction method for avionics display control, comprising the following steps. Reading a terrain file: read and load the terrain file, parse it into terrain data, and generate the original terrain data in memory. Calculating vertex data: compute and generate the vertex attribute data. Transmitting vertex data: calculate the space size and perform the data transfer. Creating shaders: generate the source code of a vertex shader and a fragment shader, and create the corresponding executable logic units. Rendering preparation and rendering: pass the corresponding parameters to the shaders before rendering, pass the parsed VBO data to the vertex shader, call the drawing command glDrawElements to render, execute the vertex shader and fragment shader, and output the rendering result to a window to display the three-dimensional digital earth. By adopting data segmentation and reuse, the invention ensures that all vertex data is written into VBOs and avoids failures caused by allocating large blocks of video memory.

1. A three-dimensional digital earth construction method for avionic display control, characterized in that the construction method comprises the following steps:

reading a terrain file: reading the loaded terrain file using the GDAL library, parsing it into terrain data, and generating the original terrain data in memory;

calculating vertex data: calculating the vertex attributes, including three-dimensional world coordinates, normals and texture coordinates, and generating the vertex attribute data;

transmitting vertex data: calculating the space size and performing the data transfer;

creating shaders: generating the source code of a vertex shader and a fragment shader in the GLSL (OpenGL Shading Language) language, and creating the corresponding executable logic units in the GPU (graphics processing unit);

preparing for rendering: passing the corresponding parameters to the shaders before rendering, and passing the parsed VBO data to the vertex shader;

rendering: calling the drawing command glDrawElements to render, executing the created vertex shader and fragment shader with the modern OpenGL rendering method, and outputting the rendering result to a window to display the three-dimensional digital earth.

2. The three-dimensional digital earth construction method for avionic display control according to claim 1, characterized in that the calculating vertex data specifically comprises:

knowing the longitude and latitude of the start point, calculating the longitude and latitude of a vertex from its two-dimensional coordinates, the height value of the vertex being the value at its two-dimensional coordinates in the raster metadata, and then calculating the world coordinates with reference to the camera's world coordinate formula;

selecting two vertices adjacent to the current vertex, calculating the vectors from the current vertex to each of them, and taking the cross product of the two vectors to obtain the normal vector;

generating a texture from the generated color table file, calculating the texture coordinates of the vertex, and then performing texture sampling to obtain the vertex color.

3. The three-dimensional digital earth construction method for avionic display control according to claim 2, characterized in that generating a texture from the generated color table file, calculating the texture coordinates of the vertex, and then performing texture sampling to obtain the vertex color specifically comprises:

generating a one-dimensional color table file with a third-party tool;

creating a one-dimensional texture from the one-dimensional color table file, the width of the texture representing the total number of colors, and writing the texture parameters in memory into video memory through the glTexImage1D function; since the texture coordinate range in OpenGL is 0 to 1, the color table is mapped to the range 0 to 1, texture coordinate 0 corresponding to the first color in the color table and 1 to the last;

inputting the height value, maximum height value and minimum height value of the current vertex and calculating the texture coordinate of the vertex, so that the vertex height is mapped to the range 0 to 1, consistent with the texture coordinates in video memory;

inputting the texture number and the texture coordinates of the vertex, and calling a sampling function in the fragment shader to obtain the vertex color from the texture.

4. The three-dimensional digital earth construction method for avionic display control according to claim 1, characterized in that the transmitting vertex data specifically comprises:

calculating the space size; creating a VBO object with glGenBuffers and setting the type of the VBO to GL_ARRAY_BUFFER; storing the vertex data in the VBO for use by the vertex shader; allocating a data store for the currently bound VBO with the glBufferData function and writing the vertex data from memory into it; and setting the last parameter of glBufferData to GL_STATIC_DRAW, indicating that the data store is initialized only once, which helps the GPU allocate space.

5. The three-dimensional digital earth construction method for avionic display control according to claim 4, characterized in that if the VBO fails to allocate its storage space, the large block of data is divided into several small blocks of equal capacity by data segmentation and reuse, the small blocks are then transferred to video memory, and rendering is finally performed.

6. The three-dimensional digital earth construction method for avionic display control according to claim 5, characterized in that the data segmentation and reuse step comprises:

setting the capacity of a single data block to 6 MB, and calculating the number of vertex rows it can hold from the data block capacity;

calculating the number of data blocks required from the total number of vertices, and creating the same number of VBOs;

allocating memory space according to the data block capacity, and traversing the vertex array;

reading the vertices of the accommodated number of rows, computing the vertices, and transmitting them in sequence.

7. The three-dimensional digital earth construction method for avionic display control according to claim 1, characterized in that the creating shaders specifically comprises:

creating a shader program with the glCreateProgram function (and the shader objects with glCreateShader), setting the source code of the vertex shader and fragment shader through the glShaderSource function, and compiling the source code with the glCompileShader function;

attaching the vertex shader and fragment shader to the shader program with the glAttachShader function, and linking the shader program through the glLinkProgram function.

8. The three-dimensional digital earth construction method for avionic display control according to claim 1, characterized in that the rendering preparation specifically comprises:

running the shader program through the glUseProgram function, and enabling the input attributes in the shader program;

setting the projection matrix, view matrix and model matrix variables for the vertex shader, and setting the corresponding lighting parameters and texture number for the fragment shader;

binding the VBO data into video memory, passing the parsed VBO data to the vertex shader, and having the vertex shader associate the data layout with its input attributes.

9. The three-dimensional digital earth construction method for avionic display control according to claim 1, characterized in that the modern OpenGL rendering method comprises the following steps:

a vertex shader step: converting the 3D coordinates passed in as an array into other 3D coordinates, and feeding them to the primitive assembly step;

a primitive assembly step: assembling all the points into the specified primitive shape, and feeding the result to the geometry shader;

a geometry shader step: taking the series of vertices that form a primitive as input, generating new vertices to construct new primitives and thereby other primitive shapes, and feeding the resulting shapes to the rasterization step;

a rasterization step: mapping the primitives to the corresponding pixels on the final screen, and generating fragments for use by the fragment shader step;

a fragment shader step: calculating the final color of a pixel from the 3D scene data it contains;

a test and blending step: testing the depth and stencil values of a fragment to determine the position of the pixel, and testing the alpha value and blending the objects.

Technical Field

The invention relates to the technical field of avionic display control, and in particular to a three-dimensional digital earth construction method for avionic display control.

Background

The display and control interface is the medium for interaction and information exchange between a system and its user; it converts between the internal form of information and a form acceptable to humans. The avionics display control system, one of the important parts of an avionics system, has passed through five stages from its beginnings to the present: first-generation aircraft instruments, the electromechanical servo instrument period, integrated guidance instruments, CRT (cathode ray tube) electro-optical display instruments, and the modern display control system.

However, current mainstream avionics display interface design platforms (such as VAPS XT) provide only visual development of two-dimensional display control interfaces and have no three-dimensional view capability, and a two-dimensional interface cannot meet the demands of the modern battlefield situation. The three-dimensional digital earth, built on traditional geographic information system technology, uses virtual reality to present a real three-dimensional scene; it is intuitive, visual and efficient, and it is the most important component of an avionics display control device. How to construct a three-dimensional digital earth in a three-dimensional avionics display control device therefore urgently needs to be solved at the present stage.

Disclosure of Invention

The invention aims to overcome the defects of the prior art by providing a three-dimensional digital earth construction method for avionic display control that realizes the construction of a three-dimensional digital earth in a three-dimensional avionic display control device.

The purpose of the invention is realized by the following technical scheme: a three-dimensional digital earth construction method for avionic display control comprises the following steps:

reading a terrain file: reading the loaded terrain file using the GDAL library, parsing it into terrain data, and generating the original terrain data in memory;

calculating vertex data: calculating the vertex attributes, including three-dimensional world coordinates, normals and texture coordinates, and generating the vertex attribute data;

transmitting vertex data: calculating the space size and performing the data transfer;

creating shaders: generating the source code of a vertex shader and a fragment shader in the GLSL (OpenGL Shading Language) language, and creating the corresponding executable logic units in the GPU (graphics processing unit);

preparing for rendering: passing the corresponding parameters to the shaders before rendering, and passing the parsed VBO data to the vertex shader;

rendering: calling the drawing command glDrawElements to render, executing the created vertex shader and fragment shader with the modern OpenGL rendering method, and outputting the rendering result to a window to display the three-dimensional digital earth.

The calculating vertex data specifically comprises:

knowing the longitude and latitude of the start point, calculating the longitude and latitude of a vertex from its two-dimensional coordinates, the height value of the vertex being the value at its two-dimensional coordinates in the raster metadata, and then calculating the world coordinates with reference to the camera's world coordinate formula;

selecting two vertices adjacent to the current vertex, calculating the vectors from the current vertex to each of them, and taking the cross product of the two vectors to obtain the normal vector;

generating a texture from the generated color table file, calculating the texture coordinates of the vertex, and then performing texture sampling to obtain the vertex color.

Generating a texture from the generated color table file, calculating the texture coordinates of the vertex, and then performing texture sampling to obtain the vertex color specifically comprises:

generating a one-dimensional color table file with a third-party tool;

creating a one-dimensional texture from the one-dimensional color table file, the width of the texture representing the total number of colors, and writing the texture parameters in memory into video memory through the glTexImage1D function; since the texture coordinate range in OpenGL is 0 to 1, the color table is mapped to the range 0 to 1, texture coordinate 0 corresponding to the first color in the color table and 1 to the last;

inputting the height value, maximum height value and minimum height value of the current vertex and calculating the texture coordinate of the vertex, so that the vertex height is mapped to the range 0 to 1, consistent with the texture coordinates in video memory;

inputting the texture number and the texture coordinates of the vertex, and calling a sampling function in the fragment shader to obtain the vertex color from the texture.

The transmitting vertex data specifically comprises:

calculating the space size; creating a VBO object with glGenBuffers and setting the type of the VBO to GL_ARRAY_BUFFER; storing the vertex data in the VBO for use by the vertex shader; allocating a data store for the currently bound VBO with the glBufferData function and writing the vertex data from memory into it; and setting the last parameter of glBufferData to GL_STATIC_DRAW, indicating that the data store is initialized only once, which helps the GPU allocate space.

If the VBO fails to allocate its storage space, the large block of data is divided into several small blocks of equal capacity by data segmentation and reuse, the small blocks are then transferred to video memory, and rendering is finally performed.

The data segmentation and reuse step comprises:

setting the capacity of a single data block to 6 MB, and calculating the number of vertex rows it can hold from the data block capacity;

calculating the number of data blocks required from the total number of vertices, and creating the same number of VBOs;

allocating memory space according to the data block capacity, and traversing the vertex array;

reading the vertices of the accommodated number of rows, computing the vertices, and transmitting them in sequence.

The creating shaders specifically comprises:

creating a shader program with the glCreateProgram function (and the shader objects with glCreateShader), setting the source code of the vertex shader and fragment shader through the glShaderSource function, and compiling the source code with the glCompileShader function;

attaching the vertex shader and fragment shader to the shader program with the glAttachShader function, and linking the shader program through the glLinkProgram function.

The rendering preparation specifically comprises:

running the shader program through the glUseProgram function, and enabling the input attributes in the shader program;

setting the projection matrix, view matrix and model matrix variables for the vertex shader, and setting the corresponding lighting parameters and texture number for the fragment shader;

binding the VBO data into video memory, passing the parsed VBO data to the vertex shader, and having the vertex shader associate the data layout with its input attributes.

The modern OpenGL rendering method comprises the following steps:

a vertex shader step: converting the 3D coordinates passed in as an array into other 3D coordinates, and feeding them to the primitive assembly step;

a primitive assembly step: assembling all the points into the specified primitive shape, and feeding the result to the geometry shader;

a geometry shader step: taking the series of vertices that form a primitive as input, generating new vertices to construct new primitives and thereby other primitive shapes, and feeding the resulting shapes to the rasterization step;

a rasterization step: mapping the primitives to the corresponding pixels on the final screen, and generating fragments for use by the fragment shader step;

a fragment shader step: calculating the final color of a pixel from the 3D scene data it contains;

a test and blending step: testing the depth and stencil values of a fragment to determine the position of the pixel, and testing the alpha value and blending the objects.

The invention has the following advantages: the three-dimensional digital earth construction method for avionic display control adopts the modern OpenGL rendering method, which is superior to traditional OpenGL, and the efficiency advantage becomes more obvious as the data volume grows; by adopting data segmentation and reuse, the method ensures that all vertex data is written into VBOs and avoids failures when allocating large blocks of video memory.

Drawings

FIG. 1 is a schematic flow diagram of the process of the present invention;

FIG. 2 is a schematic diagram of a data transmission process;

FIG. 3 is a schematic diagram of a data slicing and multiplexing process;

FIG. 4 is a schematic diagram of the preparation flow before rendering.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the detailed description of the embodiments of the present application provided below in connection with the appended drawings is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application. The invention is further described below with reference to the accompanying drawings.

As shown in FIG. 1, the present invention relates to a three-dimensional digital earth construction method for avionics display control, the construction method comprising:

S1, reading a terrain file: reading the loaded terrain file using the GDAL library, parsing it into terrain data, and generating the original terrain data in memory;

Specifically, the GDAL library is used to read the terrain file, whose format is TIFF raster data. GDAL, the Geospatial Data Abstraction Library, is a software library for reading raster and vector geospatial data formats, released by the Open Source Geospatial Foundation under the X/MIT license. As a library, it provides a single abstract data model for applications to parse all the formats it supports.

The projection of the raster data is the WGS-84 geocentric coordinate system, a standard longitude/latitude coordinate system for the earth. The metadata in the raster file represents terrain height values in meters; the sampling precision of the data represents the extent of each data point in the longitude and latitude directions; and the start point coordinates represent the longitude and latitude of the upper-left corner of the raster. The longitude and latitude of any vertex can be calculated from the start point coordinates.
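As an illustration of this reading step, the following is a minimal C++ sketch using the GDAL API; the single-band GeoTIFF layout, the GDT_Float32 buffer type and the function name loadTerrain are assumptions made for this example, not details given by the patent.

```cpp
#include <gdal_priv.h>
#include <vector>

// Minimal sketch: open a single-band GeoTIFF DEM and read its heights and geotransform.
std::vector<float> loadTerrain(const char* path, int& cols, int& rows, double geoTransform[6]) {
    GDALAllRegister();  // register all raster drivers (once per process)
    GDALDataset* ds = static_cast<GDALDataset*>(GDALOpen(path, GA_ReadOnly));
    if (ds == nullptr) return {};

    cols = ds->GetRasterXSize();
    rows = ds->GetRasterYSize();
    // geoTransform[0], geoTransform[3]: longitude/latitude of the upper-left start point;
    // geoTransform[1], geoTransform[5]: sampling precision (cell size) in x and y.
    ds->GetGeoTransform(geoTransform);

    std::vector<float> heights(static_cast<size_t>(cols) * rows);
    GDALRasterBand* band = ds->GetRasterBand(1);  // band 1 holds the height values in meters
    band->RasterIO(GF_Read, 0, 0, cols, rows, heights.data(),
                   cols, rows, GDT_Float32, 0, 0);
    GDALClose(ds);
    return heights;
}
```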

S2, calculating vertex data: calculating the vertex attributes, including three-dimensional world coordinates, normals and texture coordinates, and generating the vertex attribute data;

Further, the raster data contains the longitude, latitude and height information of each vertex, from which the world coordinates, normal and texture coordinates of the vertex are calculated to form the vertex data. World coordinates are used to compute spatial positions in the vertex shader, vertex normals are used to compute the lighting color in the fragment shader, and the texture coordinates of a vertex are used to obtain its color. The final color of a vertex combines the vertex's own color with the lighting color.

The vertex data thus comprises three attributes: the world coordinates, the normal and the texture coordinates of the vertex.

The world coordinates of a vertex are calculated by first computing the longitude and latitude of the vertex and then converting them into world coordinates. Specifically: the two-dimensional coordinates of a vertex give its offset from the start point. Knowing the longitude and latitude of the start point, the longitude and latitude of the vertex are calculated from its two-dimensional coordinates. The height value of a vertex is the value at its two-dimensional coordinates in the raster metadata.

The input variables are: the longitude StartLongitude and latitude StartLatitude of the start point, the number of vertex rows Rows and columns Cols, the raster metadata, the sampling precision CellSize, and the two-dimensional coordinates OffsetX and OffsetY of the vertex. The calculation formulas are Longitude = StartLongitude + OffsetX × CellSize and Latitude = StartLatitude − OffsetY × CellSize, and the altitude is read from the raster metadata. The outputs are the longitude Longitude, latitude Latitude and altitude Altitude of the vertex.
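A minimal C++ sketch of these formulas, with variable names following the text; the row-major raster layout is an assumption of this example.

```cpp
#include <vector>

struct GeoVertex { double longitude, latitude, altitude; };

// Longitude = StartLongitude + OffsetX * CellSize; Latitude = StartLatitude - OffsetY * CellSize;
// Altitude is the raster metadata value at the vertex's two-dimensional coordinates.
GeoVertex computeVertex(double startLongitude, double startLatitude, double cellSize,
                        int offsetX, int offsetY,
                        const std::vector<float>& metadata, int cols) {
    GeoVertex v;
    v.longitude = startLongitude + offsetX * cellSize;  // offset east of the start point
    v.latitude  = startLatitude  - offsetY * cellSize;  // rows grow southward
    v.altitude  = metadata[static_cast<size_t>(offsetY) * cols + offsetX];
    return v;
}
```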

The normal of a vertex is calculated as follows: two vertices adjacent to the current vertex are selected, the vectors from the current vertex to each of them are calculated, and the cross product of the two vectors gives the normal vector.
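A sketch of this normal computation, assuming a minimal Vec3 type (any vector math library, such as glm, would serve equally well); which two neighbours are chosen is left open by the text, so the right and lower neighbours are used here purely for illustration.

```cpp
struct Vec3 { float x, y, z; };

// Cross product of two vectors.
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Normal of the current vertex from the vectors to two adjacent vertices.
Vec3 vertexNormal(const Vec3& current, const Vec3& right, const Vec3& below) {
    Vec3 u{ right.x - current.x, right.y - current.y, right.z - current.z };
    Vec3 w{ below.x - current.x, below.y - current.y, below.z - current.z };
    return cross(u, w);  // normalize before using it in lighting
}
```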

The texture coordinates of a vertex are calculated as follows. The vertex color is sampled from the texture according to the texture coordinates of the vertex: first the texture is generated, then the texture coordinates of the vertex are calculated, and finally the fragment shader performs texture sampling during rendering to obtain the vertex color. Specifically:

To obtain a smoother elevation color effect, a one-dimensional color table file is generated with a third-party tool. The total number of colors in the color table can be set, for example 256 or 64; the value of each color is in RGB format. Colors are set at key positions, and the colors between two key positions are obtained by linear interpolation, namely MiddleColor = StartColor × (1 − t) + EndColor × t, 0 ≤ t ≤ 1.

The vertex color is obtained by texture sampling. A one-dimensional texture is created from the one-dimensional color table file, the width of the texture representing the total number of colors, and the texture parameters in memory are written into video memory through the glTexImage1D function. Since the texture coordinate range in OpenGL is 0 to 1, the color table is mapped to the range 0 to 1: texture coordinate 0 corresponds to the first color in the color table and texture coordinate 1 to the last.

The height value CurrentHeight, maximum height value MaxHeight and minimum height value MinHeight of the current vertex are input, and the texture coordinate of the vertex is calculated by the formula texCoord = (CurrentHeight − MinHeight) × (1.0 / (MaxHeight − MinHeight)), mapping the vertex height to the range 0 to 1, consistent with the texture coordinates in video memory.

The texture number TextureId and the vertex texture coordinate texCoord are input, and the sampling function is called in the fragment shader according to the formula color = texture1D(TextureId, texCoord) to obtain the vertex color from the texture.
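The color table texture and the height-to-coordinate mapping might look as follows in C++; the GL_LINEAR filtering and the GLEW header are assumptions of this sketch.

```cpp
#include <GL/glew.h>

// Create the one-dimensional color table texture; width = total number of colors,
// so texture coordinate 0 hits the first color and 1 the last.
GLuint createColorTexture(const unsigned char* colorTable, int totalColors) {
    GLuint textureId = 0;
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_1D, textureId);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, totalColors, 0, GL_RGB, GL_UNSIGNED_BYTE, colorTable);
    return textureId;
}

// texCoord = (CurrentHeight - MinHeight) * (1.0 / (MaxHeight - MinHeight)):
// maps the vertex height into [0, 1], matching the texture coordinates in video memory.
float heightTexCoord(float currentHeight, float minHeight, float maxHeight) {
    return (currentHeight - minHeight) * (1.0f / (maxHeight - minHeight));
}
```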

S3, transmitting vertex data: calculating the space size and performing the data transfer;

As shown in FIG. 2, the space size is calculated, a VBO object is created with glGenBuffers, and the type of the VBO is set to GL_ARRAY_BUFFER. The vertex data is stored in the VBO and provided to the vertex shader. The glBufferData function allocates a data store for the currently bound VBO, and the vertex data in memory is written into it. The last parameter of glBufferData is GL_STATIC_DRAW, indicating that the data store is initialized only once, which helps the GPU allocate space.

The space size is calculated as follows: the size SizeOfVertex occupied by a single vertex equals the sum of the sizes occupied by all attributes of the vertex. With the number of vertex rows Rows and the number of vertex columns Cols, the space SizeOfAllVertex occupied by all vertices is obtained from the formulas NumberOfVertex = Rows × Cols and SizeOfAllVertex = SizeOfVertex × NumberOfVertex.
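A sketch of the size calculation and upload; the seven-float vertex layout (position, normal, one texture coordinate) is an assumption used only for illustration.

```cpp
#include <GL/glew.h>
#include <cstddef>

// SizeOfVertex = sum of all attribute sizes; NumberOfVertex = Rows * Cols;
// SizeOfAllVertex = SizeOfVertex * NumberOfVertex.
GLuint uploadVertices(const float* vertexData, std::size_t rows, std::size_t cols) {
    const std::size_t sizeOfVertex    = (3 + 3 + 1) * sizeof(float); // pos + normal + texCoord
    const std::size_t numberOfVertex  = rows * cols;
    const std::size_t sizeOfAllVertex = sizeOfVertex * numberOfVertex;

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // GL_STATIC_DRAW: the data store is initialized only once, which helps GPU allocation.
    glBufferData(GL_ARRAY_BUFFER, sizeOfAllVertex, vertexData, GL_STATIC_DRAW);
    return vbo;
}
```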

Because of the large number of terrain vertices and the current usage of video memory, allocating a large contiguous storage space for the VBO may fail. The solution is data segmentation and reuse: a large block of data is divided into several small blocks of equal capacity, the small blocks are transferred to video memory, and rendering is finally performed. A small data space is allocated in memory and recycled, realizing memory reuse. Before the data is written into video memory, several VBOs are allocated in advance, and the small blocks of data are then written into the corresponding VBOs. This ensures that all vertex data is written into VBOs and avoids failures when allocating large blocks of video memory.

As shown in FIG. 3, and as sketched in the code after this list, the steps are specifically:

setting the capacity of a single data block to 6 MB, and calculating the number of vertex rows it can hold from the data block capacity;

calculating the number of data blocks required from the total number of vertices, and creating the same number of VBOs;

allocating memory space according to the data block capacity, and traversing the vertex array;

reading the vertices of the accommodated number of rows, computing the vertices, and transmitting them in sequence.
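A sketch of these four steps in C++; fillChunk is a hypothetical callback that computes the vertices of the given row range into the reused staging buffer, and only the 6 MB block size is taken from the text.

```cpp
#include <GL/glew.h>
#include <algorithm>
#include <functional>
#include <vector>

std::vector<GLuint> uploadChunked(std::size_t rows, std::size_t cols, std::size_t sizeOfVertex,
                                  const std::function<void(float*, std::size_t, std::size_t)>& fillChunk) {
    const std::size_t chunkBytes   = 6u * 1024 * 1024;                   // single data block: 6 MB
    const std::size_t rowsPerChunk = chunkBytes / (cols * sizeOfVertex); // vertex rows per block
    const std::size_t chunkCount   = (rows + rowsPerChunk - 1) / rowsPerChunk;

    std::vector<GLuint> vbos(chunkCount);                   // one VBO per data block
    glGenBuffers(static_cast<GLsizei>(chunkCount), vbos.data());

    std::vector<float> staging(chunkBytes / sizeof(float)); // one small block, reused each pass
    for (std::size_t i = 0; i < chunkCount; ++i) {
        const std::size_t firstRow = i * rowsPerChunk;
        const std::size_t numRows  = std::min(rowsPerChunk, rows - firstRow);
        fillChunk(staging.data(), firstRow, numRows);       // compute this block's vertices
        glBindBuffer(GL_ARRAY_BUFFER, vbos[i]);
        glBufferData(GL_ARRAY_BUFFER, numRows * cols * sizeOfVertex,
                     staging.data(), GL_STATIC_DRAW);
    }
    return vbos;
}
```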

S4, creating shaders: generating the source code of a vertex shader and a fragment shader in the GLSL (OpenGL Shading Language) language, and creating the corresponding executable logic units in the GPU;

Specifically: a shader program is created with the glCreateProgram function (the shader objects themselves with glCreateShader), the source code of the vertex shader and fragment shader is set through the glShaderSource function, and the source code is compiled with the glCompileShader function;

the glAttachShader function attaches the vertex shader and fragment shader to the shader program, and the shader program is linked through the glLinkProgram function.

The execution result of each stage of shader creation can be obtained after it executes, for example by checking the compilation of the shader source code with the glGetShaderiv function and the parameter GL_COMPILE_STATUS. Obtaining detailed information about the execution result helps detect syntax or logic errors in the source code.
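The create/compile/attach/link sequence with status checks might be sketched as follows; the error-logging details are illustrative.

```cpp
#include <GL/glew.h>
#include <cstdio>

// Create, compile and link the two shaders into one program, checking each stage.
GLuint buildProgram(const char* vertexSrc, const char* fragmentSrc) {
    auto compile = [](GLenum type, const char* src) -> GLuint {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &src, nullptr);
        glCompileShader(shader);
        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);  // detect syntax errors in the source
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(shader, sizeof log, nullptr, log);
            std::fprintf(stderr, "shader compile error: %s\n", log);
        }
        return shader;
    };

    GLuint vs = compile(GL_VERTEX_SHADER, vertexSrc);
    GLuint fs = compile(GL_FRAGMENT_SHADER, fragmentSrc);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);          // attach both shaders to the program
    glAttachShader(program, fs);
    glLinkProgram(program);               // link them into one executable logic unit
    GLint linked = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    return linked == GL_TRUE ? program : 0;
}
```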

S5, rendering preparation: passing the corresponding parameters to the shaders before rendering, and passing the parsed VBO data to the vertex shader;

Further, as shown in FIG. 4, the rendering preparation specifically comprises:

running the shader program through the glUseProgram function, and enabling the input attributes in the shader program;

setting the projection matrix, view matrix and model matrix variables for the vertex shader, and setting the corresponding lighting parameters and texture number for the fragment shader;

binding the VBO data into video memory, passing the parsed VBO data to the vertex shader, and having the vertex shader associate the data layout with its input attributes.

Specifically: because the viewing angle of the scene or the position of the light may change, the corresponding parameters are passed to the shaders before rendering. At this point the vertex shader does not know the structure of the data in the VBO, so it must be told how to parse the VBO data. The drawing command glDrawElements is then called to render; OpenGL executes the vertex shader and the fragment shader in turn and outputs the rendering result to the window.
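The following sketch walks through the preparation and draw call; the uniform and attribute names ("projection", "position", "colorTable") and the seven-float layout are illustrative assumptions, and a bound element (index) buffer is assumed for glDrawElements.

```cpp
#include <GL/glew.h>

void prepareAndDraw(GLuint program, GLuint vbo, GLuint textureId,
                    const float* proj, const float* view, const float* model,
                    GLsizei indexCount) {
    glUseProgram(program);  // run the shader program

    // Matrices for the vertex shader; lighting parameters would be set the same way.
    glUniformMatrix4fv(glGetUniformLocation(program, "projection"), 1, GL_FALSE, proj);
    glUniformMatrix4fv(glGetUniformLocation(program, "view"),       1, GL_FALSE, view);
    glUniformMatrix4fv(glGetUniformLocation(program, "model"),      1, GL_FALSE, model);
    glUniform1i(glGetUniformLocation(program, "colorTable"), 0);  // texture unit 0

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_1D, textureId);

    // Bind the VBO and tell the vertex shader how to parse it (position attribute shown;
    // the normal and texture-coordinate attributes are associated the same way).
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    const GLsizei stride = 7 * sizeof(float);
    GLuint pos = static_cast<GLuint>(glGetAttribLocation(program, "position"));
    glEnableVertexAttribArray(pos);       // enable the input attribute
    glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, stride, (const void*)0);

    // Assumes an element array buffer with triangle indices is already bound.
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
}
```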

S6, rendering: calling the drawing command glDrawElements to render, executing the created vertex shader and fragment shader with the modern OpenGL rendering method, and outputting the rendering result to a window to display the three-dimensional digital earth.

Further, OpenGL uses the Phong lighting model, which consists of three components: ambient, diffuse and specular. The light color is white light. The current application scenario uses only the ambient and diffuse lighting effects, without specular lighting. The intensities of the ambient and diffuse lighting can be set in the lighting parameters.

The lighting calculation is as follows. The input variables are: the light color LightColor, the ambient intensity ambient, the diffuse intensity diffuse, the normal vector Normal, the vertex position WorldPos, the light position LightPos, and the vertex color colorObj. The ambient light is calculated as AmbientColor = LightColor × ambient; the light direction as lightDir = normalize(LightPos − WorldPos); the diffuse factor as diffuseFactor = dot(lightDir, Normal) (a vector dot product); the diffuse light as DiffuseColor = LightColor × diffuse × diffuseFactor; and the output color as FragColor = colorObj × (AmbientColor + DiffuseColor).
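These formulas translate almost directly into a GLSL fragment shader. The sketch below (embedded as a C++ string constant, GLSL 1.20 to match the texture1D call used above) adds a max(..., 0.0) clamp on the diffuse factor, a standard refinement the text does not spell out.

```cpp
// Fragment shader source for the ambient + diffuse lighting described above.
const char* kFragmentShaderSrc = R"(
    #version 120
    uniform sampler1D TextureId;  // one-dimensional color table texture
    uniform vec3  LightColor;     // white light
    uniform vec3  LightPos;       // illumination position
    uniform float ambient;        // ambient intensity
    uniform float diffuse;        // diffuse intensity
    varying vec3  Normal;         // vertex normal
    varying vec3  WorldPos;       // vertex position
    varying float texCoord;       // vertex height mapped to [0, 1]

    void main() {
        vec3  colorObj      = texture1D(TextureId, texCoord).rgb;  // vertex color
        vec3  ambientColor  = LightColor * ambient;
        vec3  lightDir      = normalize(LightPos - WorldPos);
        float diffuseFactor = max(dot(lightDir, normalize(Normal)), 0.0);
        vec3  diffuseColor  = LightColor * diffuse * diffuseFactor;
        gl_FragColor = vec4(colorObj * (ambientColor + diffuseColor), 1.0);
    }
)";
```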

Traditional OpenGL uses the immediate rendering mode, whose rendering efficiency is low. Modern OpenGL is superior to traditional OpenGL in rendering efficiency, and the larger the data volume, the more obvious the advantage. The data volume of three-dimensional terrain is large, reaching hundreds of megabytes, so the rendering requirements can only be met by adopting the modern OpenGL rendering mode.

The graphics rendering pipeline receives a set of 3D coordinates and converts them into 2D pixels displayed on the screen. The pipeline can be divided into several stages, each taking the output of the previous stage as its input. All of these stages are highly specialized (each has one specific function) and can easily be executed in parallel. Because of this parallel nature, most graphics cards today have thousands of small processing cores that run separate small programs on the GPU for each rendering stage, so as to process data in the pipeline quickly. These small programs are called shaders. The pipeline contains many sections, each handling a specific stage of the conversion from vertex data to final pixels; the sections of the pipeline are explained in general terms below.

The modern OpenGL rendering method therefore specifically comprises:

firstly, transferring 3D coordinates in an array form as the input of a graphics rendering pipeline to represent a triangle, wherein the array is called Vertex Data (Vertex Data); vertex data is a series of Vertex sets, and a Vertex (Vertex) is a data set of 3D coordinates. The vertex data is expressed by vertex attributes (VertexAttribute), and may include any data that we want to use, such as 3D coordinates, color, and texture.

The first part of the graphics rendering pipeline is the Vertex Shader (Vertex Shader), which takes a single Vertex as input. The main purpose of the vertex shader is to convert 3D coordinates to another 3D coordinate, while vertex shading allows us some basic processing of vertex attributes.

In the Primitive assembling (primative Assembly) stage, all the vertexes output by the vertex shader are taken as input (if the vertex is GL _ Points, the vertex is one vertex), and all the Points are assembled into the shape of a specified Primitive; such as a triangle.

The output of the primitive assembly stage is passed to a Geometry Shader (Geometry Shader). The geometry shader takes as input the geometry of a series of vertices in the form of primitives, which can construct new (or other) primitives by producing new vertices to generate other constellations. In the example, it generates another triangle.

The output of the geometry Shader is passed to a Rasterization Stage (rasterisation Stage) where it maps the primitives to the corresponding pixels on the final screen, generating fragments (fragments) for use by a Fragment Shader (Fragment Shader). Clipping (Clipping) is performed before the fragment shader runs. Clipping discards all pixels beyond the view, which improves execution efficiency.

One segment in OpenGL is all data required by OpenGL to render a pixel, such as vertex coordinates, color, and the like. The main purpose of the fragment shader is to compute the final color of a pixel, which is also where the OpenGL advanced effects are produced. Typically, fragment shaders contain data of the 3D scene (such as lighting, shadows, textures, etc.) that can be used to compute the color of the final pixel.

After all corresponding color values are confirmed, the final object will be passed to the last phase, called the Alpha test and blend (Blending) phase. This stage detects the depth value of the fragment and the template Stencil value, and determines whether the pixel is behind or in front of other objects, and whether the pixel should be discarded. This stage will also check the alpha value (which defines the transparency of an object) and blend the objects (blend). Therefore, even if the color of one pixel output is calculated in the fragment shader, the final pixel color may be completely different when rendering multiple triangles.

The vertex shader and fragment shader in the OpenGL rendering module must be defined by the user.

Before graphics are drawn, the vertex data must be passed to OpenGL. This data is managed by Vertex Buffer Objects (VBOs), which store large numbers of vertices in the GPU's video memory. The advantage of these buffer objects is that a large amount of data can be sent to video memory at one time rather than once per vertex; sending data from memory to video memory is relatively slow, and once the data has been sent, the vertex shader can access the vertex data locally at very high speed. Allocating a large number of small VBOs (each with a capacity of only kilobytes) may cause graphics driver problems: some drivers can only allocate a certain number of VBOs from video memory regardless of their size, so smaller objects should be placed in a larger VBO.

The foregoing is illustrative of the preferred embodiments of this invention, and it is to be understood that the invention is not limited to the precise form disclosed herein; various other combinations, modifications and environments may be resorted to within the scope of the inventive concept described herein, whether guided by the above teachings or by the skill or knowledge of the relevant art. Modifications and variations effected by those skilled in the art without departing from the spirit and scope of the invention shall fall within the scope of protection of the appended claims.
