Rendering method and device of virtual model, storage medium and electronic equipment

Document No.: 248434 | Publication date: 2021-11-16

Note: This technique, "Rendering method and device of virtual model, storage medium and electronic equipment" (虚拟模型的渲染方法、装置、存储介质和电子设备), was created by Wang Kai and Zhao Haifeng on 2021-07-21. Abstract: The invention discloses a rendering method and device of a virtual model, a storage medium, and electronic equipment. The method includes: acquiring texture data corresponding to a model to be rendered; rasterizing the texture data to obtain volume rendering data corresponding to the model to be rendered, wherein the volume rendering data is used for performing coloring rendering on the model to be rendered; performing ray tracing calculation on the volume rendering data to obtain light and shadow data corresponding to the model to be rendered, wherein the light and shadow data are used for rendering the light and shadow of the model to be rendered; and rendering the model to be rendered based on the volume rendering data and the shadow data to obtain the target model. The invention solves the technical problem in the prior art that correct light and shadow effects and spatial relationships cannot be obtained when performing light and shadow rendering on a virtual model.

1. A method for rendering a virtual model, comprising:

acquiring texture data corresponding to a model to be rendered;

rasterizing the texture data to obtain volume rendering data corresponding to the model to be rendered, wherein the volume rendering data is used for performing coloring rendering on the model to be rendered;

performing ray tracing calculation on the volume rendering data to obtain light and shadow data corresponding to the model to be rendered, wherein the light and shadow data are used for rendering the light and shadow of the model to be rendered;

rendering the model to be rendered based on the volume rendering data and the shadow data to obtain a target model.

2. The method of claim 1, wherein the volume rendering data comprises at least one of: density data and temperature data corresponding to the model to be rendered; and the shadow data comprises at least one of: density data and temperature data corresponding to the shadow of the model to be rendered.

3. The method of claim 1, wherein obtaining texture data corresponding to a model to be rendered comprises:

reading animation data to be played in a game scene, wherein the animation data consists of a plurality of frames of texture images;

the texture data is extracted from each frame of texture image included in the animation data.

4. The method of claim 3, wherein after obtaining texture data corresponding to the model to be rendered, the method further comprises:

storing the texture data corresponding to each frame of texture image according to the display sequence of each frame of texture image to obtain preset files, wherein each preset file stores the texture data corresponding to the texture image of the current frame.

5. The method according to claim 4, wherein after storing the texture data corresponding to each frame of texture image according to the display order of each frame of texture image to obtain a preset file, the method further comprises:

reading the texture data from a preset file corresponding to the texture image of the current frame;

converting the texture data into a binary file;

extracting density data and temperature data corresponding to the texture data from the binary file;

and storing the density data into a first color channel corresponding to the texture data, and storing the temperature data into a second color channel corresponding to the texture data to obtain three-dimensional texture data corresponding to the texture data.

6. The method according to claim 1, wherein before rasterizing the texture data to obtain the volume rendering data corresponding to the model to be rendered, the method further comprises:

acquiring a rendering mark corresponding to the current rendering stage;

and determining a rendering algorithm corresponding to the current rendering stage according to the rendering mark.

7. The method of claim 6, wherein determining the rendering algorithm corresponding to the current rendering stage according to the rendering flag comprises:

when the rendering mark is determined to be a first mark, determining that a rendering algorithm corresponding to the current rendering stage is a rasterization algorithm, wherein the rasterization algorithm is used for performing rasterization processing on the texture data, and the first mark represents that the current rendering stage performs rendering on the model to be rendered;

when the rendering mark is determined to be a second mark, determining that the rendering algorithm corresponding to the current rendering stage is a ray tracing algorithm, wherein the ray tracing algorithm is used for carrying out ray tracing calculation on the model to be rendered, and the second mark represents that the current rendering stage carries out rendering on the shadow of the model to be rendered.

8. The method according to claim 1, wherein rasterizing the texture data to obtain volume rendering data corresponding to the model to be rendered comprises:

acquiring a sight line path corresponding to a virtual camera in a game scene;

sampling the line-of-sight path to obtain a plurality of viewpoints corresponding to the line-of-sight path;

calculating a distance between a light source and each viewpoint in the game scene;

determining illumination data corresponding to each viewpoint according to the distance;

and performing coloring rendering on the model to be rendered according to the illumination data to obtain the volume rendering data.

9. The method of claim 8, wherein rendering the model to be rendered in a rendering manner according to the lighting data to obtain the volume rendering data comprises:

performing accumulation operation on the illumination data in a sight line direction corresponding to the sight line path to obtain a target density corresponding to the model to be rendered;

accumulating the illumination data in an illumination direction corresponding to an illumination path of the light source to obtain a target temperature corresponding to the model to be rendered;

and rendering the model to be rendered according to the target density and the target temperature to obtain the volume rendering data.

10. The method of claim 9, wherein performing ray tracing calculation on the volume rendering data to obtain shadow data corresponding to the model to be rendered comprises:

determining a light and shadow area corresponding to the model to be rendered according to the illumination direction;

determining density data corresponding to the shadow area;

and rendering the shadow area according to the density data corresponding to the shadow area to obtain shadow data corresponding to the model to be rendered.

11. The method of claim 10, wherein determining the shadow region corresponding to the model to be rendered according to the illumination direction comprises:

determining a projection pixel in the model to be rendered and a position coordinate corresponding to the projection pixel according to the illumination path;

and determining the initial position of the shadow area according to the position coordinates.

12. The method of claim 11, wherein determining the projection pixel in the model to be rendered according to the illumination path comprises:

obtaining a distance field corresponding to the light source;

determining a target distance from the distance field that is closest to the model to be rendered;

sampling the illumination path to obtain a plurality of illumination points;

determining a target illumination point from the plurality of illumination points according to the illumination direction corresponding to the light source and the target distance;

and when the distance between the target illumination point and the model to be rendered is smaller than a preset value, determining the position of the target illumination point on the model to be rendered as the projection pixel.

13. The method of claim 10, wherein determining the density data corresponding to the shadow region comprises:

accumulating the pixel values corresponding to the shadow area in the illumination direction to obtain density data corresponding to the shadow area, wherein the density data represents transparency information of the shadow area.

14. The method according to claim 1, wherein after rasterizing the texture data to obtain the volume rendering data corresponding to the model to be rendered, the method further comprises:

adjusting the volume rendering data based on a preset playing component to obtain adjusted volume rendering data;

rendering the model to be rendered based on the adjusted volume rendering data, and displaying color information corresponding to the rendered model to be rendered.

15. The method according to claim 14, wherein a first parameter, a second parameter and a third parameter are set on the playing component, wherein the first parameter is used for specifying the volume rendering data, the second parameter is used for specifying material data for rendering the model to be rendered, the third parameter is used for specifying other attribute data, and the other attribute data is used for playing the volume rendering data and the light and shadow data.

16. An apparatus for rendering a virtual model, comprising:

the acquisition module is used for acquiring texture data corresponding to the model to be rendered;

the processing module is used for rasterizing the texture data to obtain volume rendering data corresponding to the model to be rendered, wherein the volume rendering data is used for rendering the model to be rendered;

the calculation module is used for performing ray tracing calculation on the volume rendering data to obtain light and shadow data corresponding to the model to be rendered, wherein the light and shadow data are used for rendering the light and shadow of the model to be rendered;

and the rendering module is used for rendering the model to be rendered based on the volume rendering data and the shadow data to obtain a target model.

17. A storage medium having a computer program stored thereon, wherein the computer program, when executed, performs the method of rendering a virtual model as claimed in any one of claims 1 to 15.

18. An electronic device, comprising: one or more processors; and a storage device for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of rendering a virtual model as claimed in any one of claims 1 to 15.

Technical Field

The invention relates to the field of computer graphic rendering, in particular to a rendering method and device of a virtual model, a storage medium and electronic equipment.

Background

In computer graphics, virtual models generally need to be rendered, for example, virtual characters, virtual terrain, and the like in games. In the prior art, special effects such as smoke and flame are usually rendered in a real-time rendering engine (e.g., Unreal, Unity) by using a particle system. The particle system typically uses the billboard technique, which orients a patch carrying a smoke sequence texture toward the camera so as to render smoke, flame, and the like. Fig. 1 shows a smoke special effect rendered with the billboard technique, and fig. 2 is a schematic diagram of the smoke sequence texture, in which each patch of the smoke sequence texture faces the camera.

The billboard technique is efficient and fast, but because it obtains the shadow of the virtual model through pre-rendering, the correct light and shadow effect cannot be obtained. In addition, since the technique renders flat patches, it yields no correct spatial relationship, so noticeable flaws appear where smoke, flames, and the like intersect the virtual model. For example, in the schematic diagram of a virtual model intersecting a flame shown in fig. 3, there is a flaw where the flame meets the model.

In addition, in the prior art, the volume effect simulated by special effect software (for example, Houdini) can be converted into a polygonal mesh model, which is then imported into a rendering engine for rendering. For example, fig. 4 shows a cloud model rendered with the Houdini special effect software, and fig. 5 shows the polygonal mesh model corresponding to the cloud model.

Although simulating the volume effect in special effect software yields a correct volumetric spatial relationship, this approach still cannot correctly handle the rendering flaws that arise where the virtual model intersects special effects such as flames and clouds.

In view of the above problems, no effective solution has been proposed.

Disclosure of Invention

The embodiment of the invention provides a rendering method and device of a virtual model, a storage medium and electronic equipment, and at least solves the technical problem that correct shadow effect and spatial relation cannot be obtained when shadow rendering is carried out on the virtual model in the prior art.

According to an aspect of an embodiment of the present invention, there is provided a rendering method of a virtual model, including: acquiring texture data corresponding to a model to be rendered; performing rasterization processing on the texture data to obtain volume rendering data corresponding to the model to be rendered, wherein the volume rendering data is used for performing coloring rendering on the model to be rendered; performing ray tracing calculation on the volume rendering data to obtain light and shadow data corresponding to the model to be rendered, wherein the light and shadow data are used for rendering the light and shadow of the model to be rendered; and rendering the model to be rendered based on the volume rendering data and the shadow data to obtain the target model.

Further, the volume rendering data comprises at least one of: density data and temperature data corresponding to the model to be rendered; the shadow data includes at least one of: density data and temperature data corresponding to the shadow of the model to be rendered.

Further, the rendering method of the virtual model further comprises: reading animation data to be played in a game scene, wherein the animation data consists of a plurality of frames of texture images; texture data is extracted from each frame of texture image included in the animation data.

Further, the rendering method of the virtual model further comprises: after texture data corresponding to a model to be rendered is obtained, storing the texture data corresponding to each frame of texture image according to the display sequence of each frame of texture image to obtain preset files, wherein the texture data corresponding to the texture image of the current frame is stored in each preset file.

Further, the rendering method of the virtual model further comprises: after texture data corresponding to each frame of texture image is stored according to the display sequence of each frame of texture image to obtain a preset file, reading the texture data from the preset file corresponding to the texture image of the current frame; converting the texture data into a binary file; extracting density data and temperature data corresponding to the texture data from the binary file; and storing the density data into a first color channel corresponding to the texture data, and storing the temperature data into a second color channel corresponding to the texture data to obtain three-dimensional texture data corresponding to the texture data.

Further, the rendering method of the virtual model further comprises: before rasterization processing is carried out on the texture data to obtain volume rendering data corresponding to a model to be rendered, a rendering mark corresponding to a current rendering stage is obtained; and determining a rendering algorithm corresponding to the current rendering stage according to the rendering mark.

Further, the rendering method of the virtual model further comprises: when the rendering mark is determined to be the first mark, determining that the rendering algorithm corresponding to the current rendering stage is a rasterization algorithm, wherein the rasterization algorithm is used for rasterizing the texture data, and the first mark represents that the current rendering stage performs coloring rendering on the model to be rendered; and when the rendering mark is determined to be the second mark, determining that the rendering algorithm corresponding to the current rendering stage is a ray tracing algorithm, wherein the ray tracing algorithm is used for performing ray tracing calculation on the model to be rendered, and the second mark represents that the current rendering stage renders the shadow of the model to be rendered.

Further, the rendering method of the virtual model further comprises: acquiring a sight line path corresponding to a virtual camera in a game scene; sampling the line-of-sight path to obtain a plurality of viewpoints corresponding to the line-of-sight path; calculating the distance between a light source and each viewpoint in a game scene; determining illumination data corresponding to each viewpoint according to the distance; and performing coloring rendering on the model to be rendered according to the illumination data to obtain volume rendering data.

Further, the rendering method of the virtual model further comprises: accumulating the illumination data in the sight line direction corresponding to the sight line path to obtain the target density corresponding to the model to be rendered; accumulating the illumination data in the illumination direction corresponding to the illumination path of the light source to obtain a target temperature corresponding to the model to be rendered; and rendering the model to be rendered according to the target density and the target temperature to obtain volume rendering data.

Further, the rendering method of the virtual model further comprises: determining a light and shadow area corresponding to the model to be rendered according to the illumination direction; determining density data corresponding to the shadow area; and rendering the shadow area according to the density data corresponding to the shadow area to obtain shadow data corresponding to the model to be rendered.

Further, the rendering method of the virtual model further comprises: determining a projection pixel in the model to be rendered and a position coordinate corresponding to the projection pixel according to the illumination path; and determining the initial position of the shadow area according to the position coordinates.

Further, the rendering method of the virtual model further comprises: obtaining a distance field corresponding to a light source; determining a target distance closest to the model to be rendered from the distance field; sampling an illumination path to obtain a plurality of illumination points; determining a target illumination point from the plurality of illumination points according to the illumination direction corresponding to the light source and the target distance; and when the distance between the target illumination point and the model to be rendered is smaller than a preset value, determining the position of the target illumination point on the model to be rendered as a projection pixel.

Further, the rendering method of the virtual model further comprises: and accumulating the pixel values corresponding to the shadow area in the illumination direction to obtain density data corresponding to the shadow area, wherein the density data represents transparency information of the shadow area.

Further, the rendering method of the virtual model further comprises: after the texture data is subjected to rasterization processing to obtain volume rendering data corresponding to a model to be rendered, the volume rendering data is adjusted based on a preset playing component to obtain adjusted volume rendering data; rendering the model to be rendered based on the adjusted volume rendering data, and displaying the color information corresponding to the rendered model to be rendered.

Furthermore, a first parameter, a second parameter and a third parameter are set on the playing component, wherein the first parameter is used for specifying the volume rendering data, the second parameter is used for specifying material data for rendering the model to be rendered, the third parameter is used for specifying other attribute data, and the other attribute data is used for playing the volume rendering data and the shadow data.

According to another aspect of the embodiments of the present invention, there is also provided a rendering apparatus of a virtual model, including: an acquisition module, configured to acquire texture data corresponding to a model to be rendered; a processing module, configured to rasterize the texture data to obtain volume rendering data corresponding to the model to be rendered, wherein the volume rendering data is used for performing coloring rendering on the model to be rendered; a calculation module, configured to perform ray tracing calculation on the volume rendering data to obtain light and shadow data corresponding to the model to be rendered, wherein the light and shadow data are used for rendering the light and shadow of the model to be rendered; and a rendering module, configured to render the model to be rendered based on the volume rendering data and the shadow data to obtain the target model.

According to another aspect of the embodiments of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is configured to execute the above-mentioned rendering method of the virtual model when running.

According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including: one or more processors; and a storage device for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the above-described rendering method of a virtual model.

In the embodiment of the invention, a method combining a rasterization technique and a ray tracing technique is adopted. After the texture data corresponding to the model to be rendered is acquired, the texture data is rasterized to obtain the volume rendering data corresponding to the model to be rendered, where the volume rendering data is used for performing coloring rendering on the model to be rendered. Ray tracing calculation is then performed on the volume rendering data to obtain the shadow data corresponding to the model to be rendered, where the shadow data is used for rendering the shadow of the model to be rendered. Finally, the model to be rendered is rendered based on the volume rendering data and the shadow data to obtain the target model.

In the process, the texture data is subjected to rasterization processing, so that accurate volume rendering can be performed on the model to be rendered, and an accurate volume rendering effect can be obtained. In addition, in the application, a ray tracing technology is also used for determining the light and shadow effect corresponding to the model to be rendered, so that a target rendering model obtained by rendering the model to be rendered has an accurate spatial relationship and a correct light and shadow effect.

Therefore, the scheme provided by the application achieves the purpose of performing light and shadow rendering on the model to be rendered such that the rendered model has the correct light and shadow effect and spatial relationship, thereby solving the technical problem in the prior art that correct light and shadow effects and spatial relationships cannot be obtained when performing light and shadow rendering on a virtual model.

Drawings

The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:

FIG. 1 is a schematic illustration of a smoke effect according to the prior art;

FIG. 2 is a schematic illustration of a smoke sequence texture according to the prior art;

FIG. 3 is a schematic illustration of a virtual model intersecting a flame according to the prior art;

FIG. 4 is a schematic diagram of a cloud model according to the prior art;

FIG. 5 is a schematic diagram of a polygonal mesh model according to the prior art;

FIG. 6 is a flow chart of a method for rendering a virtual model according to an embodiment of the invention;

FIG. 7 is a diagram illustrating results of an alternative smoke rendering according to an embodiment of the present invention;

FIG. 8 is a diagram illustrating results of an alternative smoke rendering according to an embodiment of the present invention;

FIG. 9 is a schematic diagram of the results of an alternative flame rendering according to embodiments of the invention;

FIG. 10 is a schematic diagram of the results of an alternative flame rendering according to embodiments of the invention;

FIG. 11 is a schematic illustration of an alternative object file according to an embodiment of the invention;

FIG. 12 is a schematic diagram of an alternative special effects data asset object according to an embodiment of the invention;

FIG. 13 is an expanded view of an alternative cube texture according to embodiments of the invention;

FIG. 14 is a flow diagram of an alternative rendering of a model to be rendered in accordance with an embodiment of the present invention;

FIG. 15 is a schematic diagram of an alternative rasterization algorithm in accordance with embodiments of the present invention;

FIG. 16 is a schematic illustration of an alternative determination of shadow data according to an embodiment of the invention;

FIG. 17 is a schematic diagram of an alternative projection effect according to an embodiment of the invention;

FIG. 18 is a schematic diagram of an alternative distance field algorithm according to an embodiment of the invention;

FIG. 19 is a diagram illustrating projection effects of an alternative model to be rendered according to an embodiment of the present invention;

FIG. 20 is a diagram of a rendering apparatus for a virtual model according to an embodiment of the present invention.

Detailed Description

In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.

In accordance with one embodiment of the present invention, there is provided an embodiment of a method for rendering a virtual model. It should be noted that the steps illustrated in the flowchart of the figure may be executed in a computer system as a set of computer-executable instructions, and, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from the one described herein.

In addition, it should be further noted that a rendering system for rendering the virtual model may be the execution body of the method provided in this embodiment. The rendering system may be a terminal device (e.g., a computer, a smartphone, a tablet, or the like) or a server, and the server may be a physical server or a cloud server. For example, the method provided in this embodiment may run on a cloud server; after the cloud server completes rendering of the virtual model, the rendered target virtual model is pushed to the terminal device for display.

Fig. 6 is a flowchart of a rendering method of a virtual model according to an embodiment of the present invention, as shown in fig. 6, the method includes the following steps:

step S602, texture data corresponding to the model to be rendered is obtained.

In step S602, the texture data corresponding to the model to be rendered may be volume data, where the volume data is three-dimensional spatial data and may be stored in a three-dimensional texture to simulate special effects such as smoke and flame. The model to be rendered may be, but is not limited to, a virtual model in a game scene, such as a tree, a stone, a virtual building (e.g., a pillbox or a building), an airplane, a car, smoke, a flame, or a cloud. The texture data can be stored in an image; that is, the rendering system can acquire the texture data from a texture image. The texture data may also be stored in a preset file; that is, the rendering system may acquire the texture data by reading the data stored in the preset file.

It should be noted that, in practical applications, the texture data used for rendering different models may be different; for example, the texture data used for rendering a stone is different from the texture data used for rendering smoke.

In an optional embodiment, the rendering system first determines a model type corresponding to a model to be rendered, and then reads texture data corresponding to the model type from a first storage area, where texture data of different model types are stored in the first storage area. If texture data corresponding to the model type does not exist in the first storage area, the rendering system crawls the Internet through a crawler to acquire the texture data corresponding to the model type.

In another alternative embodiment, the rendering system may further respond to an operation instruction of the user, for example, respond to a reading instruction for reading data input by the user, parse the reading instruction to obtain a second storage area for storing texture data, and then read the texture data from the second storage area.

Step S604, performing rasterization processing on the texture data to obtain volume rendering data corresponding to the model to be rendered.

In step S604, rasterizing the texture data is a process of converting the texture data into fragments, where each fragment corresponds to one pixel in the frame buffer. Optionally, the rendering system may use a rasterized Ray Marching algorithm to rasterize the texture data.

In addition, in step S604, the volume rendering data is used for rendering the model to be rendered in a rendering manner, and the volume rendering data includes at least one of the following data: density data and temperature data corresponding to the model to be rendered.

It should be noted that, rendering the model to be rendered is performed through the volume rendering data obtained by rasterizing the texture data, so that the accuracy and correctness of the spatial relationship of volume rendering can be ensured.

Step S606, performing ray tracing calculation on the volume rendering data to obtain light and shadow data corresponding to the model to be rendered.

In step S606, ray tracing is a general technique from geometric optics that models the path traveled by a ray by tracing its interactions with optical surfaces. Optionally, the shadow data is used to render the shadow of the model to be rendered, and the shadow data includes at least one of the following: density data and temperature data corresponding to the shadow of the model to be rendered.

It should be noted that, in step S606, the ray tracing algorithm is used to calculate the shadow data corresponding to the volume rendering data, so that when the shadow data is used to perform the shadow rendering on the model to be rendered, a correct shadow effect can be obtained.

Step S608, rendering the model to be rendered based on the volume rendering data and the shadow data to obtain the target model.

In step S608, the rendering system renders the model to be rendered by using the volume rendering data and the light and shadow data, and can thereby obtain the light and shadow effect corresponding to the model to be rendered. For example, as can be seen from the smoke rendering results shown in fig. 7 and fig. 8, the scheme of the present application obtains the light, shadow, and projection of the smoke, and the interpenetration between virtual models (for example, between the virtual sphere and the smoke in fig. 8) is also represented correctly.

In addition, in the present application, not only the rendering of the static volume effect but also the rendering of the dynamically changing volume effect can be realized, for example, in the result diagram of the flame rendering shown in fig. 9 and 10, the temperature data of the flame model is rendered, and the rendering result can accurately represent the color of the flame.

Based on the schemes defined in the foregoing steps S602 to S608, it can be seen that the embodiment of the present invention combines a rasterization technique with a ray tracing technique: after the texture data corresponding to the model to be rendered is obtained, the texture data is rasterized to obtain the volume rendering data corresponding to the model to be rendered, ray tracing calculation is then performed on the volume rendering data to obtain the shadow data corresponding to the model to be rendered, and finally the model to be rendered is rendered based on the volume rendering data and the shadow data to obtain the target model.

It is easy to note that, in the above process, the texture data is rasterized, so that the model to be rendered can be accurately rendered in a volume, and an accurate volume rendering effect can be obtained. In addition, in the application, a ray tracing technology is also used for determining the light and shadow effect corresponding to the model to be rendered, so that a target rendering model obtained by rendering the model to be rendered has an accurate spatial relationship and a correct light and shadow effect.

Therefore, the scheme provided by the application achieves the purpose of performing light and shadow rendering on the model to be rendered such that the rendered model has the correct light and shadow effect and spatial relationship, thereby solving the technical problem in the prior art that correct light and shadow effects and spatial relationships cannot be obtained when performing light and shadow rendering on a virtual model.

In an optional embodiment, in the process of obtaining texture data corresponding to the model to be rendered, the rendering system first reads animation data to be played in a game scene, and extracts the texture data from each frame of texture image included in the animation data. After texture data corresponding to a model to be rendered is obtained, storing the texture data corresponding to each frame of texture image according to the display sequence of each frame of texture image to obtain preset files, wherein the texture data corresponding to the texture image of the current frame is stored in each preset file. Wherein the animation data is composed of a plurality of frames of texture images.

Optionally, the rendering system may customize a dedicated plug-in for the special effect software (e.g., Houdini) to read the texture data corresponding to each frame of texture image and save the texture data into an object file. The object file may include two kinds of files, namely a description file and data files (i.e., the preset files mentioned above). The description file describes, in a text format (e.g., XML), the sequence corresponding to the animation data to be played, and stores at least the total number of frames of the animation data to be played, the file name of the start frame, the maximum density, and the maximum temperature. The data files store the texture data in a binary format; each stores the resolution, the spatial transformation matrix, and the density data and temperature data corresponding to the model to be rendered. The data files form a sequence of files, each of which stores the texture data corresponding to one frame. For example, in the schematic diagram of the object file shown in fig. 11, the volumedesc.fxd file is the description file, and volumedata001.vlb, volumedata002.vlb, and volumedata003.vlb are the data files.

It should be noted that, the user may modify the description file through the rendering system, for example, the user modifies the start frame and the end frame corresponding to the animation data to be played through the rendering system. In addition, setting the maximum density and the maximum temperature in the description file enables the density value and the temperature value corresponding to the model to be rendered to be set within a preset range (e.g., 0 to 1) so as to facilitate later processing.

Further, after texture data corresponding to each frame of texture image is stored according to the display sequence of each frame of texture image to obtain a preset file, the rendering system reads the texture data from the preset file corresponding to the texture image of the current frame and converts the texture data into a binary file, then density data and temperature data corresponding to the texture data are extracted from the binary file, finally, the density data are stored into a first color channel corresponding to the texture data, and the temperature data are stored into a second color channel corresponding to the texture data to obtain three-dimensional texture data corresponding to the texture data.

Optionally, the format corresponding to the binary file is shown in table 1:

TABLE 1

Data type                      Byte size
Resolution                     3 unsigned integer values, 12 bytes
Spatial transformation matrix  16 floating-point values, 64 bytes
Density data                   resolution size × 4 bytes
Temperature data               resolution size × 4 bytes

It should be noted that the texture data occupies a relatively large space, and when the texture data is directly processed, a large amount of system memory may be occupied. In this embodiment, the texture data is converted into the binary file, and because the data format of the binary file is more compact and the file is smaller, additional data, such as a space transformation matrix, can be stored, so that the system memory occupied by processing the texture data is reduced, the system overhead is reduced, and the flexibility of controlling the game engine can be improved when the texture data is imported into the game engine.

In addition, after the texture data is exported from the special effect software as a binary file, a game engine (e.g., Unreal) may import the binary file and generate an Unreal resource file from the texture data contained in it. Specifically, the game engine creates a special effect data asset object for the sequence corresponding to the entire animation data; this object can store the sequence description corresponding to the entire animation data and organize the texture data corresponding to each frame of texture image into an expanded cube texture. Fig. 12 shows the contents of the special effect data asset object: it includes volume description data and volume object data (i.e., the texture data mentioned above), the volume object data includes at least a size, a spatial transformation matrix, and a volume texture array, and the volume texture array contains the volume data corresponding to each frame of image (e.g., single-frame volume data 01, single-frame volume data 02, and single-frame volume data 03 in fig. 12). Fig. 13 is an expanded view of the cube texture described above, in which each black frame represents the texture data corresponding to one frame of the texture image.

In addition, the first color channel may be an R channel, and the second color channel may be a G channel; that is, the density data and the temperature data are stored in the R channel and the G channel, respectively. In order to facilitate data import, the present application also customizes, in the rendering system, a factory class "UFXDataAssetFactory" for the generated objects, which converts the texture data exported from the special effect software into a "UFXDataAsset" object, a data format that the game engine can directly recognize.

In an optional embodiment, before performing rasterization processing on the texture data to obtain volume rendering data corresponding to the model to be rendered, the rendering system further obtains a rendering marker corresponding to the current rendering stage, and determines a rendering algorithm corresponding to the current rendering stage according to the rendering marker. When the rendering mark is determined to be the first mark, determining that a rendering algorithm corresponding to the current rendering stage is a rasterization algorithm; when the rendering mark is determined to be the second mark, determining that the rendering algorithm corresponding to the current rendering stage is a ray tracing algorithm, wherein the rasterization algorithm is used for rasterizing texture data, the first mark represents that the current rendering stage renders the model to be rendered in a coloring mode, the ray tracing algorithm is used for carrying out ray tracing calculation on the model to be rendered, and the second mark represents that the current rendering stage renders the shadow of the model to be rendered.

Optionally, the rendering system sets a mark (i.e., the rendering mark) at different rendering stages of the game engine, where the main rendering and the shadow rendering respectively correspond to different rendering marks, the rendering mark of the main rendering is a first mark, and the rendering mark of the shadow rendering is a second mark. For example, fig. 14 shows a flowchart for rendering the model to be rendered, and as can be seen from fig. 14, the rendering system determines the rendering algorithm used in the current rendering stage according to different rendering markers.

It should be noted that the main rendering is mainly used for rendering the model to be rendered in a coloring manner, for example, rendering the model to be rendered in colors and shades; and the shadow rendering is used for rendering the projection of the model to be rendered. That is, in the present application, the color rendering and the projection rendering are performed separately. In addition, different rendering algorithms are used for different rendering stages, for example, in the embodiment, a rasterization algorithm is used to perform rendering on the model to be rendered, and a ray tracing algorithm is used to perform rendering on the projection of the model to be rendered.

In an optional embodiment, after the texture data is obtained, the rendering system performs rasterization processing on the texture data to obtain volume rendering data corresponding to the model to be rendered. Specifically, the rendering system firstly obtains a line-of-sight path corresponding to a virtual camera in a game scene, samples the line-of-sight path to obtain a plurality of viewpoints corresponding to the line-of-sight path, then calculates the distance between a light source in the game scene and each viewpoint, determines illumination data corresponding to each viewpoint according to the distance, and finally performs rendering on the model to be rendered according to the illumination data to obtain volume rendering data.

In the process of performing coloring rendering on the model to be rendered according to the illumination data to obtain volume rendering data, the rendering system performs accumulation operation on the illumination data in the sight line direction corresponding to the sight line path to obtain target density corresponding to the model to be rendered, performs accumulation operation on the illumination data in the illumination direction corresponding to the illumination path of the light source to obtain target temperature corresponding to the model to be rendered, and finally performs rendering on the model to be rendered according to the target density and the target temperature to obtain the volume rendering data.

Optionally, the rendering system first samples a line-of-sight path corresponding to the virtual camera to obtain a plurality of viewpoints, for example, in the schematic diagram of the rasterization algorithm shown in fig. 15, a dotted line represents the line-of-sight path, and each point (e.g., a black point and a white point in fig. 15) on the dotted line represents a viewpoint. And then, calculating illumination data corresponding to each viewpoint along the line-of-sight path, and accumulating the illumination data of each viewpoint on the line-of-sight path to obtain volume rendering data. Wherein the illumination data corresponding to each viewpoint is determined according to the distance between the light source and the viewpoint.

It should be noted that, in the process of performing volume rendering on the model to be rendered through the volume rendering data, the volume rendering data is accumulated step by step along the ray direction, and the texture data is sampled at each position the ray reaches within the model to be rendered, so as to obtain the density data and temperature data at the current position on the model to be rendered.

Optionally, the rendering system may accumulate the illumination density by the following formula:

$$\mathrm{linear}(x', x) = \int_{x'}^{x} \mathrm{opacity}(s)\, ds$$

In the above equation, linear(x', x) represents the target density obtained by accumulating the illumination density, opacity(s) represents the illumination density corresponding to the viewpoint s, and x and x' represent the upper and lower limit values, respectively, of the viewpoint range on the line-of-sight path.

It should be noted that accumulating the illumination density is essentially integrating the opacity along the current line-of-sight path to obtain the linear density.

In addition, it should be noted that the process of performing the accumulation operation on the illumination temperature is similar to the process of performing the accumulation operation on the illumination density, and is not described herein again.

In an optional embodiment, after performing rasterization processing on the texture data to obtain volume rendering data corresponding to the model to be rendered, the rendering system further adjusts the volume rendering data based on a preset playing component to obtain adjusted volume rendering data, renders the model to be rendered based on the adjusted volume rendering data, and displays the color information corresponding to the rendered model. The playing component is provided with a first parameter, a second parameter and a third parameter, wherein the first parameter is used for specifying the volume rendering data, the second parameter is used for specifying material data for rendering the model to be rendered, the third parameter is used for specifying other attribute data, and the other attribute data is used for playing the volume rendering data and the shadow data.

In an optional embodiment, in the process of performing ray tracing calculation on the volume rendering data to obtain light and shadow data corresponding to the model to be rendered, the rendering system determines a light and shadow area corresponding to the model to be rendered according to the illumination direction, determines density data corresponding to the light and shadow area, and then renders the light and shadow area according to the density data corresponding to the light and shadow area to obtain the light and shadow data corresponding to the model to be rendered.

It should be noted that, in the process of obtaining the shadow data, the rendering system accumulates the pixel values corresponding to the shadow area in the illumination direction to obtain density data corresponding to the shadow area, where the density data represents transparency information of the shadow area. In order to render a plurality of intersecting models to be rendered (for example, the smoke and the sphere in fig. 7) and obtain an accurate projection effect, in this embodiment, the shadow area corresponding to the model to be rendered is determined according to the position coordinates of the projection pixels.

Specifically, the rendering system determines a projection pixel in the model to be rendered and the position coordinates corresponding to the projection pixel according to the illumination path, and then determines the starting position of the shadow area according to the position coordinates. In the process of determining the projection pixel according to the illumination path, the rendering system obtains a distance field corresponding to the light source, determines from the distance field the target distance closest to the model to be rendered, and samples the illumination path to obtain a plurality of illumination points. It then determines a target illumination point from the plurality of illumination points according to the illumination direction of the light source and the target distance and, when the distance between the target illumination point and the model to be rendered is smaller than a preset value, determines the position of the target illumination point on the model to be rendered as the projection pixel, as sketched below.

Optionally, in the determination diagram of the shadow data shown in fig. 16, the quadrilateral frame indicates the bounding region of the rendering effect after the model to be rendered is rendered; for example, it may indicate the region occupied by the smoke in fig. 7. The black dot represents the position of the pixel to be shaded, i.e., the starting position of the shadow region, and the white dots represent the start and end points where the light ray intersects the smoke; that is, the opacity of the smoke is the accumulation of the illumination data between these start and end points, as described above. Fig. 17 shows the projection effect of the model to be rendered after shadow rendering, and as can be seen from fig. 17, both the model to be rendered and the smoke obtain a correct projection effect.

It should be noted that the distance field can also be used to perform shadow rendering on the model to be rendered. Optionally, as shown in the distance field algorithm diagram of fig. 18, as a ray travels forward, at each step the rendering system queries the distance field for the distance closest to the model to be rendered and then advances by that distance, until the distance value becomes 0 or negative (or falls below a preset value), which indicates that the ray has intersected the model to be rendered in the scene; otherwise, it is determined that the ray has not intersected the model to be rendered. An intersection means that the model to be rendered is occluded and a projection is generated; no intersection means that the model is not occluded and no projection is generated. Fig. 19 illustrates the projection effect of the model to be rendered, and as can be seen from fig. 19, the model to be rendered obtains a correct projection effect on the volume.

In summary, the texture data generated by the special effect software is directly converted into a custom binary file format and imported into the game engine to generate a three-dimensional texture supported by the game engine. During rendering, a rasterization algorithm correctly handles the volume rendering of the model to be rendered, and a ray tracing algorithm computes the projection of the volume effect. This approach handles the spatial relationships of volume rendering well and obtains correct light and shadow.

According to an embodiment of the present invention, there is further provided an embodiment of a rendering apparatus for a virtual model, where fig. 20 is a schematic diagram of the rendering apparatus for a virtual model according to the embodiment of the present invention, and as shown in fig. 20, the apparatus includes: an acquisition module 2001, a processing module 2003, a calculation module 2005, and a rendering module 2007.

The obtaining module 2001 is configured to obtain texture data corresponding to a model to be rendered; the processing module 2003 is configured to perform rasterization processing on the texture data to obtain volume rendering data corresponding to the model to be rendered; the calculating module 2005 is configured to perform ray tracing calculation on the volume rendering data to obtain shadow data corresponding to the model to be rendered; a rendering module 2007, configured to render the model to be rendered based on the volume rendering data and the shadow data, so as to obtain the target model.

It should be noted that the acquisition module 2001, the processing module 2003, the calculation module 2005, and the rendering module 2007 correspond to steps S602 to S608 in the above embodiment; the four modules are the same as the corresponding steps in terms of implementation examples and application scenarios, but are not limited to the disclosure of the above embodiment.

Optionally, the volume rendering data is used for performing coloring rendering on the model to be rendered, and the volume rendering data includes at least one of the following: density data and temperature data corresponding to the model to be rendered. The shadow data is used for rendering the shadow of the model to be rendered, and the shadow data includes at least one of the following: density data and temperature data corresponding to the shadow of the model to be rendered.

Optionally, the obtaining module includes: the device comprises a first reading module and a first extracting module. The first reading module is used for reading animation data to be played in a game scene, wherein the animation data consists of a plurality of frames of texture images; the first extraction module is used for extracting texture data from each frame of texture image contained in the animation data.

Optionally, the rendering apparatus for a virtual model further includes: the first storage module is used for storing the texture data corresponding to each frame of texture image according to the display sequence of each frame of texture image after the texture data corresponding to the model to be rendered is obtained, so as to obtain preset files, wherein each preset file stores the texture data corresponding to the texture image of the current frame.
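A minimal sketch of the per-frame storage follows, assuming one binary preset file per frame named by its display index; the naming scheme and file extension are illustrative, since the disclosure only requires one preset file per frame of texture data.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Writes each frame's texture data to its own preset file, in display
// order, so that playback can later read the file of the current frame.
bool storeFramesAsPresetFiles(const std::vector<std::vector<uint8_t>>& frames,
                              const std::string& directory) {
    for (std::size_t i = 0; i < frames.size(); ++i) {
        // Assumed naming scheme: the index encodes the display order.
        std::string path = directory + "/frame_" + std::to_string(i) + ".vol";
        FILE* file = std::fopen(path.c_str(), "wb");
        if (!file) return false;
        std::size_t written = std::fwrite(frames[i].data(), 1, frames[i].size(), file);
        std::fclose(file);
        if (written != frames[i].size()) return false;
    }
    return true;
}
```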

Optionally, the rendering apparatus for a virtual model further includes: the device comprises a second reading module, a conversion module, a second extraction module and a second storage module. The second reading module is used for reading texture data from the preset file corresponding to the texture image of the current frame after the texture data corresponding to each frame of texture image is stored according to the display sequence of each frame of texture image to obtain the preset file; the conversion module is used for converting the texture data into a binary file; the second extraction module is used for extracting density data and temperature data corresponding to the texture data from the binary file; and the second storage module is used for storing the density data into the first color channel corresponding to the texture data and storing the temperature data into the second color channel corresponding to the texture data to obtain the three-dimensional texture data corresponding to the texture data.
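The channel packing can be sketched as follows in C++; storing density in the red channel and temperature in the green channel of each voxel is one concrete reading of "first color channel" and "second color channel", assumed here for illustration.

```cpp
#include <cstddef>
#include <vector>

// One voxel of the resulting three-dimensional texture: density goes into
// the first color channel (R), temperature into the second (G).
struct VoxelRG {
    float r;  // density data extracted from the binary file
    float g;  // temperature data extracted from the binary file
};

// Assumes density and temperature arrays have equal length (one value per voxel).
std::vector<VoxelRG> packVolumeTexture(const std::vector<float>& density,
                                       const std::vector<float>& temperature) {
    std::vector<VoxelRG> texels(density.size());
    for (std::size_t i = 0; i < texels.size(); ++i) {
        texels[i].r = density[i];
        texels[i].g = temperature[i];
    }
    return texels;
}
```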

Optionally, the rendering apparatus for a virtual model further includes: the device comprises a first obtaining module and a first determining module. The first obtaining module is used for obtaining a rendering mark corresponding to a current rendering stage before rasterization processing is carried out on texture data to obtain volume rendering data corresponding to a model to be rendered; and the first determining module is used for determining the rendering algorithm corresponding to the current rendering stage according to the rendering mark.

Optionally, the first determining module includes: a second determining module and a third determining module. The second determining module is used for determining, when the rendering mark is the first mark, that the rendering algorithm corresponding to the current rendering stage is the rasterization algorithm, wherein the rasterization algorithm is used for rasterizing the texture data, and the first mark represents that the current rendering stage performs coloring rendering on the model to be rendered; the third determining module is used for determining, when the rendering mark is the second mark, that the rendering algorithm corresponding to the current rendering stage is the ray tracing algorithm, wherein the ray tracing algorithm is used for performing ray tracing calculation on the model to be rendered, and the second mark represents that the current rendering stage renders the shadow of the model to be rendered.
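A minimal sketch of the dispatch on the rendering mark; the enum names are placeholders for the first and second marks.

```cpp
// First mark: the current stage performs coloring rendering on the model.
// Second mark: the current stage renders the model's shadow.
enum class RenderMark { ColorModel, RenderShadow };
enum class Algorithm  { Rasterization, RayTracing };

// The first mark selects the rasterization algorithm; the second mark
// selects the ray tracing algorithm.
Algorithm selectAlgorithm(RenderMark mark) {
    return (mark == RenderMark::ColorModel) ? Algorithm::Rasterization
                                            : Algorithm::RayTracing;
}
```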

Optionally, the processing module includes: the device comprises a second obtaining module, a sampling module, a first calculating module, a fourth determining module and a first rendering module. The second acquisition module is used for acquiring a line-of-sight path corresponding to the virtual camera in the game scene; the sampling module is used for sampling the line-of-sight path to obtain a plurality of viewpoints corresponding to the line-of-sight path; the first calculation module is used for calculating the distance between a light source and each viewpoint in a game scene; the fourth determining module is used for determining illumination data corresponding to each viewpoint according to the distance; and the first rendering module is used for performing coloring rendering on the model to be rendered according to the illumination data to obtain volume rendering data.
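The sampling step can be sketched as below; the fixed step size and the inverse-square attenuation are assumptions made for illustration, as the disclosure only states that the illumination data is determined from the distance between the light source and each viewpoint.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float distanceBetween(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Samples viewpoints along the line-of-sight path of the virtual camera at
// fixed intervals, and derives the illumination data of each viewpoint from
// its distance to the light source (inverse-square falloff assumed).
std::vector<float> illuminationAlongSight(const Vec3& camera, const Vec3& viewDir,
                                          const Vec3& light, int sampleCount,
                                          float stepSize) {
    std::vector<float> illumination(sampleCount);
    for (int i = 0; i < sampleCount; ++i) {
        Vec3 viewpoint { camera.x + viewDir.x * stepSize * i,
                         camera.y + viewDir.y * stepSize * i,
                         camera.z + viewDir.z * stepSize * i };
        float d = distanceBetween(light, viewpoint);
        illumination[i] = 1.0f / (1.0f + d * d);  // assumed attenuation model
    }
    return illumination;
}
```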

Optionally, the first rendering module includes: a second computation module, a third computation module, and a second rendering module. The second calculation module is used for performing accumulation operation on the illumination data in the sight line direction corresponding to the sight line path to obtain the target density corresponding to the model to be rendered; the third calculation module is used for performing accumulation operation on the illumination temperature in the illumination direction corresponding to the illumination path of the light source to obtain a target temperature corresponding to the model to be rendered; and the second rendering module is used for rendering the model to be rendered according to the target density and the target temperature to obtain volume rendering data.
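Both accumulations reduce to the same operation over different sample sets; the plain sum below is a simplification, since a production integrator would typically weight each sample by step size and accumulated transmittance.

```cpp
#include <vector>

// Accumulates per-sample values along a path: illumination data along the
// sight direction yields the target density, and illumination temperature
// along the light direction yields the target temperature.
float accumulateAlongPath(const std::vector<float>& samples) {
    float total = 0.0f;
    for (float s : samples) total += s;
    return total;
}

// Usage (hypothetical inputs):
//   float targetDensity     = accumulateAlongPath(illuminationAlongSightPath);
//   float targetTemperature = accumulateAlongPath(temperatureAlongLightPath);
```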

Optionally, the calculation module includes: a fifth determination module, a sixth determination module, and a third rendering module. The fifth determining module is used for determining a light and shadow area corresponding to the model to be rendered according to the illumination direction; a sixth determining module, configured to determine density data corresponding to the shadow area; and the third rendering module is used for rendering the shadow area according to the density data corresponding to the shadow area to obtain the shadow data corresponding to the model to be rendered.

Optionally, the fifth determining module includes: a seventh determining module and an eighth determining module. The seventh determining module is used for determining a projection pixel in the model to be rendered and a position coordinate corresponding to the projection pixel according to the illumination path; and the eighth determining module is used for determining the initial position of the shadow area according to the position coordinates.

Optionally, the seventh determining module includes: the device comprises a third obtaining module, a ninth determining module, a sampling module, a tenth determining module and an eleventh determining module. The third acquisition module is used for acquiring the distance field corresponding to the light source; a ninth determining module for determining a target distance from the distance field that is closest to the model to be rendered; the sampling module is used for sampling the illumination path to obtain a plurality of illumination points; the tenth determining module is used for determining a target illumination point from the multiple illumination points according to the illumination direction corresponding to the light source and the target distance; and the eleventh determining module is used for determining the position of the target illumination point on the model to be rendered as a projection pixel when the distance between the target illumination point and the model to be rendered is smaller than a preset value.

Optionally, the sixth determining module includes: and the fourth calculation module is used for accumulating the pixel values corresponding to the light and shadow areas in the illumination direction to obtain density data corresponding to the light and shadow areas, wherein the density data represents transparency information of the light and shadow areas.
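One concrete way to turn accumulated density into transparency is Beer-Lambert extinction, sketched below; the exponential model is an assumption, as the disclosure states only that the accumulated density represents the transparency information of the light and shadow area.

```cpp
#include <cmath>
#include <vector>

// Accumulates density samples taken along the illumination direction and
// converts the sum into a transparency value for the light and shadow area
// (1 = fully transparent, 0 = fully opaque).
float shadowTransparency(const std::vector<float>& densityAlongLight,
                         float stepSize) {
    float opticalDepth = 0.0f;
    for (float d : densityAlongLight) opticalDepth += d * stepSize;
    return std::exp(-opticalDepth);  // Beer-Lambert extinction (assumed)
}
```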

Optionally, the rendering apparatus for a virtual model further includes: an adjustment module and a fourth rendering module. The adjusting module is used for adjusting the volume rendering data based on a preset playing component after rasterization processing is carried out on the texture data to obtain the volume rendering data corresponding to the model to be rendered, so as to obtain the adjusted volume rendering data; and the fourth rendering module is used for rendering the model to be rendered based on the adjusted volume rendering data and displaying the color information corresponding to the rendered model to be rendered.

Optionally, the playing component is provided with a first parameter, a second parameter and a third parameter, wherein the first parameter is used for specifying volume rendering data, the second parameter is used for specifying material data for rendering the model to be rendered, the third parameter is used for specifying other attribute data, and the other attribute data is used for playing the volume rendering data and the shadow data.
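An illustrative parameter block for the playing component follows; every field name is an assumption, chosen only to show how the three parameters might be grouped.

```cpp
#include <string>

// Hypothetical parameter block for the playing component.
struct PlayComponentParams {
    std::string volumeDataAsset;  // first parameter: the volume rendering data
    std::string materialAsset;    // second parameter: material data for shading
    // Third parameter: other attribute data used to play the volume
    // rendering data and the shadow data (example attributes below).
    float playbackSpeed = 1.0f;
    bool  loop          = true;
};
```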

According to another aspect of the embodiments of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is configured to execute the above-mentioned rendering method of the virtual model when running.

According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including one or more processors and a storage device for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the above-mentioned rendering method of the virtual model.

The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.

In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.

The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and these modifications and refinements should also be regarded as falling within the protection scope of the present invention.
