Virtual scene display method, device, equipment and storage medium

Document No.: 1119085 · Publication date: 2020-10-02

Reading note: this technique, "Virtual scene display method, device, equipment and storage medium", was designed and created by 郭畅 on 2020-07-30. Its main content is as follows: the application discloses a virtual scene display method, apparatus, device, and storage medium, belonging to the field of computer technology. With the technical solution provided by this application, when an occlusion element exists on the layer above the display level where an element to be rendered is located, that is, when the occlusion element occludes the element to be rendered, the terminal adjusts the determined map resolution information and lowers the resolution of the map corresponding to the element to be rendered. Rendering the element to be rendered with the reduced-resolution map reduces the consumption of computing resources when the terminal displays the virtual scene and improves the smoothness of terminal operation.

1. A method for displaying a virtual scene, the method comprising:

determining first map resolution information of a map corresponding to an element to be rendered according to a distance between the element to be rendered in a virtual scene and a virtual camera in the virtual scene, wherein the first map resolution information represents a map resolution corresponding to the distance, and the distance is negatively correlated with the map resolution;

in response to an occlusion element existing on the layer above the display level where the element to be rendered is located, determining second map resolution information of the map corresponding to the element to be rendered, wherein the map resolution represented by the first map resolution information is higher than the map resolution represented by the second map resolution information;

and displaying the virtual scene based on the map of the second map resolution information and the element to be rendered.

2. The method of claim 1, wherein the determining second map resolution information of the map corresponding to the element to be rendered in response to an occlusion element existing at an upper level of a display level at which the element to be rendered is located comprises:

in response to an occlusion element existing on the layer above the display level where the element to be rendered is located, determining, according to size information of the occlusion element, second map resolution information matching the size information for the map corresponding to the element to be rendered.

3. The method according to claim 2, wherein the determining, according to the size information of the occlusion element, the second map resolution information matching the size information for the map corresponding to the element to be rendered comprises:

in response to the size information of the occlusion element being smaller than or equal to first size information, determining the map resolution information of the map corresponding to the element to be rendered as the second map resolution information.

4. The method of claim 3, further comprising:

in response to the size information of the occlusion element being larger than second size information, determining the resolution information of the map corresponding to the element to be rendered as third map resolution information, wherein the third map resolution information is the map resolution information with the lowest map resolution, and the second size information is larger than or equal to the first size information;

rendering the element to be rendered based on the map of the third map resolution information corresponding to the element to be rendered, to obtain the virtual scene.

5. The method of claim 1, wherein in response to an occlusion element being present in an upper level of a display level in which the element to be rendered is located, the method further comprises:

determining the type of the element to be rendered;

determining, according to the type of the element to be rendered, map resolution information matching the type for the map corresponding to the element to be rendered.

6. The method according to claim 5, wherein the determining, according to the type of the element to be rendered, the map resolution information matching the type for the map corresponding to the element to be rendered comprises:

in response to the type of the element to be rendered being a first type, determining map resolution information matching the first type as fourth map resolution information, wherein the fourth map resolution information is the map resolution information with the highest map resolution, and the first type indicates that the importance degree of the element to be rendered in the virtual scene meets a first target condition;

after determining, according to the type of the element to be rendered, the map resolution information matching the type for the map corresponding to the element to be rendered, the method further includes:

rendering the element to be rendered based on the map of the fourth map resolution information corresponding to the element to be rendered, to obtain the virtual scene.

7. The method of claim 1, wherein after displaying the virtual scene, the method further comprises:

in response to the occlusion element not existing on the layer above the display level where the element to be rendered is located, displaying the virtual scene based on the map of the first map resolution information corresponding to the element to be rendered and the element to be rendered.

8. The method of claim 1, wherein the displaying a virtual scene based on the map of the second map resolution information and the element to be rendered comprises:

rendering the element to be rendered based on the map of the second map resolution information corresponding to the element to be rendered, to obtain the virtual scene.

9. The method of claim 1, wherein before determining the first map resolution information of the map corresponding to the element to be rendered according to the distance between the element to be rendered in the virtual scene and the virtual camera in the virtual scene, the method further comprises:

acquiring an initial map corresponding to the element to be rendered;

and reducing the map resolution of the initial map by a target step size to obtain a plurality of maps with different map resolutions, wherein each map at one map resolution corresponds to one piece of resolution information.

10. The method of claim 9, wherein before reducing the map resolution of the initial map by the target step size to obtain the plurality of maps with different map resolutions, the method further comprises:

determining the type of the element to be rendered;

determining to perform resolution reduction processing on the initial map in response to the type of the element to be rendered belonging to a first type.

11. An apparatus for displaying a virtual scene, the apparatus comprising:

a first resolution information determining module, configured to determine, according to a distance between an element to be rendered in a virtual scene and a virtual camera in the virtual scene, first map resolution information of a map corresponding to the element to be rendered, where the first map resolution information is used to represent a map resolution corresponding to the distance, and the distance and the map resolution are in negative correlation;

a second resolution information determining module, configured to determine, in response to an occlusion element existing on the layer above the display level where the element to be rendered is located, second map resolution information of the map corresponding to the element to be rendered, where the map resolution represented by the first map resolution information is higher than the map resolution represented by the second map resolution information;

and the display module is used for displaying the virtual scene based on the map of the second map resolution information and the element to be rendered.

12. The apparatus according to claim 11, wherein the second resolution information determining module is configured to, in response to an occlusion element existing on the layer above the display level where the element to be rendered is located, determine, according to size information of the occlusion element, second map resolution information matching the size information for the map corresponding to the element to be rendered.

13. The apparatus of claim 12, wherein the second resolution information determining module is configured to determine, in response to the size information of the occlusion element being smaller than or equal to the first size information, the map resolution information of the map corresponding to the element to be rendered as the second map resolution information.

14. A computer device, characterized in that the computer device comprises one or more processors and one or more memories, in which at least one program code is stored, which is loaded and executed by the one or more processors to implement the virtual scene display method according to any one of claims 1 to 10.

15. A computer-readable storage medium having at least one program code stored therein, the program code being loaded and executed by a processor to implement the virtual scene display method according to any one of claims 1 to 10.

Technical Field

The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for displaying a virtual scene.

Background

With the development of multimedia technology and the improvement of the computing power of terminals, the types of games that can be played on terminals are increasing, such as Turn-Based Games (TBG), Multiplayer Online Battle Arena (MOBA) games, and Role-Playing Games (RPG). When the terminal runs a game, elements to be rendered need to be rendered in real time; the rendered elements form the virtual scene and the game characters in the game, which the terminal can then display to the user.

Disclosure of Invention

The embodiments of the application provide a virtual scene display method, apparatus, device, and storage medium, which can reduce the consumption of computing resources during virtual scene display and improve the smoothness of terminal operation. The technical solution is as follows:

in one aspect, a method for displaying a virtual scene is provided, where the method includes:

determining first map resolution information of a map corresponding to an element to be rendered according to a distance between the element to be rendered in a virtual scene and a virtual camera in the virtual scene, wherein the first map resolution information represents a map resolution corresponding to the distance, and the distance is negatively correlated with the map resolution;

in response to an occlusion element existing on the layer above the display level where the element to be rendered is located, determining second map resolution information of the map corresponding to the element to be rendered, wherein the map resolution represented by the first map resolution information is higher than the map resolution represented by the second map resolution information;

and displaying the virtual scene based on the map of the second map resolution information and the element to be rendered.

In one aspect, a virtual scene display apparatus is provided, the apparatus including:

a first resolution information determining module, configured to determine, according to a distance between an element to be rendered in a virtual scene and a virtual camera in the virtual scene, first map resolution information of a map corresponding to the element to be rendered, where the first map resolution information is used to represent a map resolution corresponding to the distance, and the distance and the map resolution are in negative correlation;

a second resolution information determining module, configured to determine, in response to an occlusion element existing on the layer above the display level where the element to be rendered is located, second map resolution information of the map corresponding to the element to be rendered, where the map resolution represented by the first map resolution information is higher than the map resolution represented by the second map resolution information;

and the display module is used for displaying the virtual scene based on the map of the second map resolution information and the element to be rendered.

In a possible embodiment, the apparatus further comprises:

a third resolution information determining module, configured to determine, in response to that size information of the occlusion element is larger than second size information, resolution information of a map corresponding to the element to be rendered as third map resolution information, where the third map resolution information is map resolution information with a lowest map resolution, and the second size information is larger than or equal to the first size information;

the display module is further configured to render the element to be rendered based on the map corresponding to the element to be rendered of the third map resolution information, so as to obtain the virtual scene.

In a possible embodiment, the apparatus further comprises:

a type determination module for determining the type of the element to be rendered;

and the fourth map resolution information determining module is used for determining, according to the type of the element to be rendered, the map resolution information matching the type for the map corresponding to the element to be rendered.

In a possible implementation manner, the fourth map resolution information determining module is configured to determine, in response to the type of the element to be rendered being a first type, map resolution information matching the first type as fourth map resolution information, where the fourth map resolution information is the map resolution information with the highest map resolution, and the first type indicates that the importance degree of the element to be rendered in the virtual scene meets a first target condition;

the display module is further configured to render the element to be rendered based on the map corresponding to the element to be rendered of the fourth map resolution information, so as to obtain the virtual scene.

In a possible implementation manner, the display module is further configured to, in response to that the occlusion element no longer exists at an upper layer of a display level where the element to be rendered is located, display the virtual scene based on the map corresponding to the element to be rendered and the element to be rendered of the first map resolution information.

In a possible implementation manner, the display module is further configured to render the element to be rendered based on the map corresponding to the element to be rendered of the second map resolution information, so as to obtain the virtual scene.

In a possible embodiment, the apparatus further comprises:

the map obtaining module is used for obtaining an initial map corresponding to the element to be rendered;

and the resolution reduction module is used for reducing the map resolution of the initial map by a target step size to obtain a plurality of maps with different map resolutions, where each map at one map resolution corresponds to one piece of resolution information.

In a possible embodiment, the apparatus further comprises:

a processing module for determining a type of the element to be rendered; determining to perform resolution reduction processing on the initial map in response to the type of the element to be rendered belonging to a first type.

In one aspect, a computer device is provided that includes one or more processors and one or more memories having at least one program code stored therein, the program code being loaded and executed by the one or more processors to implement the operations performed by the virtual scene display method.

In one aspect, a computer-readable storage medium having at least one program code stored therein is provided, the program code being loaded and executed by a processor to implement the operations performed by the virtual scene display method.

In one aspect, a computer program product or a computer program is provided, the computer program product or the computer program comprising computer program code, the computer program code being stored in a computer-readable storage medium, the computer program code being read by a processor of a computer device from the computer-readable storage medium, the computer program code being executed by the processor to cause the computer device to perform the virtual scene display method provided in the various alternative implementations described above.

With the technical solution provided by this application, when an occlusion element exists on the layer above the display level where the element to be rendered is located, that is, when the occlusion element occludes the element to be rendered, the determined map resolution information is adjusted and the resolution of the map corresponding to the element to be rendered is reduced. Rendering the element to be rendered with the reduced-resolution map reduces the consumption of computing resources when the terminal displays the virtual scene and improves the smoothness of terminal operation.

Drawings

To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

Fig. 1 is a schematic diagram of an implementation environment of a virtual scene display method provided in an embodiment of the present application;

FIG. 2 is a schematic view of an interface provided by an embodiment of the present application;

FIG. 3 is a schematic view of an interface provided by an embodiment of the present application;

FIG. 4 is a schematic view of an interface provided by an embodiment of the present application;

FIG. 5 is a schematic view of an interface provided by an embodiment of the present application;

FIG. 6 is a schematic view of an interface provided by an embodiment of the present application;

fig. 7 is a flowchart of a virtual scene display method according to an embodiment of the present application;

fig. 8 is a flowchart of a virtual scene display method according to an embodiment of the present application;

FIG. 9 is a schematic view of an interface provided by an embodiment of the present application;

fig. 10 is a flowchart of a virtual scene display method according to an embodiment of the present application;

fig. 11 is a schematic structural diagram of a virtual scene display apparatus according to an embodiment of the present application;

fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present application;

fig. 13 is a schematic structural diagram of a server according to an embodiment of the present application.

Detailed Description

To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.

The terms "first", "second", and the like in this application are used to distinguish between identical or similar items having substantially the same function; it should be understood that "first", "second", and "nth" imply no logical or temporal dependency and no limitation on number or order of execution.

The term "at least one" in this application refers to one or more, and "a plurality" means two or more; for example, a plurality of maps refers to two or more maps.

Virtual space: a space constructed for designing a virtual scene, which may also be referred to as a game space. The virtual space has its own coordinate system, composed of three mutually perpendicular axes x, y, and z, and each virtual object in the virtual scene has a unique coordinate value in this coordinate system, which may also be called the world coordinate system.

Element to be rendered: may also be referred to as a model to be rendered, that is, a model designed by a technician to have a certain shape and volume. A model to be rendered that has not been rendered is generally a pure color, such as white or gray. The model to be rendered can be a simulation of an object in the real world; for example, the element to be rendered can be a building model, an animal model, or the like.

Map: may also be referred to as a texture map (Texture Mapping) or a texture. When rendering the model to be rendered, a technician can select different maps to render the element to be rendered, obtaining different rendering effects. For example, if the model to be rendered is a building, the technician can render the building red using map A, blue using map B, and other colors using other maps.

Virtual scene: is a virtual scene that is displayed (or provided) by an application program when the application program runs on a terminal. The virtual scene may be a simulation environment of a real world, a semi-simulation semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application. For example, a virtual scene may include sky, land, ocean, etc., the land may include environmental elements such as deserts, cities, etc., and a user may control a virtual object to move in the virtual scene.

Virtual camera: a virtual device for capturing and displaying the game world for the player; the image the player sees through the screen is the image shot by a virtual camera, and at least one virtual camera exists in the virtual space. When only one virtual camera exists in the virtual space, the player observes the virtual scene from a single angle of view; when a plurality of cameras exist in the virtual space, the player can switch the angle of view from which the virtual scene is observed through different operations.

Display level: the terminal can divide the display of the virtual scene into a plurality of display levels. For example, when the terminal displays the virtual scene, a game character is displayed on a first display level; after the player controls the game character to complete a certain virtual event, the terminal can display a completion interface on a level above the first display level, and this interface can cover the game character.

Turn-based game: its main characteristic is that combat in the game is not real-time; both parties in a battle can only execute actions in their own turns and cannot act in the opponent's turn. For example, if the current turn is one's own operation turn, the user can control the virtual object to perform actions such as casting a skill, using a "normal attack", or using props; if the current turn is the enemy's operation turn, the user cannot control any action and can only watch the enemy virtual object perform different actions.

Virtual object: refers to a movable object in a virtual scene. The movable object can be a virtual character, a virtual animal, an animation character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in the virtual scene. The virtual object may be an avatar in the virtual scene that is virtual to represent the user. The virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene and occupying a portion of the space in the virtual scene.

Alternatively, the virtual object may be a user character controlled through operations on the client, an Artificial Intelligence (AI) set in the virtual scene battle through training, or a Non-Player Character (NPC) set in the virtual scene interaction. Alternatively, the virtual object may be a virtual character competing in the virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.

Fig. 1 is a schematic diagram of an implementation environment of a virtual scene display method according to an embodiment of the present application, and referring to fig. 1, the implementation environment may include a terminal 110 and a server 140.

The terminal 110 is connected to the server 140 through a wireless network or a wired network. Optionally, the terminal 110 is a device such as a smart phone, a tablet computer, a smart television, a desktop computer, a vehicle computer, and a portable computer. The terminal 110 is installed and operated with an application program supporting the display of a virtual scene.

Optionally, the server 140 is an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.

Optionally, the terminal 110 generally refers to one of a plurality of terminals, and the embodiment of the present application is illustrated by the terminal 110.

Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminal may be only one, or several tens or hundreds, or more, and in this case, other terminals are also included in the implementation environment. The number and the type of the terminals are not limited in the embodiments of the present application.

In the embodiments of the present application, the server or the terminal may serve as the execution subject to implement the technical solution provided herein, or the technical solution may be implemented through interaction between the terminal and the server; this is not limited in the embodiments of the present application. The following description takes the terminal as the execution subject as an example:

in order to more clearly explain the technical solution provided by the present application, first, a case that different types of games may trigger the technical solution provided by the present application during the running process is explained:

taking turn-based games as an example, a large number of NPCs exist in turn-based games, and the user can control the virtual object to interact with them. When the user controls the virtual object to interact with an NPC, the interface 200 shown in fig. 2 pops up on the screen. The interface 200 is composed of an NPC avatar 201, an NPC name 202, and interactive content 203; through the interface 200, the user can learn this information and thus control the virtual object to execute different game tasks, or interact with the NPC by selecting different dialogue options.

In the turn-based game, besides the interface that pops up when the user controls the virtual object to interact with an NPC, when the game enters a battle scene, the interface 300 shown in fig. 3 pops up; the interface 300 is used to prompt the user that the battle has started and to indicate which side the current turn belongs to.

Taking a MOBA game as an example, when the user purchases a virtual item, the terminal pops up the interface 400 shown in fig. 4; through the interface 400, the user can view the equipped virtual items, browse different types of virtual items, purchase a needed virtual item, and sell equipped items. The user can also view the match information of both sides, such as the number of kills, deaths, and assists of each player, through the interface 500 shown in fig. 5. In addition, the user can view an introduction to the different virtual skills of the virtual object under their control through the interface 600 shown in fig. 6.

For RPG games, the logic of interface pop-up is similar to the turn-based game, and is not described herein.

For the turn-based game, the MOBA game, or the RPG game, the popped-up interface 200, interface 300, interface 400, interface 500, or interface 600 partially occludes the originally displayed virtual scene, and the user's attention is focused on the popped-up interface; at this time, the terminal can reduce the consumption of computing resources by executing the technical solution provided by the present application.

It should be noted that, in the following description of the technical solutions provided in the present application, a terminal is taken as an example of an execution subject. In other possible embodiments, the server may also be used as an execution subject to execute the technical solution provided in the present application, and the embodiment of the present application is not limited to the type of the execution subject.

Fig. 7 is a flowchart of a virtual scene display method provided in an embodiment of the present application, and referring to fig. 7, the method includes:

701. The terminal determines first map resolution information of the map corresponding to an element to be rendered according to the distance between the element to be rendered in the virtual scene and the virtual camera in the virtual scene, where the first map resolution information represents the map resolution corresponding to the distance, and the distance is negatively correlated with the map resolution.

Optionally, the distance between the element to be rendered and the virtual camera in the virtual scene may be determined by the terminal according to coordinates of the element to be rendered and the virtual camera in the virtual scene, where the coordinates in the virtual scene are also coordinates in the virtual space, and may also be referred to as world coordinates.
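As a sketch, the distance-based selection in step 701 might look like the following. The threshold table, function name, and concrete resolutions are illustrative assumptions; the text only requires that the distance and the map resolution be negatively correlated.

```python
import math

# Illustrative distance thresholds (world units) paired with map resolutions.
DISTANCE_TO_RESOLUTION = [
    (10.0, 1024),   # close to the virtual camera: highest-resolution map
    (50.0, 512),
    (200.0, 256),
]
FALLBACK_RESOLUTION = 128  # beyond the farthest threshold

def first_map_resolution(element_pos, camera_pos):
    """Determine the first map resolution from the distance between the
    element to be rendered and the virtual camera, both given in world
    coordinates of the virtual space."""
    distance = math.dist(element_pos, camera_pos)
    for max_distance, resolution in DISTANCE_TO_RESOLUTION:
        if distance <= max_distance:
            return resolution
    return FALLBACK_RESOLUTION
```

For example, an element 5 world units from the camera would receive the highest-resolution map, while one 500 units away would fall back to the lowest.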

702. In response to an occlusion element existing on the layer above the display level where the element to be rendered is located, the terminal determines second map resolution information of the map corresponding to the element to be rendered.

Optionally, the occlusion element is a display element capable of occluding a lower display level. For example, a game character a is displayed on display level A; during game operation, an element c appears on a layer above display level A, and since displaying element c occludes the display of game character a, element c is an occlusion element.

703. And the terminal displays the virtual scene based on the mapping of the second mapping resolution information and the element to be rendered.

Through the technical scheme provided by the application, when an occlusion element exists on the upper layer of the display level where the element to be rendered is located, that is, when the occlusion element occludes the element to be rendered, the terminal adjusts the determined map resolution information and reduces the resolution of the map corresponding to the element to be rendered. Rendering the element to be rendered with the reduced-resolution map reduces the consumption of computing resources when the terminal displays the virtual scene and improves the smoothness of terminal operation.

It should be noted that there are multiple elements to be rendered in the virtual scene, and the position of each element to be rendered in the virtual scene may be different. Fig. 8 is a flowchart of a virtual scene display method provided in an embodiment of the present application, and referring to fig. 8, the method includes:

801. and the terminal acquires an initial map corresponding to the element to be rendered.

The initial map may come from the Internet, may be designed for the element to be rendered by a technician, or may be synthesized by the terminal.

In a possible implementation manner, the terminal can obtain the initial map corresponding to the identifier of the element to be rendered from the storage space according to the identifier of the element to be rendered.

802. The terminal reduces the map resolution of the initial map by the target step to obtain a plurality of maps with different map resolutions, wherein each map resolution corresponds to one piece of resolution information.

Optionally, the target step is the ratio by which the map resolution is reduced. For example, the terminal can set the target step to 0.5, indicating that the terminal reduces the map resolution to 1/2 in one processing pass.

In a possible implementation manner, the terminal can down-sample the initial map by the target step, thereby reducing the map resolution of the initial map and obtaining a plurality of maps with different map resolutions, where down-sampling means sampling with a reduced number of sampling points.

In this embodiment, the terminal can obtain multiple maps with different map resolutions from one initial map. Since these maps are obtained by down-sampling the same initial map, their image contents are the same although their resolutions differ, and the terminal can subsequently select maps of different resolutions for image rendering according to different scenes, thereby improving the display effect of the virtual scene.

Downsampling the initial map can be achieved in either of two ways:

In method 1, taking a target step of 0.5 as an example, after the terminal down-samples the initial map with a map resolution of 512 × 512, a map with a map resolution of 256 × 256 is obtained. The terminal then down-samples the 256 × 256 map to obtain a 128 × 128 map, then down-samples the 128 × 128 map to obtain a 64 × 64 map, and so on, obtaining a plurality of maps with different map resolutions. The number of sampling points in each down-sampling is 1/4 of that in the previous down-sampling.

In method 2, again taking a target step of 0.5 as an example, after the terminal down-samples the 512 × 512 initial map once, a 256 × 256 map is obtained. In the second down-sampling, the terminal again down-samples the 512 × 512 initial map, this time with 1/4 of the sampling points of the first down-sampling, obtaining a 128 × 128 map, and so on, obtaining a plurality of maps with different map resolutions.

It should be noted that the terminal can perform downsampling on the initial map in any one of the two manners, which is not limited in the embodiment of the present application.
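Under the assumptions above (square maps, target step 0.5), both methods yield the same chain of map resolutions; a minimal sketch of that chain (the function name is hypothetical):

```python
def mip_chain(initial_resolution, target_step=0.5, floor=1):
    """Resolutions obtained by repeatedly reducing the previous map by the
    target step (method 1), which equals reducing the initial map by
    target_step ** k at the k-th pass (method 2)."""
    chain = [initial_resolution]
    while chain[-1] > floor:
        chain.append(int(chain[-1] * target_step))
    return chain

print(mip_chain(512))  # → [512, 256, 128, 64, 32, 16, 8, 4, 2, 1]
```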

The following describes a method for downsampling a terminal map:

For example, a map resolution of 512 × 512 means that the map includes 512 rows of pixels and 512 columns of pixels. If the terminal down-samples the map at a ratio of 1/2, then for each row the terminal samples every other pixel, so a row originally consisting of 512 pixels becomes a row of 256 pixels; similarly, for each column the terminal samples every other pixel, so a column originally consisting of 512 pixels becomes a column of 256 pixels. The map with a map resolution of 512 × 512 is thus down-sampled into a map with a map resolution of 256 × 256.
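The every-other-pixel sampling just described can be sketched on a toy 2-D pixel grid (an illustration only, not a production resampler):

```python
def downsample_half(pixels):
    """Keep every other row and every other column, halving each dimension."""
    return [row[::2] for row in pixels[::2]]

# A 4 x 4 toy image; values are pixel intensities.
image_4x4 = [
    [ 0,  1,  2,  3],
    [10, 11, 12, 13],
    [20, 21, 22, 23],
    [30, 31, 32, 33],
]
print(downsample_half(image_4x4))  # → [[0, 2], [20, 22]]
```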

Of course, the above description of downsampling the map is only performed for convenience of understanding, and optionally, the terminal may obtain multiple maps with different map resolutions by using different downsampling methods, for example, the terminal may downsample the initial map by using Wavelet Compression (Wavelet Compression) and may downsample the initial map by using Discrete Cosine Transform (DCT), and the method of downsampling the initial map in this embodiment of the present application is not limited.

Optionally, after obtaining the maps with multiple map resolutions, the terminal may perform anti-aliasing processing on them so that the edges of the maps are smoother and closer to the real object. After the anti-aliasing processing, the terminal subsequently renders based on the processed maps and can obtain a more realistic rendering effect.

In one possible embodiment, the terminal determines the type of the element to be rendered. In response to the type of the element to be rendered belonging to a first type, the terminal determines not to perform the map-resolution-reduction processing on the initial map, where the first type indicates that the importance degree of the element to be rendered in the virtual scene meets a first target condition.

The importance degree of the element to be rendered and the first target condition can be designed by a technician according to the actual situation of the game. For example, the technician can set the importance degree of the element to be rendered corresponding to a game character to 9, set the importance degree of the element to be rendered corresponding to a virtual tree to 3, and set the first target condition to an importance degree greater than 6. Then, in response to the type of the element to be rendered being a game character, the terminal determines that its importance degree meets the first target condition; in response to the type being a virtual tree, the terminal determines that its importance degree does not meet the first target condition.

In this embodiment, since generating the maps with multiple map resolutions requires more storage space, the terminal can determine whether to perform the processing of reducing the map resolution on the initial map corresponding to the element to be rendered according to the type of the element to be rendered. For some more important elements to be rendered, the terminal can render the elements to be rendered by using the initial map all the time in the following instead of performing the processing of reducing the resolution of the map on the corresponding initial map, so that the rendering effect of the elements to be rendered is better. For some elements to be rendered with lower importance, the terminal can perform resolution reduction processing on the corresponding initial map, and select images with different resolutions to render the elements to be rendered in the subsequent rendering process, so that the computing resources of the terminal are saved.

For example, the terminal can determine whether the element to be rendered belongs to the first type according to the type indicated by the identifier of the element to be rendered. In response to the element to be rendered not belonging to the first type, the terminal performs resolution-reduction processing on the corresponding initial map, that is, in the subsequent image-rendering process, the terminal selects maps of different resolutions according to different scenes to render the element. In response to the element to be rendered belonging to the first type, the terminal does not perform resolution-reduction processing on the corresponding initial map, that is, in the subsequent image-rendering process, the terminal always uses the initial map to render the element, so as to obtain a better image-rendering effect.

It should be noted that steps 801 and 802 are optional. The terminal can execute them immediately before step 803, or execute them in advance and store the obtained maps of different map resolutions in a storage space; when the terminal needs to render an element to be rendered, it can then obtain the maps directly from the storage space. In that case, when displaying a virtual scene, the terminal does not need to execute steps 801 and 802 again and directly executes step 803.

803. The terminal determines first map resolution information of a map corresponding to an element to be rendered according to the distance between the element to be rendered in the virtual scene and a virtual camera in the virtual scene, wherein the first map resolution information is used for representing the map resolution corresponding to the distance, and the distance and the map resolution are in negative correlation.

In one possible implementation, the terminal can determine the distance between the element to be rendered in the virtual scene and the virtual camera according to their coordinates in the virtual scene. The terminal can then query the resolution information corresponding to that distance, namely the first map resolution information of the map corresponding to the element to be rendered. The distance is negatively correlated with the map resolution: the longer the distance between the element to be rendered and the virtual camera, the lower the map resolution represented by the first map resolution information; the shorter the distance, the higher the map resolution represented by the first map resolution information.

In this embodiment, the terminal can determine the first map resolution information of the map corresponding to the element to be rendered according to the distance between the element to be rendered in the virtual scene and the virtual camera, that is, the terminal can determine the resolution of the map rendering the element to be rendered according to the distance between the element to be rendered in the virtual scene and the virtual camera. For the element to be rendered which is close to the virtual camera, the terminal can determine a higher chartlet resolution ratio for the element to be rendered, so that the rendering effect is improved. For the elements to be rendered which are far away from the virtual camera, the terminal can determine a lower chartlet resolution for the elements to be rendered, so that the consumption of computing resources in the subsequent rendering process is reduced.

For example, the terminal can use a map level to represent the map resolution information, that is, the map level represents the resolution of the map: the first map level represents the initial map corresponding to the element to be rendered, i.e., the highest map resolution; the second map level represents a first map whose resolution is 1/2 that of the initial map; the third map level represents a second map whose resolution is 1/2 that of the first map; and so on. The terminal determines the distance 5 between the element to be rendered and the virtual camera according to the coordinates (1, 2, 3) of the element to be rendered and the coordinates (1, 5, 7) of the virtual camera in the virtual scene, and can then determine the map level corresponding to distance 5, for example level 3.

A method for determining, by a terminal, mapping resolution information corresponding to a distance between an element to be rendered in a virtual scene and a virtual camera according to the distance is described below.

In a possible embodiment, before performing step 803, the terminal can determine the maximum distance and the minimum distance between the element to be rendered and the virtual camera in the virtual scene, and from them determine the distance interval in which the distance between the element to be rendered and the virtual camera lies. The terminal divides this distance interval into a plurality of sub-intervals according to the number of map levels of the map corresponding to the element to be rendered; the number of sub-intervals equals the number of pieces of map resolution information of the map, each sub-interval corresponds to one piece of map resolution information, and the terminal can establish the correspondence between sub-intervals and map resolution information. When executing step 803, the terminal can then determine the map resolution information of the map corresponding to the element to be rendered according to the distance between the element and the virtual camera and the correspondence between sub-intervals and map levels.

Taking the case where the terminal represents map resolution information by map levels as an example, before executing step 803 the terminal determines that the maximum distance between the element to be rendered and the virtual camera is 8 and the minimum distance is 1, and accordingly determines the distance interval [1, 8]. If the map corresponding to the element to be rendered includes 7 map levels, the terminal can divide [1, 8] into 7 sub-intervals [1, 2), [2, 3), [3, 4), [4, 5), [5, 6), [6, 7) and [7, 8], corresponding to map levels one through seven respectively, so that the map resolutions corresponding to the sub-intervals decrease in sequence. If the distance between the element to be rendered and the virtual camera is, for example, 5, it falls into the sub-interval [5, 6), and the terminal determines the map level corresponding to [5, 6) as the map level corresponding to the element to be rendered.
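The sub-interval lookup can be sketched as follows, assuming equal-width sub-intervals over the distance range with one map level per sub-interval (the function name and the 7-level setup are illustrative assumptions, not the patent's prescribed values):

```python
def map_level_for_distance(distance, min_d, max_d, num_levels):
    """Map a camera distance to a map level (level 1 = highest resolution).
    [min_d, max_d] is split into num_levels equal sub-intervals; larger
    distances select lower-resolution (higher-numbered) levels."""
    if distance >= max_d:
        return num_levels
    width = (max_d - min_d) / num_levels
    return int((distance - min_d) // width) + 1

# With min 1, max 8 and 7 levels, the sub-intervals are [1,2), [2,3), ..., [7,8]:
print(map_level_for_distance(5, 1, 8, 7))  # → 5  (distance 5 falls in [5,6))
```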

804. And in response to the existence of the shielding element on the upper layer of the display level where the element to be rendered is located, the terminal determines second mapping resolution information of the mapping corresponding to the element to be rendered, wherein the mapping resolution represented by the first mapping resolution information is higher than that represented by the second mapping resolution information.

In a possible implementation manner, in response to an occlusion element existing on the upper layer of the display level where the element to be rendered is located, the terminal determines, according to the size information of the occlusion element, second map resolution information matching that size information for the map corresponding to the element to be rendered.

In this embodiment, the terminal can determine the second map resolution information of the map corresponding to the element to be rendered according to the size information of the occlusion element, so that when occlusion elements of different sizes are displayed on the upper layer of the display level where the element to be rendered is located, different second map resolution information can be determined. This personalizes the determination of the second map resolution information and improves the effect of subsequent image rendering of the element to be rendered.

The above embodiment is described in three cases. In response to the size information of the occlusion element being smaller than or equal to first size information, the terminal determines the second map resolution information of the map corresponding to the element to be rendered as in case 1 below. In response to the size information of the occlusion element being larger than second size information, where the second size information is larger than or equal to the first size information, the terminal no longer determines second map resolution information but instead determines third map resolution information as in case 2 below. In case 2, the second size information is equal to the first size information; in case 3, the second size information is greater than the first size information.

In case 1, in response to that the size information of the occlusion element is less than or equal to the first size information, the terminal can determine the map resolution information of the map corresponding to the element to be rendered as the second map resolution information.

In this case, when an occlusion element with a smaller area exists on the upper layer of the display level where the element to be rendered is located, the terminal can reduce the resolution of the map corresponding to the element to be rendered and render the element with the lower-resolution map, thereby reducing the consumption of computing resources when the terminal renders the element.

For example, the terminal uses map levels to represent map resolution information, and the first map level of the map corresponding to the element to be rendered is level three. The terminal determines that the size information of the occlusion element is 3 square inches and the first size information is 4 square inches, so the terminal determines that the size information of the occlusion element is smaller than the first size information. The terminal can then determine the map level of the map corresponding to the element to be rendered as a second map level, for example level five, where the second map level can be obtained by adding a level offset to the first map level, that is, level three plus two levels, the offset being two levels. Of course, the second map level can also be determined directly by the terminal, which is not limited in the embodiment of the present application.

In case 2, the second size information is equal to the first size information. In response to the size information of the occlusion element being larger than the second size information, the terminal can determine the map resolution information of the map corresponding to the element to be rendered as third map resolution information, where the third map resolution information is the map resolution information with the lowest map resolution.

In this case, when an occlusion element with a large area exists on the upper layer of the display level where the element to be rendered is located, the rendered element is occluded by the occlusion element and the user cannot see it. The terminal can therefore directly reduce the resolution of the map corresponding to the element to be rendered to the minimum and render the element with the lowest-resolution map, reducing the consumption of computing resources when the terminal renders the element.

For example, the terminal uses a map level to represent map resolution information, the map level of the map corresponding to the element to be rendered is from one level to nine levels, the resolution corresponding to one level is the highest, and the resolution corresponding to the nine levels is the lowest, the terminal determines that the size information of the shielding element is 6 square inches, and the second size information is 5 square inches, so that the terminal can determine that the size information of the shielding element is larger than the second size information. The terminal can determine the map level of the map corresponding to the element to be rendered as a third map level, namely nine levels.

Case 3, the second size information is larger than the first size information, and in response to that the size information of the occlusion element is smaller than or equal to the first size information, the terminal can determine the map resolution information of the map corresponding to the element to be rendered as the second map resolution information in the manner shown in case 1; in response to that the size information of the occlusion element is greater than or equal to the second size information, the terminal can determine the map resolution information of the map corresponding to the element to be rendered as the third map resolution information in the manner shown in case 2. In response to that the size information of the occlusion element is larger than the first size information and smaller than the second size information, the terminal can determine the map resolution information of the map corresponding to the element to be rendered as fifth map resolution information, wherein the fifth map resolution information is resolution information between the second resolution information and the third resolution information.
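The three cases can be condensed into a single decision function; this is an illustrative sketch in which the return labels name the resolution information of cases 1-3 above and are not part of the patent's terminology:

```python
REDUCED = "second"  # case 1: moderately lowered resolution info
MIDDLE = "fifth"    # case 3 middle band: between second and third
LOWEST = "third"    # case 2: lowest-resolution info

def pick_resolution_info(occluder_size, first_size, second_size):
    """Choose which lowered resolution info to use, per cases 1-3 above.
    Assumes second_size >= first_size; sizes follow the examples' units."""
    if occluder_size <= first_size:
        return REDUCED   # small occluder: moderate reduction
    if occluder_size >= second_size:
        return LOWEST    # large occluder: drop to lowest resolution
    return MIDDLE        # in between: intermediate resolution

print(pick_resolution_info(3, 4, 5))    # → 'second'
print(pick_resolution_info(6, 4, 5))    # → 'third'
print(pick_resolution_info(4.5, 4, 5))  # → 'fifth'
```

When the second size equals the first size (case 2), the middle band is empty and the function only ever returns the moderate or lowest reduction.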

Of course, after the terminal determines the second map resolution information or the third map resolution information of the map corresponding to the element to be rendered according to the size information of the occlusion element, the terminal can also determine the fourth map resolution information of the map corresponding to the element to be rendered according to the type of the element to be rendered, and a method for determining the fourth map resolution information by the terminal according to the type of the element to be rendered is described below:

in a possible implementation manner, the terminal can determine the type of the element to be rendered, and determine the chartlet resolution information of the chartlet corresponding to the element to be rendered and the type matching according to the type of the element to be rendered.

In the embodiment, the terminal can determine different chartlet resolution information for the elements to be rendered according to the types of the elements to be rendered, so that personalized rendering of the elements to be rendered of different types is realized, and the display effect of the virtual scene is improved.

For example, in response to the type of the element to be rendered being the first type, the terminal determines the map resolution information matching the first type as fourth map resolution information, where the fourth map resolution information is the map resolution information with the highest map resolution, and the first type indicates that the importance degree of the element to be rendered in the virtual scene meets the first target condition. That is to say, the terminal can determine the importance degree of the element to be rendered in the virtual scene according to its type; for an element whose importance degree meets the first target condition, i.e., a more important element, the resolution of the corresponding map is set to the highest map resolution, so that more important elements are always rendered with the highest-resolution map, improving their display effect after rendering.

In response to that the type of the element to be rendered is not the first type, the terminal can determine the map resolution information of the map corresponding to the element to be rendered according to the size of the occlusion element, and the principle and the previous description belong to the same inventive concept, which is not described herein again.

805. And the terminal displays the virtual scene based on the mapping of the second mapping resolution information and the element to be rendered.

In a possible implementation manner, in response to that the size information of the occlusion element is smaller than or equal to the first size information, the terminal can render the element to be rendered based on the map of the second map resolution information to obtain the virtual scene.

In a possible implementation manner, in response to that the size information of the occlusion element is larger than the second size information, the terminal can render the element to be rendered based on the map of the third map resolution information to obtain the virtual scene.

In a possible implementation manner, in response to that the second size information is larger than the first size information and the size information of the occlusion element is larger than the first size information and smaller than the second size information, the terminal can render the element to be rendered based on the map of the fifth map resolution information to obtain the virtual scene.

In a possible implementation manner, in response to that the type of the element to be rendered is the first type, the terminal can render the element to be rendered based on the map of the fourth map resolution information to obtain the virtual scene.

Optionally, after the terminal performs step 805, it can also perform step 806 to re-render the element to be rendered.

806. In response to no occlusion element existing on the upper layer of the display level where the element to be rendered is located, the terminal displays the virtual scene based on the map, of the first map resolution information, corresponding to the element to be rendered and the element to be rendered.

It should be noted that the descriptions of steps 801-806 and of steps 701-703 belong to the same inventive concept, and details are not described herein again.

The following describes the technical solutions provided in steps 801-806 above, taking as an example game development in which technicians use Unity (a game engine) as the development tool. The description below uses Unity terminology; for ease of understanding, the correspondence between the terms used in steps 801-806 and the terms used in Unity is described first.

The Mipmap in the following description corresponds to the map in steps 801-806 above, and the Mipmap level corresponds to the map resolution information in those steps.

Unity includes a texture mapping function called Mipmap. Through Mipmap, an initial map corresponding to an element to be rendered can be processed into multiple maps of different resolutions, called mipmaps, and Mipmap uses Mipmap levels to represent the maps of different resolutions. Referring to fig. 9, a technician can switch the display among maps of different Mipmap levels by dragging the slider 901 at the upper right corner in Unity. Mipmap includes 10 levels, from level 0 to level 9, where level 0 represents the highest map resolution, i.e., the clearest map, and level 9 represents the lowest map resolution, i.e., the most blurred map. From level 0 to level 9, the map resolutions are halved in sequence, e.g., 512 × 512, 256 × 256, 128 × 128, 64 × 64, 32 × 32, 16 × 16, 8 × 8, 4 × 4, 2 × 2, and 1 × 1. Fig. 9 displays a map 902 with a Mipmap level of 5.
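The level-by-level halving from 512 × 512 down to 1 × 1 can be written down directly (an engine-independent sketch; the helper name is hypothetical):

```python
def mipmap_resolution(level, base=512):
    """Side length of the Mipmap at a given level: halved per level, floor of 1."""
    return max(base >> level, 1)

print([mipmap_resolution(k) for k in range(10)])
# → [512, 256, 128, 64, 32, 16, 8, 4, 2, 1]
```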

The following describes a flow of the technical solution provided in the present application with reference to fig. 10.

The method comprises the following steps. Step one: the technician first needs to turn on Mipmap and enable Texture Streaming in Unity, after which the terminal can start loading maps. Through Unity, the terminal can calculate the Mipmap level according to the distance between the element to be rendered in the virtual scene and the virtual camera. The terminal can determine the relationship between the Mipmap level calculated by Unity and the Mipmap level configured in the current scene: in response to the Mipmap level calculated by Unity being lower than the Mipmap level configured in the current scene, the terminal loads the map using the configured Mipmap level.

The method for the technician to set up in Unity is described below:

As shown in fig. 11, the technician can check Texture Streaming in the Quality Settings panel, where Max Level Reduction in the Quality Settings panel indicates the maximum number of levels by which the Mipmap can be reduced, at most 7, with a default of 2. Besides setting it in the Quality Settings panel, the technician can also set Max Level Reduction from 2 to 7 with the following code 1.

Code 1

// By default, Mipmap can be reduced by at most 2 levels; this can also be set in the Quality Settings panel
QualitySettings.streamingMipmapsMaxLevelReduction = 7;

In addition, the meaning of some parameters in Unity is as follows:

Calculated Mipmap Level: indicates the Mipmap level that should be loaded, calculated from the distance between the current camera and the object.

Desired Mipmap Level: indicates the Mipmap level currently actually loaded; since the load level may be forcibly set, the actually loaded Mipmap level may not be consistent with the Calculated Mipmap Level.

Mipmap Bias: indicates an offset applied when loading a Mipmap level. For example, if a level-0 Mipmap would originally be loaded and Mipmap Bias is set to 2, level 2 is actually loaded. The value may also be negative: if a level-2 Mipmap would originally be loaded and the bias is set to -1, the level-1 Mipmap is actually loaded.
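How the bias and Max Level Reduction might combine can be illustrated with a small calculation. The exact clamping rule below is an assumption based on the parameter descriptions above, not the actual Unity implementation:

```python
def loaded_level(calculated_level, mipmap_bias=0, max_level_reduction=2):
    """Apply the Mipmap Bias to the calculated level, then clamp the result
    to [0, max_level_reduction], modeling the rule that the streaming system
    may reduce the map by at most max_level_reduction levels."""
    biased = calculated_level + mipmap_bias
    return max(0, min(biased, max_level_reduction))
```

This reproduces the examples in the text: a calculated level of 0 with a bias of 2 loads level 2, and a calculated level of 2 with a bias of -1 loads level 1.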

Step two: when the player enters a mode that opens a full-screen UI (User Interface), the Mipmap level of the model maps behind the UI can be reduced to the lowest resolution level, as shown in code 2.

Code 2

[Code 2 is provided as an image in the original publication and is not reproduced here.]

Step three: and when the UI disappears, the original Mipmap level is restored, and the map of the original Mipmap level is loaded.

It should be noted that step two is described by taking the example of directly reducing the Mipmap level of the map to the lowest level. In fact, the terminal may instead use a reduced Mipmap level set by a technician. For example, Unity provides designers with a function for observing the Mipmap levels of maps in a Scene. Referring to fig. 11, the technician may select the Mipmaps option from the Scene options in Unity, and Unity then displays the Mipmap levels in the interface: gray indicates that the texture density is higher, in which case the technician may manually reduce the Mipmap level of the map, and white indicates that the texture density is lower, in which case the technician may manually increase the Mipmap level of the map. The terminal can generate a configuration file based on the Mipmap levels set by the technician, so that the Mipmap levels of the maps can be conveniently adjusted while the game is running.
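Taken together, steps two and three form a simple save-and-restore pattern around the full-screen UI event. The sketch below is illustrative Python, not Unity code; the MapState class is hypothetical, and the lowest level of 9 is taken from the 10-level chain described earlier:

```python
LOWEST_LEVEL = 9  # most blurred map in a 10-level Mipmap chain

class MapState:
    """Tracks the Mipmap level of one map behind a full-screen UI."""

    def __init__(self, level):
        self.level = level
        self._saved = None

    def on_ui_opened(self):
        # Step two: remember the current level, drop to the lowest resolution.
        self._saved = self.level
        self.level = LOWEST_LEVEL

    def on_ui_closed(self):
        # Step three: restore the original level when the UI disappears.
        if self._saved is not None:
            self.level = self._saved
            self._saved = None

m = MapState(level=2)
m.on_ui_opened()
opened_level = m.level    # lowest level while the UI occludes the scene
m.on_ui_closed()
restored_level = m.level  # original level after the UI closes
```

The same pattern applies when, instead of the lowest level, a technician-configured reduced level is used while the UI is open.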

Through the technical solution provided in the present application, when an occlusion element exists on the upper layer of the display level where the element to be rendered is located, that is, when the occlusion element occludes the element to be rendered, the determined map resolution information is adjusted, reducing the resolution of the map corresponding to the element to be rendered. Rendering the element to be rendered according to the reduced-resolution map reduces the consumption of computing resources when the terminal displays the virtual scene and improves the smoothness of terminal operation.

Fig. 11 is a schematic structural diagram of a virtual scene display apparatus according to an embodiment of the present application, and referring to fig. 11, the apparatus includes: a first resolution information determination module 1101, a second resolution information determination module 1102, and a display module 1103.

The first resolution information determining module 1101 is configured to determine, according to a distance between an element to be rendered in the virtual scene and a virtual camera in the virtual scene, first map resolution information of a map corresponding to the element to be rendered, where the first map resolution information is used to represent a map resolution corresponding to the distance, and a negative correlation exists between the distance and the map resolution.

The second resolution information determining module 1102 is configured to determine, in response to an occlusion element existing in the upper layer of the display level where the element to be rendered is located, second map resolution information of the map corresponding to the element to be rendered, where the map resolution represented by the first map resolution information is higher than the map resolution represented by the second map resolution information.

A display module 1103, configured to display the virtual scene based on the element to be rendered and the map of the second map resolution information.

In one possible embodiment, the apparatus further comprises:

And the third resolution information determining module is configured to determine, in response to the size information of the occlusion element being larger than second size information, the resolution information of the map corresponding to the element to be rendered as third map resolution information, where the third map resolution information is the map resolution information with the lowest map resolution, and the second size information is larger than or equal to the first size information.

And the display module is also used for rendering the elements to be rendered based on the mapping corresponding to the elements to be rendered of the third mapping resolution information to obtain the virtual scene.
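The size-based selection in this embodiment can be sketched as follows. This is an illustrative Python model; the function name, the use of fractional coverage values, and the exact threshold rule are assumptions, with only the ordering (second size larger than or equal to first size, larger occluders permitting lower resolutions) taken from the description above:

```python
def pick_resolution_info(occlusion_size, first_size, second_size,
                         first_info, second_info, third_info):
    """Choose map resolution info from the occlusion element's size.
    Requires second_size >= first_size: the larger the occluder, the
    lower (cheaper) the resolution that can be used."""
    if occlusion_size > second_size:   # largely covered: lowest resolution
        return third_info
    if occlusion_size > first_size:    # partially covered: reduced resolution
        return second_info
    return first_info                  # not meaningfully covered
```

For example, with thresholds 0.3 and 0.6 (as fractions of the screen), an occluder covering 0.9 selects the third (lowest-resolution) map information.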

In one possible embodiment, the apparatus further comprises:

and the type determining module is used for determining the type of the element to be rendered.

And the fourth map resolution information determining module is used for determining the map corresponding to the element to be rendered and the map resolution information matched with the type according to the type of the element to be rendered.

In a possible implementation manner, the fourth map resolution information determining module is configured to determine, in response to that the type of the element to be rendered is the first type, map resolution information that matches the first type as fourth map resolution information, where the fourth resolution information is map resolution information with the highest map resolution, and the first type indicates that the importance degree of the element to be rendered in the virtual scene meets the first target condition.

And the display module is further used for rendering the elements to be rendered based on the mapping corresponding to the elements to be rendered of the fourth mapping resolution information to obtain the virtual scene.

In a possible implementation manner, the display module is further configured to, in response to that no occlusion element exists on an upper layer of a display level where the element to be rendered is located, display the virtual scene based on the map corresponding to the element to be rendered and the element to be rendered of the first map resolution information.

In a possible implementation manner, the display module is further configured to render the element to be rendered based on the map corresponding to the element to be rendered of the second map resolution information, so as to obtain the virtual scene.

In one possible embodiment, the apparatus further comprises:

and the map obtaining module is used for obtaining an initial map corresponding to the element to be rendered.

And the resolution reduction module is used for reducing the mapping resolution of the initial mapping by using the target step length to obtain a plurality of mappings with different mapping resolutions, and one mapping with one mapping resolution corresponds to one resolution information.

In one possible embodiment, the apparatus further comprises:

And the processing module is configured to determine the type of the element to be rendered, and in response to the type of the element to be rendered belonging to the first type, determine to perform resolution reduction processing on the initial map.

It should be noted that: in the virtual scene display apparatus provided in the foregoing embodiment, when displaying a virtual scene, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the computer device is divided into different functional modules to complete all or part of the functions described above. In addition, the virtual scene display apparatus provided in the above embodiments and the virtual scene display method provided in the above embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments and are not described herein again.

An embodiment of the present application provides a computer device, configured to perform the foregoing method, where the computer device may be implemented as a terminal or a server, and a structure of the terminal is introduced below:

fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present application. Optionally, the terminal 1200 is a smartphone, a tablet computer, a smart television, a desktop computer, a vehicle-mounted computer, a portable computer, or the like. Terminal 1200 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.

In general, terminal 1200 includes: one or more processors 1201 and one or more memories 1202.

The processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, the processor 1201 may further include an AI (Artificial Intelligence) processor for processing a computing operation related to machine learning.

Memory 1202 may include one or more computer-readable storage media, which may be non-transitory. Memory 1202 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1202 is used to store at least one program code for execution by the processor 1201 to implement the virtual scene display method provided by the method embodiments herein.

In some embodiments, the terminal 1200 may further optionally include: a peripheral interface 1203 and at least one peripheral. The processor 1201, memory 1202, and peripheral interface 1203 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1203 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, display 1205, camera assembly 1206, audio circuitry 1207, positioning assembly 1208, and power supply 1209.

The peripheral interface 1203 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, memory 1202, and peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202 and the peripheral device interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.

The Radio Frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices by electromagnetic signals. The radio frequency circuit 1204 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1204 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth.

The display screen 1205 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1205 is a touch display screen, the display screen 1205 also has the ability to acquire touch signals on or over the surface of the display screen 1205. The touch signal may be input to the processor 1201 as a control signal for processing. At this point, the display 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard.

Camera assembly 1206 is used to capture images or video. Optionally, camera assembly 1206 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal.

The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1201 for processing or inputting the electric signals into the radio frequency circuit 1204 to achieve voice communication.

The positioning component 1208 is used to locate a current geographic location of the terminal 1200 to implement navigation or LBS (location based Service).

The power supply 1209 is used to provide power to various components within the terminal 1200. The power source 1209 may be alternating current, direct current, disposable or rechargeable.

In some embodiments, terminal 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyro sensor 1212, pressure sensor 1213, fingerprint sensor 1214, optical sensor 1215, and proximity sensor 1216.

The acceleration sensor 1211 can detect magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1200.

The gyro sensor 1212 may detect a body direction and a rotation angle of the terminal 1200, and the gyro sensor 1212 may collect a 3D motion of the user on the terminal 1200 in cooperation with the acceleration sensor 1211.

Pressure sensors 1213 may be disposed on the side frames of terminal 1200 and/or underlying display 1205. When the pressure sensor 1213 is disposed on the side frame of the terminal 1200, the user's holding signal of the terminal 1200 can be detected, and the processor 1201 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1213. When the pressure sensor 1213 is disposed at a lower layer of the display screen 1205, the processor 1201 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 1205.

The fingerprint sensor 1214 is used for collecting a fingerprint of the user, and the processor 1201 identifies the user according to the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 identifies the user according to the collected fingerprint.

The optical sensor 1215 is used to collect the ambient light intensity. In one embodiment, the processor 1201 may control the display brightness of the display 1205 according to the ambient light intensity collected by the optical sensor 1215.

The proximity sensor 1216 is used to collect a distance between the user and the front surface of the terminal 1200.

Those skilled in the art will appreciate that the configuration shown in fig. 12 is not intended to be limiting of terminal 1200 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.

The computer device may also be implemented as a server, and the following describes a structure of the server:

fig. 13 is a schematic structural diagram of a server 1300 according to an embodiment of the present application, where the server 1300 may generate a relatively large difference due to a difference in configuration or performance, and may include one or more processors (CPUs) 1301 and one or more memories 1302, where at least one program code is stored in the one or more memories 1302, and the at least one program code is loaded and executed by the one or more processors 1301 to implement the methods provided by the foregoing method embodiments. Certainly, the server 1300 may further include components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input and output, and the server 1300 may further include other components for implementing the functions of the device, which is not described herein again.

In an exemplary embodiment, there is also provided a computer-readable storage medium, such as a memory, including program code executable by a processor to perform the virtual scene display method in the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.

In an exemplary embodiment, a computer program product or a computer program is also provided, which includes computer program code stored in a computer-readable storage medium, which is read by a processor of a computer device from the computer-readable storage medium, and which is executed by the processor to cause the computer device to execute the virtual scene display method provided in the above-mentioned various alternative implementations.

It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by hardware associated with program code, and the program may be stored in a computer readable storage medium, where the above mentioned storage medium may be a read-only memory, a magnetic or optical disk, etc.

The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
