Method and related device for managing model in virtual scene

Document No.: 1207252 · Publication date: 2020-09-04 · Language: Chinese

Reading note: this technique, "Method and related device for managing a model in a virtual scene" (一种虚拟场景中模型管理的方法以及相关装置), was designed and created by 姚丽 and 刘智洪 on 2020-05-27. Its main content is as follows: The application discloses a method and a related device for managing a model in a virtual scene. A first model is displayed in a virtual scene according to a first scale; the first model is then updated to a second model in response to a target operation, wherein the model precision of the second model is greater than that of the first model, the target virtual object is displayed at a second scale at a second perspective, and the second scale is smaller than the first scale; the second model is further displayed in the virtual scene based on the second scale. This realizes model conversion during the perspective change; because the model precision of the second model is greater than that of the first model, the clarity of model display after the perspective switch is ensured, and the precision of models in the virtual scene is improved.

1. A method for model management in a virtual scene, comprising:

displaying a first model in a virtual scene according to a first scale, wherein the first model is a model at a first perspective, and the first model is associated with a target virtual object;

updating the first model to a second model in response to a target operation, the second model being a model at a second perspective, the second model being associated with the target virtual object, the model precision of the second model being greater than the model precision of the first model, the target virtual object being displayed at a second scale at the second perspective, the second scale being smaller than the first scale;

displaying the second model in the virtual scene based on the second scale.

2. The method of claim 1, wherein updating the first model to a second model in response to a target operation comprises:

acquiring a trigger instruction of a first virtual element in response to the target operation;

updating the first model to the second model based on a triggering instruction of the first virtual element.

3. The method of claim 2, wherein the updating the first model to the second model based on the triggering instruction of the first virtual element comprises:

determining a first model component in the first model according to a triggering instruction of the first virtual element;

and adjusting the model precision of the first model component to obtain the second model.

4. The method of claim 1, wherein updating the first model to a second model in response to a target operation comprises:

acquiring a trigger instruction of a second virtual element in response to the target operation, wherein the second virtual element is associated with a target interaction mode;

updating the first model to the second model in the target interaction mode based on a triggering instruction of the second virtual element.

5. The method of claim 4, wherein the updating the first model to the second model in the target interaction mode based on the triggering instruction of the second virtual element comprises:

determining a second model component of the first model in the target interaction mode, the second model component being associated with the first model component;

and updating the second model component based on the model precision corresponding to the second model to obtain the second model.

6. The method of claim 5, wherein the updating the second model component based on the model precision corresponding to the second model to obtain the second model comprises:

updating the second model component based on the model precision corresponding to the second model to obtain the updated second model component;

updating a third model component corresponding to the updated second model component to obtain the second model, wherein the third model component and the second model component have different description dimensions for the virtual object, and the third model component is associated with the first model component.

7. The method of claim 4, wherein the acquiring a trigger instruction of the second virtual element in response to the target operation comprises:

acquiring a trigger instruction of a third virtual element in response to the target operation, wherein the third virtual element is used for indicating element interaction in the virtual scene;

and acquiring the trigger instruction of the second virtual element based on the trigger instruction of the third virtual element.

8. The method of claim 7, further comprising:

if the triggering of the third virtual element is stopped, updating the second model to the first model;

displaying the first model in the virtual scene according to the first scale.

9. The method according to any one of claims 1-8, further comprising:

determining a fourth model component of the second model that is not displayed in the virtual scene at the second scale;

hiding the fourth model component.

10. The method according to any one of claims 1-8, further comprising:

acquiring interaction information of the target virtual object in the virtual scene, wherein the interaction information is obtained based on at least one virtual object in the virtual scene;

and if the interaction information meets a preset condition, switching to the first perspective for display.

11. The method according to any one of claims 1-8, further comprising:

acquiring a perspective rule corresponding to the virtual scene;

indicating switching between the first model and the second model according to the perspective rule.

12. The method of claim 1, wherein the virtual scene is a shooting game scene, the first perspective is a third-person perspective, and the second perspective is a first-person perspective.

13. An apparatus for model management, comprising:

a display unit, configured to display a first model in a virtual scene according to a first scale, wherein the first model is a model at a first perspective, and the first model is associated with a target virtual object;

a management unit, configured to update the first model to a second model in response to a target operation, the second model being a model at a second perspective, the second model being associated with the target virtual object, the model precision of the second model being greater than the model precision of the first model, the target virtual object being displayed at a second scale at the second perspective, the second scale being smaller than the first scale;

the display unit is further configured to display the second model in the virtual scene based on the second scale.

14. A computer device, the computer device comprising a processor and a memory:

the memory is used for storing program code; the processor is configured to perform the method of model management of any of claims 1 to 12 according to instructions in the program code.

15. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of model management of any of the preceding claims 1 to 12.

Technical Field

The present application relates to the field of computer technologies, and in particular, to a method and a related apparatus for managing a model in a virtual scene.

Background

With the development of Internet technology, games play an increasing role in people's lives, and users' expectations for the gaming experience have risen accordingly, particularly with respect to the quality of the game picture.

Generally, games based on three-dimensional virtual scenes present interface content from different perspectives, such as a first-person perspective and a third-person perspective; switching between these perspectives is mainly realized by changing the camera angle.

However, during a perspective change, the precision of the model corresponding to a virtual object in the virtual scene is limited, so when the user zooms in, the model may appear blurred, which affects the accuracy of model display in the virtual scene.

Disclosure of Invention

In view of this, the present application provides a method for managing a model in a virtual scene, which can effectively avoid model blurring caused by a perspective change and improve the accuracy of model display.

A first aspect of the present application provides a method for model management, which may be applied to a system or a program including a model management function in a terminal device, and specifically includes: displaying a first model in a virtual scene according to a first scale, wherein the first model is a model at a first perspective, and the first model is associated with a target virtual object;

updating the first model to a second model in response to a target operation, the second model being a model at a second perspective, the second model being associated with the target virtual object, the model precision of the second model being greater than the model precision of the first model, the target virtual object being displayed at a second scale at the second perspective, the second scale being smaller than the first scale;

displaying the second model in the virtual scene based on the second scale.

Optionally, in some possible implementations of the present application, the updating the first model to the second model in response to the target operation includes:

acquiring a trigger instruction of a first virtual element in response to the target operation;

updating the first model to the second model based on a triggering instruction of the first virtual element.

Optionally, in some possible implementations of the present application, the updating the first model to the second model based on the trigger instruction of the first virtual element includes:

determining a first model component in the first model according to a triggering instruction of the first virtual element;

and adjusting the model precision of the first model component to obtain the second model.

Optionally, in some possible implementations of the present application, the updating the first model to the second model in response to the target operation includes:

acquiring a trigger instruction of a second virtual element in response to the target operation, wherein the second virtual element is associated with a target interaction mode;

updating the first model to the second model in the target interaction mode based on a triggering instruction of the second virtual element.

Optionally, in some possible implementations of the present application, the updating the first model to the second model in the target interaction mode based on the trigger instruction of the second virtual element includes:

determining a second model component of the first model in the target interaction mode, the second model component being associated with the first model component;

and updating the second model component based on the model precision corresponding to the second model to obtain the second model.

Optionally, in some possible implementation manners of the present application, the updating the second model component based on the model precision corresponding to the second model to obtain the second model includes:

updating the second model component based on the model precision corresponding to the second model to obtain the updated second model component;

updating a third model component corresponding to the updated second model component to obtain the second model, wherein the third model component and the second model component have different description dimensions for the virtual object, and the third model component is associated with the first model component.

Optionally, in some possible implementations of the present application, the obtaining a trigger instruction of a second virtual element in response to the target operation includes:

acquiring a trigger instruction of a third virtual element in response to the target operation, wherein the third virtual element is used for indicating element interaction in the virtual scene;

and acquiring the trigger instruction of the second virtual element based on the trigger instruction of the third virtual element.

Optionally, in some possible implementations of the present application, the method further includes:

if the triggering of the third virtual element is stopped, updating the second model to the first model;

displaying the first model in the virtual scene according to the first scale.

Optionally, in some possible implementations of the present application, the method further includes:

determining a fourth model component of the second model that is not displayed in the virtual scene at the second scale;

hiding the fourth model component.
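As a hedged illustration of the hiding step above, the following sketch partitions a model's components into those displayed at the second scale and those to be hidden. The component names and the membership test are illustrative assumptions, not part of the application:

```python
# Illustrative sketch (assumed names): split a model's components into
# those displayed at the second scale and those to be hidden (the
# "fourth model component" of the optional implementation above).
def partition_components(components, on_screen):
    shown, hidden = [], []
    for component in components:
        (shown if component in on_screen else hidden).append(component)
    return shown, hidden

soldier_model = ["head", "torso", "arms", "legs", "firearm"]
# At the second (first-person) perspective, only the arms and the firearm
# remain within the displayed range; everything else is hidden, not rendered.
on_screen = {"arms", "firearm"}
shown, hidden = partition_components(soldier_model, on_screen)
print(shown)   # ['arms', 'firearm']
print(hidden)  # ['head', 'torso', 'legs']
```

Hiding rather than rendering the off-screen components avoids wasted precision updates on parts the player cannot see.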

Optionally, in some possible implementations of the present application, the method further includes:

acquiring interaction information of the target virtual object in the virtual scene, wherein the interaction information is obtained based on at least one virtual object in the virtual scene;

and if the interaction information meets a preset condition, switching to the first perspective for display.

Optionally, in some possible implementations of the present application, the method further includes:

acquiring a perspective rule corresponding to the virtual scene;

indicating switching between the first model and the second model according to the perspective rule.

Optionally, in some possible implementation manners of the present application, the virtual scene is a shooting game scene, the first perspective is a third-person perspective, and the second perspective is a first-person perspective.

A second aspect of the present application provides an apparatus for model management, comprising: a display unit, configured to display a first model in a virtual scene according to a first scale, wherein the first model is a model at a first perspective, and the first model is associated with a target virtual object;

a management unit, configured to update the first model to a second model in response to a target operation, the second model being a model at a second perspective, the second model being associated with the target virtual object, the model precision of the second model being greater than the model precision of the first model, the target virtual object being displayed at a second scale at the second perspective, the second scale being smaller than the first scale;

the display unit is further configured to display the second model in the virtual scene based on the second scale.

Optionally, in some possible implementations of the present application, the management unit is specifically configured to obtain a trigger instruction of a first virtual element in response to the target operation;

the management unit is specifically configured to update the first model to the second model based on a trigger instruction of the first virtual element.

Optionally, in some possible implementations of the present application, the management unit is specifically configured to determine a first model component in the first model according to a trigger instruction of the first virtual element;

the management unit is specifically configured to adjust the model precision of the first model component to obtain the second model.

Optionally, in some possible implementations of the present application, the management unit is specifically configured to obtain a trigger instruction of a second virtual element in response to the target operation, where the second virtual element is associated with a target interaction mode;

the management unit is specifically configured to update the first model to the second model in the target interaction mode based on a trigger instruction of the second virtual element.

Optionally, in some possible implementations of the present application, the management unit is specifically configured to determine a second model component of the first model in the target interaction mode, where the second model component is associated with the first model component;

the management unit is specifically configured to update the second model component based on the model precision corresponding to the second model to obtain the second model.

Optionally, in some possible implementation manners of the present application, the management unit is specifically configured to update the second model component based on the model precision corresponding to the second model, so as to obtain the updated second model component;

the management unit is specifically configured to update a third model component corresponding to the updated second model component to obtain the second model, where the third model component and the second model component have different description dimensions for the virtual object, and the third model component is associated with the first model component.

Optionally, in some possible implementations of the present application, the management unit is specifically configured to obtain a trigger instruction of a third virtual element in response to the target operation, where the third virtual element is used to indicate element interaction in the virtual scene;

the management unit is specifically configured to obtain the trigger instruction of the second virtual element based on the trigger instruction of the third virtual element.

Optionally, in some possible implementation manners of the present application, the management unit is specifically configured to update the second model to the first model if the triggering of the third virtual element is stopped;

the management unit is specifically configured to display the first model in the virtual scene according to the first scale.

Optionally, in some possible implementations of the present application, the management unit is specifically configured to determine a fourth model component of the second model, which is not displayed in the virtual scene at the second scale;

the management unit is specifically configured to hide the fourth model component.

Optionally, in some possible implementation manners of the present application, the management unit is specifically configured to acquire interaction information of the target virtual object in the virtual scene, where the interaction information is obtained based on interaction of at least one virtual object in the virtual scene;

the management unit is specifically configured to switch to the first viewing angle for display if the interaction information meets a preset condition.

Optionally, in some possible implementations of the present application, the management unit is specifically configured to acquire a perspective rule corresponding to the virtual scene;

the management unit is specifically configured to indicate, according to the perspective rule, switching between the first model and the second model.

A third aspect of the present application provides a computer device, comprising: a memory, a processor, and a bus system; the memory is used for storing program code; the processor is configured to perform the method of model management of the first aspect or any implementation of the first aspect according to instructions in the program code.

A fourth aspect of the present application provides a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of model management of the first aspect or any implementation of the first aspect.

According to the above technical solutions, the embodiments of the present application have the following advantages:

A first model is displayed in the virtual scene according to a first scale, wherein the first model is a model at a first perspective, and the first model is associated with a target virtual object; the first model is then updated to a second model in response to a target operation, wherein the second model is a model at a second perspective, the second model is associated with the target virtual object, the model precision of the second model is greater than that of the first model, the target virtual object is displayed at a second scale at the second perspective, and the second scale is smaller than the first scale; the second model is further displayed in the virtual scene based on the second scale. This realizes model conversion during the perspective change: the model range corresponding to the switched perspective is smaller and therefore requires higher precision, and the model precision of the second model is accordingly greater than that of the first model, which ensures the clarity of model display after the perspective switch and improves the precision of models in the virtual scene.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description depict only embodiments of the present application; those skilled in the art can obtain other drawings from the provided drawings without creative effort.

FIG. 1 is a network architecture diagram of the operation of a model management system;

FIG. 2 is a system architecture diagram of a model management provided by an embodiment of the present application;

FIG. 3 is a flow chart of a method for model management provided by an embodiment of the present application;

FIG. 4 is a schematic view of a model management scenario provided in an embodiment of the present application;

FIG. 5 is a schematic view of another model management scenario provided in an embodiment of the present application;

FIG. 6 is a schematic view of another model management scenario provided in an embodiment of the present application;

FIG. 7 is a schematic view of another model management scenario provided in an embodiment of the present application;

FIG. 8 is a flow chart of another method for model management provided by an embodiment of the present application;

FIG. 9 is a schematic view of another model management scenario provided in an embodiment of the present application;

FIG. 10 is a schematic view of another model management scenario provided in an embodiment of the present application;

FIG. 11 is a schematic structural diagram of a model management apparatus according to an embodiment of the present application;

FIG. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application.

Detailed Description

The embodiments of the present application provide a model management method and a related device, which can be applied to a system or program containing a model management function in a terminal device. A first model is displayed in a virtual scene according to a first scale, wherein the first model is a model at a first perspective, and the first model is associated with a target virtual object; the first model is then updated to a second model in response to a target operation, wherein the second model is a model at a second perspective, the second model is associated with the target virtual object, the model precision of the second model is greater than that of the first model, the target virtual object is displayed at a second scale at the second perspective, and the second scale is smaller than the first scale; the second model is further displayed in the virtual scene based on the second scale. This realizes model conversion during the perspective change: the model range corresponding to the switched perspective is smaller and therefore requires higher precision, and the model precision of the second model is accordingly greater than that of the first model, which ensures the clarity of model display after the perspective switch and improves the precision of models in the virtual scene.

The terms "first," "second," "third," "fourth," and the like in the description, in the claims, and in the drawings of the present application, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," "corresponding," and any variations thereof are intended to cover a non-exclusive inclusion, so that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.

It should be understood that the model management method provided by the present application may be applied to a system or program including a model management function in a terminal device, such as a shooting game. Specifically, the model management system may operate in the network architecture shown in FIG. 1, which is a network architecture diagram of the model management system. As can be seen from the figure, the model management system can provide model management for multiple information sources: a terminal establishes a connection with the server through the network and then receives scene information sent by the server, and the user may select different perspectives for observation according to different scenes, thereby triggering the corresponding model switching process. It is understood that FIG. 1 shows various terminal devices; in an actual scenario, more or fewer types of terminal devices may participate in the model management process, with the specific number and types depending on the actual scenario, which is not limited herein. In addition, FIG. 1 shows one server, but in an actual scenario multiple servers may participate, especially in scenarios of multi-content application interaction; the specific number of servers depends on the actual scenario.

In this embodiment, the server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, a big data and artificial intelligence platform, and the like. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein.

It should be noted that the model management method provided in this embodiment may also be performed offline, that is, without the participation of a server; in that case, the terminal connects with other terminals locally, and the model management process is then performed between the terminals.

It will be appreciated that the model management system described above may run on a personal mobile terminal, for example as an application such as a shooting game; it may also run on a server, or on a third-party device that provides model management in order to obtain the model management processing result for an information source. The specific model management system may run in the device in the form of a program, operate as a system component in the device, or serve as a cloud service program; the specific operation mode depends on the actual scenario and is not limited herein.

With the development of Internet technology, games play an increasing role in people's lives, and users' expectations for the gaming experience have risen accordingly, particularly with respect to the quality of the game picture.

Generally, games based on three-dimensional virtual scenes present interface content from different perspectives, such as a first-person perspective and a third-person perspective; switching between these perspectives is mainly realized by changing the camera angle.

However, during a perspective change, the precision of the model corresponding to a virtual object in the virtual scene is limited, so when the user zooms in, the model may appear blurred, which affects the accuracy of model display in the virtual scene.

In order to solve the above problem, the present application provides a method for model management, applied to the model management flow framework shown in FIG. 2, which is a system architecture diagram of model management provided in an embodiment of the present application. The user may control a virtual object in the virtual scene through the interface layer, so that the application layer performs the corresponding model switching in response to the target operation. It is understood that the figure shows a first model and a second model, but more models may participate in an actual scenario, which is not limited herein.

It can be understood that the method provided by the present application may be a program written as processing logic in a hardware system, or a model management apparatus implementing that processing logic in an integrated or external manner. As one implementation, the model management apparatus displays a first model in a virtual scene according to a first scale, wherein the first model is a model at a first perspective and is associated with a target virtual object; it then updates the first model to a second model in response to a target operation, wherein the second model is a model at a second perspective, the second model is associated with the target virtual object, the model precision of the second model is greater than that of the first model, the target virtual object is displayed at a second scale at the second perspective, and the second scale is smaller than the first scale; it further displays the second model in the virtual scene based on the second scale. This realizes model conversion during the perspective change: the model range corresponding to the switched perspective is smaller and therefore requires higher precision, and the model precision of the second model is accordingly greater than that of the first model, which ensures the clarity of model display after the perspective switch and improves the precision of models in the virtual scene.
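The processing flow just described can be sketched in miniature as follows; the class names, fields, and trigger methods are hypothetical illustrations under the scheme's constraints (higher precision, smaller scale for the second model), not the application's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Model:
    perspective: str  # e.g. "third_person" or "first_person"
    precision: int    # larger value = higher model precision
    scale: float      # scale of the displayed model range

class ModelManager:
    """Hypothetical sketch of the model management apparatus."""

    def __init__(self, first_model: Model, second_model: Model):
        # Per the scheme: the second model is more precise and is shown
        # at a smaller scale than the first model.
        assert second_model.precision > first_model.precision
        assert second_model.scale < first_model.scale
        self.first_model, self.second_model = first_model, second_model
        self.current = first_model  # displayed at the first scale initially

    def on_target_operation(self) -> Model:
        # Target operation (e.g. aiming): switch to the second perspective.
        self.current = self.second_model
        return self.current

    def on_operation_stopped(self) -> Model:
        # Stopping the trigger restores the first model and first scale.
        self.current = self.first_model
        return self.current

third_person = Model("third_person", precision=1, scale=0.8)
first_person = Model("first_person", precision=3, scale=0.3)
manager = ModelManager(third_person, first_person)
manager.on_target_operation()
print(manager.current.perspective)  # first_person
```

The two assertions in the constructor encode the scheme's core invariant: switching to the smaller-scale second perspective always yields a higher-precision model.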

With reference to the above flow architecture, the method for model management in the present application is described below. Referring to FIG. 3, FIG. 3 is a flow chart of a method for model management provided in an embodiment of the present application. This embodiment takes a shooting game as the application scenario for description; it should be noted that the present application is also applicable to other scenarios involving perspective changes. The embodiment of the application includes at least the following steps:

301. The first model is displayed in the virtual scene according to a first scale.

In this embodiment, the first model is a model at a first perspective, and the first model is associated with the target virtual object. The first perspective may be the third-person perspective, i.e. the player controls the target virtual object in the virtual scene from the third-person perspective, for example: a soldier on the battlefield.

It can be understood that since the model at the third-person perspective may not be displayed completely, for example only the upper body of a soldier model is shown, the first model is displayed in the virtual scene according to the first scale. Fig. 4 is a scene schematic diagram of a model management method provided in this embodiment of the present application, showing a first model A1, a model display A2 at the first perspective, and a model display A3 at the second perspective. It can be understood that the database includes not only the first model A1 but also a series of models adjusted in precision based on the first model, as well as the components included in the models; for example, the components of a soldier model include the soldier and a firearm. Correspondingly, the first scale is the ratio of the display range of the model display A2 at the first perspective to the first model A1. In general, the larger the scale, the more of the model's components are displayed in the virtual scene, but the smaller the corresponding relative size, so the precision requirement is not high; conversely, the smaller the scale, the less of the model is displayed in the virtual scene, but the larger the model's relative size, so the higher the precision requirement. For example, the model display A2 at the first perspective shown in fig. 4 is the upper body corresponding to the first model A1 and is relatively small in size, while the model display A3 at the second perspective is the firearm part corresponding to the first model A1 and is relatively large in size, so the model display A3 at the second perspective should adopt a high-precision model.

It can be understood that the model display A2 at the first perspective in fig. 4 may be regarded as a model display at the third-person perspective, and the model display A3 at the second perspective as a model display at the first-person perspective. However, the present application is not limited to these two perspective displays; any specific scenario involving a perspective change can use the present solution, for example a zoomed-in observation at the third-person perspective.
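The scale-to-precision relationship described above can be sketched in code. The following is a hypothetical illustration only: the function name, thresholds, and precision tiers are not taken from the application itself, which does not prescribe concrete values.

```python
# Hypothetical sketch: choosing a model's precision tier from the display
# scale described above. Thresholds and names are illustrative assumptions.

def select_model_precision(display_scale: float) -> str:
    """The smaller the scale (fewer components shown, larger on screen),
    the higher the precision the displayed model should use."""
    if display_scale < 0.3:    # e.g. first-person view: only the firearm, shown large
        return "high"
    elif display_scale < 0.7:  # e.g. upper body visible at third person
        return "medium"
    else:                      # e.g. full model shown small, at a distance
        return "low"

# The first scale (third-person view) is larger than the second scale
# (first-person view), so the second model uses higher precision.
first_scale, second_scale = 0.6, 0.2
assert second_scale < first_scale
print(select_model_precision(first_scale))   # medium
print(select_model_precision(second_scale))  # high
```

This mirrors the rule stated above: a larger scale tolerates lower precision, while a smaller scale demands a higher-precision model.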

302. The first model is updated to the second model in response to the target operation.

In this embodiment, the second model is a model at a second perspective, the second model is associated with the target virtual object, the model precision of the second model is greater than that of the first model, the target virtual object is displayed at a second scale at the second perspective, and the second scale is smaller than the first scale. The second scale is the ratio of the model display A3 at the second perspective to the first model A1 shown in fig. 4; that is, the model display A2 at the first perspective is switched to the model display A3 at the second perspective in response to the target operation. Compared with general perspective conversion, the present method switches to the model of the corresponding precision at the same time as the perspective conversion, so that the precision of the model is well adapted to the current interactive scene, an unclear model display is avoided, and the user experience is improved.

It can be understood that the above embodiment describes the process of switching the first model to the second model; in an actual scene, the displayed model may also be switched to a model with lower precision, i.e. switching from the second model back to the first model according to a target operation, for example switching from the first-person perspective back to the third-person perspective. This saves resource occupation of the game process without affecting the clarity of the model.

In a possible scenario, the target operation may act on a virtual element in the virtual interface. Fig. 5 is a scene schematic diagram of another method for model management provided in the embodiment of the present application, showing a first virtual element B1 and a second model B2. Specifically, a trigger instruction of the first virtual element is obtained in response to the target operation; the first model is then updated to the second model based on the trigger instruction of the first virtual element.

Optionally, the second model B2 in the display interface is a part of the complete second model, so only the displayed part needs to be replaced when updating the first model to the second model. Specifically, a first model component in the first model is determined according to the trigger instruction of the first virtual element; the model precision of the first model component is then adjusted to obtain the second model. For example, the first model component in fig. 5 is the firearm model, so only the firearm model is loaded and the corresponding portion of the first model is replaced to obtain the second model, thereby improving the efficiency of the model update.
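The component-level update just described can be sketched as follows. This is an illustrative assumption of how such a swap might look: the dictionary representation and asset names are hypothetical, not part of the application.

```python
# Hypothetical sketch of the component-level update: only the displayed
# component (the firearm) is replaced with its high-precision version,
# instead of reloading the whole model. All names are illustrative.

def update_model(first_model: dict, component: str, high_precision_parts: dict) -> dict:
    """Return a second model that reuses the first model's components,
    swapping in the high-precision version of one component."""
    second_model = dict(first_model)  # keep untouched components as-is
    second_model[component] = high_precision_parts[component]
    return second_model

first_model = {"soldier": "soldier_lod_low", "firearm": "firearm_lod_low"}
high_precision = {"soldier": "soldier_lod_high", "firearm": "firearm_lod_high"}

second_model = update_model(first_model, "firearm", high_precision)
print(second_model)
# {'soldier': 'soldier_lod_low', 'firearm': 'firearm_lod_high'}
```

Only the firearm asset is loaded at high precision; the soldier component is reused, which is why the update is more efficient than rebuilding the entire model.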

In another possible scenario, the target operation may be triggered through an associated function. Fig. 6 is a scene schematic diagram of another model management method provided in this embodiment of the present application, showing a second virtual element C1 and a second model C2 in a target interaction mode. Specifically, a trigger instruction of the second virtual element C1 is first obtained in response to the target operation, where the second virtual element is associated with the target interaction mode; the first model is then updated to the second model C2 in the target interaction mode based on the trigger instruction of the second virtual element. For example, the second virtual element C1 is the aim-down-sights (scope) button, the target interaction mode is the firearm's scope mode, and the second model C2 in the target interaction mode is the scope portion of the firearm model.

Optionally, in the target interaction mode, the virtual scene may switch directly from the third-person perspective to the scope view, so the model component corresponding to the scope may be loaded first. Specifically, a second model component of the first model in the target interaction mode is determined; the second model component is then updated based on the model precision corresponding to the second model to obtain the second model. That is, the scope model in the second model is loaded first, and the remaining parts of the second model are loaded in sequence afterwards.

It can be understood that the remaining portion of the second model may also not be loaded at all, with the corresponding portion of the first model used directly in its place; the specific manner is determined by the actual scene.
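The priority loading order described above can be sketched as a simple queue. This is a hypothetical illustration: the component list and the use of a deque are assumptions, not details from the application.

```python
# Hypothetical sketch of the priority loading order in the target interaction
# (scope) mode: the scope component is queued first so it is ready when the
# view switches, and the remaining components follow. Names are illustrative.

from collections import deque

def build_load_queue(components: list, priority: str) -> deque:
    """Queue the priority component first, then the rest in original order."""
    rest = [c for c in components if c != priority]
    return deque([priority] + rest)

queue = build_load_queue(["soldier", "firearm", "scope"], priority="scope")
print(list(queue))  # ['scope', 'soldier', 'firearm']
```

Loading the scope first means the scope view can be displayed as soon as possible after the trigger, while the rest of the second model (or the corresponding first-model parts) follows afterwards.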

It should be noted that when the remaining components of the second model are not loaded, the view may switch back to the second perspective after the target mode ends, so the second model at the second perspective needs to be updated. Specifically, the second model component is updated based on the model precision corresponding to the second model to obtain an updated second model component; a third model component corresponding to the updated second model component is then updated to obtain the second model, where the third model component and the second model component describe different dimensions of the virtual object. For example, the third model component may describe the firearm and the second model component the scope, thereby realizing rapid switching between models of different precision during perspective switching.

303. The second model is displayed in the virtual scene based on the second scale.

In this embodiment, the second scale is the ratio of the model display A3 at the second perspective to the first model A1 in fig. 4. The specific scale may be set according to the actual application; that is, only the firearm portion, the combination of the firearm portion and the hand, or only the muzzle portion may be displayed, with the specific display mode determined by the actual scene.

With reference to the foregoing embodiments, a first model is displayed in a virtual scene according to a first scale, where the first model is a model at a first perspective and is associated with a target virtual object; the first model is then updated to a second model in response to a target operation, where the second model is a model at a second perspective, the second model is associated with the target virtual object, the model precision of the second model is greater than that of the first model, the target virtual object is displayed at a second scale at the second perspective, and the second scale is smaller than the first scale; further, the second model is displayed in the virtual scene based on the second scale. This realizes model conversion during perspective conversion: the model range corresponding to the switched perspective is smaller and requires higher precision, and since the precision of the second model is greater than that of the first model, the clarity of the model displayed after the perspective switch is ensured and the accuracy of models in the virtual scene is improved.

In another possible scenario, based on the embodiment shown in fig. 3, the first virtual element, the second virtual element, and the interaction process in the virtual scenario may be combined to perform model management, for example, a shooting game in a shooting interactive mode, as shown in fig. 7, a scenario diagram of another model management method provided in the embodiment of the present application is shown, in which a third virtual element D1 is shown, and an interface display D2 of the second model in the element interaction process is shown. Specifically, a trigger instruction of a third virtual element D1 is obtained in response to a target operation, where the third virtual element is used to indicate element interaction in a virtual scene; then, a trigger instruction of the second virtual element is obtained based on the trigger instruction of the third virtual element, namely, the mirror-opening model is opened, so that an interface display D2 of the second model in the element interaction process, namely, a firearm model in a firing state is displayed, the conversion from the third person operation to the first person operation is realized, the corresponding models are switched, and the definition of the models is ensured.

For the model switching process in this scenario, reference may be made to fig. 8, which is a flowchart of another method for model management provided in this embodiment of the present application. First, the player operates at the third-person perspective, and the interface displays the first model. It is then judged whether the user presses the fire key, i.e. the third virtual element; if it is pressed, loading of the first-person model, i.e. the second model, begins. During loading, the loading completion status is checked; once loading is complete, the view switches to the first-person perspective and immediately enters scope mode. Firing starts immediately in scope mode, and the trigger state of the third virtual element is checked in real time; if the trigger ends, i.e. the key is released, the third-person model is loaded. The loading completion status of the third-person model is then checked, and the view switches back to the third-person perspective, thereby completing the response to the third virtual element.
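The flow of fig. 8 can be sketched as a small state machine. This is an illustrative assumption: the state and event names below are invented for the sketch and do not appear in the application.

```python
# Hypothetical sketch of the fig. 8 switching flow as a state machine:
# pressing fire loads the first-person (second) model and enters scope mode;
# releasing fire loads the third-person (first) model and switches back.
# State and event names are illustrative assumptions.

TRANSITIONS = {
    ("third_person", "fire_pressed"): "loading_first_person",
    ("loading_first_person", "load_done"): "scope_mode",
    ("scope_mode", "fire_released"): "loading_third_person",
    ("loading_third_person", "load_done"): "third_person",
}

def step(state: str, event: str) -> str:
    """Advance the state machine; irrelevant events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# A full press-and-release cycle returns to the third-person perspective.
state = "third_person"
for event in ["fire_pressed", "load_done", "fire_released", "load_done"]:
    state = step(state, event)
print(state)  # third_person
```

The intermediate "loading" states capture the check in fig. 8 that the perspective only switches once the corresponding model has finished loading.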

In another possible scenario, based on the embodiment shown in fig. 3 or fig. 4, the portion of the second model that is not displayed in the virtual scene may be hidden without loading that part of the model, so as to save system resources. Specifically, fig. 9 is a scene schematic diagram of another method for model management provided in this embodiment of the application. The second model E1 includes a portion E2 displayed in the virtual scene and a portion E3 not displayed in the virtual scene. That is, a fourth model component of the second model that is not displayed in the virtual scene at the second scale, i.e. the portion E3, is first determined; the fourth model component is then hidden.

It can be understood that the fourth model component, although hidden, still has a corresponding collision box E4, i.e. it still participates in element interaction in the virtual scene. For example, in a shooting game, although the body part is not loaded at the first-person perspective, it can still be hit.
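The separation of rendering visibility from collision described above can be sketched as follows. The class and attribute names are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical sketch of hiding an undisplayed component (fig. 9): the body
# is not rendered in first person, but its collision box stays active so it
# can still be hit. All structures are illustrative assumptions.

class ModelComponent:
    def __init__(self, name: str):
        self.name = name
        self.visible = True            # whether the component is rendered
        self.collision_enabled = True  # whether its collision box is active

    def hide(self):
        """Hide from rendering without disabling interaction."""
        self.visible = False
        # collision_enabled deliberately stays True: the hidden component
        # still participates in hit detection in the virtual scene.

body = ModelComponent("body")
body.hide()
print(body.visible, body.collision_enabled)  # False True
```

Because only the rendering flag is cleared, the system saves the resources of loading and drawing the component while keeping gameplay interactions (such as being hit) consistent for other players.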

In a possible scenario, after the interaction information of the target virtual object in the virtual scene meets a certain condition, the perspective switch and the corresponding model switch can be performed automatically. Specifically, the interaction information is obtained based on at least one virtual object in the virtual scene; for example, after a user-controlled character is killed, the display automatically switches to the third-person perspective, i.e. the first perspective, and the corresponding model switch is performed.

It can be understood that in a scenario with multiple interacting users, the model of a user who switches perspective does not change as observed by the other users. Fig. 10 is a scene schematic diagram of another model management method provided in the embodiment of the present application, showing the display F1 of the model corresponding to the perspective-switching user in another user's interface, thereby ensuring the stability of multi-user interaction in the virtual scene.

In another possible scenario, based on the embodiment shown in fig. 3 or fig. 4, model switching may also be caused by switching between different virtual scenes. Specifically, the perspective rule corresponding to the virtual scene is first acquired; switching between the first model and the second model is then instructed according to the perspective rule. For example, when a user-controlled character enters a vehicle, the view automatically switches to the third-person perspective. The specific switching manner is determined by the actual scene and is not limited herein.
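The scene-driven perspective rule can be sketched as a lookup table. The rule table, scene names, and mapping below are hypothetical, invented for illustration.

```python
# Hypothetical sketch of scene-driven perspective rules: entering a scene
# (e.g. a vehicle) looks up the perspective rule for that scene and selects
# which model to display. Rule table and names are illustrative assumptions.

PERSPECTIVE_RULES = {
    "on_foot": "first_person",  # small scale: high-precision second model
    "vehicle": "third_person",  # large scale: lower-precision first model
}

def model_for_scene(scene: str) -> str:
    """Map a virtual scene to the model that should be displayed."""
    perspective = PERSPECTIVE_RULES.get(scene, "third_person")
    return "second_model" if perspective == "first_person" else "first_model"

print(model_for_scene("vehicle"))  # first_model
print(model_for_scene("on_foot"))  # second_model
```

Keeping the rule per scene means the model switch follows automatically whenever the character moves between scenes, with no explicit user operation required.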

In order to better implement the above-mentioned aspects of the embodiments of the present application, the following also provides related apparatuses for implementing the above-mentioned aspects. Referring to fig. 11, fig. 11 is a schematic structural diagram of a model management apparatus according to an embodiment of the present application, where the model management apparatus 1100 includes:

a display unit 1101, configured to display a first model in a virtual scene according to a first scale, where the first model is a model at a first viewing angle, and the first model is associated with a target virtual object;

a management unit 1102, configured to update the first model to a second model in response to a target operation, where the second model is a model at a second view angle, the second model is associated with the target virtual object, model accuracy of the second model is greater than model accuracy of the first model, the target virtual object is displayed at a second scale at the second view angle, and the second scale is smaller than the first scale;

the display unit 1101 is further configured to display the second model in the virtual scene based on the second scale.

Optionally, in some possible implementations of the present application, the management unit 1102 is specifically configured to obtain a trigger instruction of a first virtual element in response to the target operation;

the management unit 1102 is specifically configured to update the first model to the second model based on a trigger instruction of the first virtual element.

Optionally, in some possible implementations of the present application, the management unit 1102 is specifically configured to determine a first model component in the first model according to a trigger instruction of the first virtual element;

the management unit 1102 is specifically configured to adjust the model accuracy of the first model component to obtain the second model.

Optionally, in some possible implementations of the present application, the management unit 1102 is specifically configured to obtain a trigger instruction of a second virtual element in response to the target operation, where the second virtual element is associated with a target interaction mode;

the management unit 1102 is specifically configured to update the first model to the second model in the target interaction mode based on a trigger instruction of the second virtual element.

Optionally, in some possible implementations of the present application, the management unit 1102 is specifically configured to determine a second model component of the first model in the target interaction mode, where the second model component is associated with the first model component;

the management unit 1102 is specifically configured to update the second model component based on the model precision corresponding to the second model, so as to obtain the second model.

Optionally, in some possible implementation manners of the present application, the management unit 1102 is specifically configured to update the second model component based on the model precision corresponding to the second model, so as to obtain an updated second model component;

the management unit 1102 is specifically configured to update a third model component corresponding to the updated second model component to obtain the second model, where the third model component and the second model component have different description dimensions for the virtual object, and the third model component is associated with the first model component.

Optionally, in some possible implementations of the present application, the management unit 1102 is specifically configured to obtain a trigger instruction of a third virtual element in response to the target operation, where the third virtual element is used to indicate element interaction in the virtual scene;

the management unit 1102 is specifically configured to obtain the trigger instruction of the second virtual element based on the trigger instruction of the third virtual element.

Optionally, in some possible implementation manners of the present application, the management unit 1102 is specifically configured to update the second model to the first model if the triggering of the third virtual element is stopped;

the management unit 1102 is specifically configured to display the first model in the virtual scene according to the first ratio.

Optionally, in some possible implementations of the present application, the management unit 1102 is specifically configured to determine a fourth model component of the second model, which is not displayed in the virtual scene at the second scale;

the management unit 1102 is specifically configured to hide the fourth model component.

Optionally, in some possible implementation manners of the present application, the management unit 1102 is specifically configured to obtain interaction information of the target virtual object in the virtual scene, where the interaction information is obtained based on interaction of at least one virtual object in the virtual scene;

the management unit 1102 is specifically configured to switch to the first viewing angle for display if the interaction information meets a preset condition.

Optionally, in some possible implementation manners of the present application, the management unit 1102 is specifically configured to obtain a view rule corresponding to the virtual scene;

the management unit 1102 is specifically configured to instruct, according to the view rule, switching between the first model and the second model.

With the above apparatus, a first model is displayed in the virtual scene according to a first scale, where the first model is a model at a first perspective and is associated with a target virtual object; the first model is then updated to a second model in response to a target operation, where the second model is a model at a second perspective, the second model is associated with the target virtual object, the model precision of the second model is greater than that of the first model, the target virtual object is displayed at a second scale at the second perspective, and the second scale is smaller than the first scale; the second model is further displayed in the virtual scene based on the second scale. This realizes model conversion during perspective conversion: the model range corresponding to the switched perspective is smaller and requires higher precision, and since the precision of the second model is greater than that of the first model, the clarity of the model displayed after the perspective switch is ensured and the accuracy of models in the virtual scene is improved.

An embodiment of the present application further provides a terminal device configured to implement the above method. Fig. 12 is a schematic structural diagram of a terminal device provided in the embodiment of the present application. For convenience of description, only the parts related to the embodiment of the present application are shown; for undisclosed technical details, please refer to the method part of the embodiment of the present application. The terminal may be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point of sale (POS) terminal, a vehicle-mounted computer, and the like. The following takes a mobile phone as an example:

fig. 12 is a block diagram illustrating a partial structure of a mobile phone related to a terminal provided in an embodiment of the present application. Referring to fig. 12, the cellular phone includes: radio Frequency (RF) circuitry 1210, memory 1220, input unit 1230, display unit 1240, sensors 1250, audio circuitry 1260, wireless fidelity (WiFi) module 1270, processor 1280, and power supply 1290. Those skilled in the art will appreciate that the handset configuration shown in fig. 12 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.

The following describes each component of the mobile phone in detail with reference to fig. 12:

The RF circuit 1210 is configured to receive and transmit signals during information transmission and reception or during a call; in particular, it receives downlink information from a base station and passes it to the processor 1280 for processing, and transmits uplink data to the base station. Typically, the RF circuit 1210 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1210 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), email, short message service (SMS), etc.

The memory 1220 may be used to store software programs and modules, and the processor 1280 executes various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 1220. The memory 1220 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1220 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.

The input unit 1230 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 1230 may include a touch panel 1231 and other input devices 1232. The touch panel 1231, also referred to as a touch screen, can collect touch operations of a user (e.g., operations of the user on or near the touch panel 1231 using any suitable object or accessory such as a finger, a stylus, etc., and a range of spaced touch operations on the touch panel 1231) and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 1231 may include two portions, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 1280, and can receive and execute commands sent by the processor 1280. In addition, the touch panel 1231 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 1230 may include other input devices 1232 in addition to the touch panel 1231. In particular, other input devices 1232 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.

The display unit 1240 may be used to display information input by the user or information provided to the user and various menus of the cellular phone. The display unit 1240 may include a display panel 1241, and optionally, the display panel 1241 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like. Further, touch panel 1231 can overlay display panel 1241, and when touch panel 1231 detects a touch operation thereon or nearby, the touch panel 1231 can transmit the touch operation to processor 1280 to determine the type of the touch event, and then processor 1280 can provide a corresponding visual output on display panel 1241 according to the type of the touch event. Although in fig. 12, the touch panel 1231 and the display panel 1241 are implemented as two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1231 and the display panel 1241 may be integrated to implement the input and output functions of the mobile phone.

The cell phone may also include at least one sensor 1250, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 1241 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1241 and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.

Audio circuitry 1260, speaker 1261, and microphone 1262 can provide an audio interface between a user and the mobile phone. The audio circuit 1260 can transmit the electrical signal converted from received audio data to the speaker 1261, which converts it into a sound signal for output; on the other hand, the microphone 1262 converts collected sound signals into electrical signals, which are received by the audio circuit 1260 and converted into audio data. The audio data is processed by the processor 1280 and then transmitted via the RF circuit 1210 to, for example, another mobile phone, or output to the memory 1220 for further processing.

WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 1270, and provides wireless broadband internet access for the user. Although fig. 12 shows the WiFi module 1270, it is understood that it does not belong to the essential constitution of the handset, and may be omitted entirely as needed within the scope not changing the essence of the invention.

The processor 1280 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1220 and calling data stored in the memory 1220, thereby performing overall monitoring of the mobile phone. Optionally, processor 1280 may include one or more processing units; optionally, the processor 1280 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into the processor 1280.

The mobile phone further includes a power supply 1290 (e.g., a battery) for supplying power to each component, and optionally, the power supply may be logically connected to the processor 1280 through a power management system, so that the power management system may manage functions such as charging, discharging, and power consumption management.

Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, which are not described herein.

In this embodiment, the processor 1280 included in the terminal further has the function of executing each step of the model management method described above.

An embodiment of the present application also provides a computer-readable storage medium storing model management instructions. When the instructions are run on a computer, the computer is caused to perform the steps performed by the model management apparatus in the methods described in the foregoing embodiments shown in fig. 3 to 10.

An embodiment of the present application also provides a computer program product including model management instructions. When the product runs on a computer, the computer is caused to perform the steps performed by the model management apparatus in the methods described in the embodiments shown in fig. 3 to 10.

An embodiment of the present application further provides a model management system, where the model management system may include the model management apparatus in the embodiment described in fig. 11 or the terminal device described in fig. 12.
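To make the claimed flow concrete, the following is a minimal illustrative sketch of the model management described at the outset of this application: a first model is displayed at a first scale under a first perspective, and in response to a target operation it is replaced by a second, higher-precision model displayed at a smaller second scale under a second perspective. All class, attribute, and method names below are hypothetical and chosen for illustration only; they do not appear in the embodiments of this application.

```python
# Hypothetical sketch of the claimed model-switching flow.
# "precision" stands in for model accuracy (mesh/texture detail);
# a higher value means a finer, higher-precision model.

class Model:
    def __init__(self, name, precision):
        self.name = name
        self.precision = precision


class ModelManager:
    def __init__(self, first_model, second_model, first_scale, second_scale):
        # Per the claim: the second model is more precise than the first,
        # and the second display scale is smaller than the first.
        assert second_model.precision > first_model.precision
        assert second_scale < first_scale
        self.first_model = first_model      # model under the first perspective
        self.second_model = second_model    # model under the second perspective
        self.first_scale = first_scale
        self.second_scale = second_scale
        self.current_model = first_model
        self.current_scale = first_scale

    def on_target_operation(self):
        """Switch to the second perspective: update the first model to the
        higher-precision second model and display it at the second scale."""
        self.current_model = self.second_model
        self.current_scale = self.second_scale

    def on_exit_target_operation(self):
        """Return to the first perspective and restore the first model."""
        self.current_model = self.first_model
        self.current_scale = self.first_scale


# Usage: a target operation (e.g., entering the second perspective)
# triggers the model update.
low = Model("target_object_lod1", precision=1)
high = Model("target_object_lod0", precision=4)
mgr = ModelManager(low, high, first_scale=1.0, second_scale=0.6)
mgr.on_target_operation()
print(mgr.current_model.name, mgr.current_scale)  # target_object_lod0 0.6
```

Because the second model has higher precision than the first, the sketch preserves the property the application emphasizes: display definition is guaranteed after the perspective switch, even though the object occupies a smaller display scale.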

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a model management apparatus, or a network device) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
