Virtual resource use control method and device, computer equipment and storage medium

Document No.: 146224  Publication date: 2021-10-26

Note: This technology, "Virtual resource use control method and device, computer equipment and storage medium", was created by Wu Haishan and Zhou Yuxin on 2021-08-04. Abstract: The embodiments of this application disclose a method and an apparatus for controlling the use of virtual resources, a computer device, and a storage medium. The method includes: in response to a setting operation for a trigger condition, determining the trigger condition under which a target virtual resource produces a use effect in the virtual scene, the trigger condition being preset interaction information between the target virtual resource and the virtual scene; in response to a launch operation for the target virtual resource, determining the actual interaction information between the target virtual resource and the virtual scene; if the actual interaction information satisfies the trigger condition, determining the use position of the target virtual resource in the virtual scene; and rendering the use effect of the target virtual resource in the virtual scene according to the use position. The method places no strict requirement on when the player launches the virtual resource, prevents the player from accidentally injuring himself or failing to injure the enemy because of a poorly timed launch, and improves the practicality and ease of use of the virtual resource.

1. A method for controlling the use of virtual resources, wherein a graphical user interface is provided through a terminal device, the graphical user interface comprises a virtual scene and a virtual object located in the virtual scene, and the virtual object is configured to execute game behaviors in response to a touch operation on the graphical user interface, the method comprising:

in response to a setting operation for a trigger condition, determining a trigger condition under which a target virtual resource produces a use effect in the virtual scene, wherein the trigger condition is preset interaction information between the target virtual resource and the virtual scene;

in response to a launch operation for the target virtual resource, determining actual interaction information between the target virtual resource and the virtual scene;

if the actual interaction information satisfies the trigger condition, determining a use position of the target virtual resource in the virtual scene; and

rendering the use effect of the target virtual resource in the virtual scene according to the use position.

2. The method according to claim 1, wherein the preset interaction information comprises a collision count, the virtual scene comprises scene elements, and the determining, in response to the setting operation for the trigger condition, the trigger condition under which the target virtual resource produces the use effect in the virtual scene comprises:

in response to the setting operation for the trigger condition, determining a preset collision count, that is, the number of times the target virtual resource needs to collide with the scene elements before producing the use effect in the virtual scene; and

taking the preset collision count as the trigger condition under which the target virtual resource produces the use effect in the virtual scene.

3. The method according to claim 2, wherein the graphical user interface comprises a collision setting control, and the determining, in response to the setting operation for the trigger condition, the preset collision count the target virtual resource needs to reach with the scene elements before producing the use effect in the virtual scene comprises:

acquiring a first correspondence between the number of touches on the collision setting control and the preset collision count; and

in response to the number of touches on the collision setting control, determining, according to the first correspondence, the preset collision count the target virtual resource needs to reach with the scene elements before producing the use effect in the virtual scene.

4. The method according to claim 2, wherein the determining, in response to the launch operation for the target virtual resource, the actual interaction information between the target virtual resource and the virtual scene comprises:

in response to the launch operation for the target virtual resource, determining an actual collision count, that is, the number of times the target virtual resource collides with the scene elements during the launch; and

taking the actual collision count as the actual interaction information between the target virtual resource and the virtual scene.

5. The method according to claim 4, wherein the determining, if the actual interaction information satisfies the trigger condition, the use position of the target virtual resource in the virtual scene comprises:

if the actual collision count is greater than or equal to the preset collision count, taking the position of the target virtual resource at the moment its collision count with the scene elements reaches the preset collision count during the launch as a candidate use position; and

taking the position of the target virtual resource a target preset time after it passes the candidate use position during the launch as the use position of the target virtual resource in the virtual scene.

6. The method according to claim 5, wherein before the taking the position of the target virtual resource a target preset time after it passes the candidate use position during the launch as the use position of the target virtual resource in the virtual scene, the method further comprises:

after the target virtual resource reaches the candidate use position during the launch, acquiring a second correspondence between preset collision counts and preset times, wherein a preset time indicates how long the target virtual resource takes to travel from the candidate use position to the use position during the launch; and

determining, according to the second correspondence, the preset time corresponding to the preset collision count of the target virtual resource, and taking that preset time as the target preset time.

7. The method according to claim 4, wherein before the determining, in response to the launch operation for the target virtual resource, the actual collision count of the target virtual resource with the scene elements during the launch, the method further comprises:

displaying, on the graphical user interface, first interaction trajectory prompt information of the target virtual resource and the virtual scene, wherein the first interaction trajectory prompt information comprises first collision point identifiers marking collisions of the target virtual resource with the scene elements; and

determining, according to the first collision point identifiers, the actual collision count of the target virtual resource with the scene elements during the launch.

8. The method according to claim 4, wherein the graphical user interface comprises a scene viewing control, and before the determining, in response to the launch operation for the target virtual resource, the actual collision count of the target virtual resource with the scene elements during the launch, the method further comprises:

displaying, in the scene viewing control, second interaction trajectory prompt information of the target virtual resource and the virtual scene, wherein the second interaction trajectory prompt information comprises second collision point identifiers marking collisions of the target virtual resource with the scene elements and a use position identifier of the target virtual resource; and

adjusting the use position identifier in response to an adjustment operation on the trigger condition.

9. The method according to claim 8, wherein before the rendering the use effect of the target virtual resource in the virtual scene according to the use position, the method further comprises:

in response to a touch operation on the scene viewing control, enlarging the second interaction trajectory prompt information;

in response to a move operation on the use position identifier, changing the position of the use position identifier in the second interaction trajectory prompt information; and

determining the position in the virtual scene corresponding to the changed position of the use position identifier as an updated use position;

and the rendering the use effect of the target virtual resource in the virtual scene according to the use position comprises:

rendering the use effect of the target virtual resource in the virtual scene according to the updated use position.

10. The method according to claim 1, wherein the preset interaction information comprises a flight distance, and the determining, in response to the setting operation for the trigger condition, the trigger condition under which the target virtual resource produces the use effect in the virtual scene comprises:

in response to a setting operation for the flight distance, determining a preset distance the target virtual resource needs to fly in the virtual scene before producing the use effect; and

taking the preset distance as the trigger condition under which the target virtual resource produces the use effect in the virtual scene.

11. The method according to claim 10, wherein the determining, in response to the launch operation for the target virtual resource, the actual interaction information between the target virtual resource and the virtual scene comprises:

in response to the launch operation for the target virtual resource, determining an actual flight distance of the target virtual resource during the launch; and

taking the actual flight distance as the actual interaction information between the target virtual resource and the virtual scene.

12. The method according to claim 11, wherein the determining, if the actual interaction information satisfies the trigger condition, the use position of the target virtual resource in the virtual scene comprises:

if the actual flight distance is greater than or equal to the preset distance, taking the position of the target virtual resource at the moment its flight distance reaches the preset distance during the launch as the use position.

13. An apparatus for controlling the use of virtual resources, wherein a graphical user interface is provided through a terminal device, the graphical user interface comprises a virtual scene and a virtual object located in the virtual scene, and the virtual object is configured to execute game behaviors in response to a touch operation on the graphical user interface, the apparatus comprising:

a first determining unit, configured to determine, in response to a setting operation for a trigger condition, a trigger condition under which a target virtual resource produces a use effect in the virtual scene, wherein the trigger condition is preset interaction information between the target virtual resource and the virtual scene;

a second determining unit, configured to determine, in response to a launch operation for the target virtual resource, actual interaction information between the target virtual resource and the virtual scene;

a third determining unit, configured to determine a use position of the target virtual resource in the virtual scene if the actual interaction information satisfies the trigger condition; and

a rendering unit, configured to render the use effect of the target virtual resource in the virtual scene according to the use position.

14. A computer device, comprising:

a memory for storing a computer program;

a processor for implementing the steps of the method for controlling the use of virtual resources according to any one of claims 1 to 12 when executing the computer program.

15. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for controlling the use of virtual resources according to any one of claims 1 to 12.

Technical Field

The present application relates to the field of game technologies, and in particular, to a method and an apparatus for controlling the use of virtual resources, a computer device, and a storage medium.

Background

With the development of science and technology, electronic games played on electronic device platforms, such as first-person shooter games and third-person shooter games, have become an important form of leisure and entertainment. To make games more engaging, the use of virtual resources such as virtual props and/or skills is an important gameplay element, for example, throwing a smoke grenade or a fragmentation grenade into the virtual scene. However, when a virtual resource such as a flashbang is launched, the time at which its effect occurs is fixed, so launching it demands precise timing from the player; a poorly timed launch may cause the player to accidentally injure himself or fail to injure the enemy, which limits the practicality of the virtual resource.

Disclosure of Invention

The embodiments of the present application provide a method and an apparatus for controlling the use of virtual resources, a computer device, and a storage medium, which enable a player to accurately deliver a target virtual resource to a desired position in the virtual scene.

The embodiments of the present application provide a method for controlling the use of virtual resources, wherein a graphical user interface is provided through a terminal device, the graphical user interface comprises a virtual scene and a virtual object located in the virtual scene, and the virtual object is configured to execute game behaviors in response to a touch operation on the graphical user interface, the method comprising:

in response to a setting operation for a trigger condition, determining a trigger condition under which a target virtual resource produces a use effect in the virtual scene, wherein the trigger condition is preset interaction information between the target virtual resource and the virtual scene;

in response to a launch operation for the target virtual resource, determining actual interaction information between the target virtual resource and the virtual scene;

if the actual interaction information satisfies the trigger condition, determining a use position of the target virtual resource in the virtual scene; and

rendering the use effect of the target virtual resource in the virtual scene according to the use position.
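The four steps above admit a compact sketch. The following Python is purely illustrative (the application does not prescribe an implementation, and every name here is hypothetical): the trigger condition and the actual interaction information are modelled as comparable counters, and the use effect is rendered only once the trigger is satisfied.

```python
def render_use_effect(position):
    """Placeholder for rendering the resource's use effect in the scene."""
    return f"effect rendered at {position}"

def control_virtual_resource(trigger_condition, actual_interaction, position_of):
    """Render the use effect once the actual interaction satisfies the trigger.

    trigger_condition  -- preset interaction info (e.g. required collision count)
    actual_interaction -- interaction info observed during the launch
    position_of        -- callback resolving the resource's use position
    """
    if actual_interaction >= trigger_condition:          # trigger satisfied
        use_position = position_of(actual_interaction)   # determine use position
        return render_use_effect(use_position)           # render at that position
    return None                                          # no effect yet
```

Note that the launch timing never appears in the check: only the interaction information does, which is the point of the scheme.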

Correspondingly, the embodiments of the present application further provide an apparatus for controlling the use of virtual resources, wherein a graphical user interface is provided through a terminal device, the graphical user interface comprises a virtual scene and a virtual object located in the virtual scene, and the virtual object is configured to execute game behaviors in response to a touch operation on the graphical user interface, the apparatus comprising:

a first determining unit, configured to determine, in response to a setting operation for a trigger condition, a trigger condition under which a target virtual resource produces a use effect in the virtual scene, wherein the trigger condition is preset interaction information between the target virtual resource and the virtual scene;

a second determining unit, configured to determine, in response to a launch operation for the target virtual resource, actual interaction information between the target virtual resource and the virtual scene;

a third determining unit, configured to determine a use position of the target virtual resource in the virtual scene if the actual interaction information satisfies the trigger condition; and

a rendering unit, configured to render the use effect of the target virtual resource in the virtual scene according to the use position.

Optionally, the preset interaction information comprises a collision count, the virtual scene comprises scene elements, and the first determining unit is further configured to:

in response to the setting operation for the trigger condition, determine a preset collision count, that is, the number of times the target virtual resource needs to collide with the scene elements before producing the use effect in the virtual scene; and

take the preset collision count as the trigger condition under which the target virtual resource produces the use effect in the virtual scene.

Optionally, the graphical user interface comprises a collision setting control, and the first determining unit is further configured to:

acquire a first correspondence between the number of touches on the collision setting control and the preset collision count; and

in response to the number of touches on the collision setting control, determine, according to the first correspondence, the preset collision count the target virtual resource needs to reach with the scene elements before producing the use effect in the virtual scene.
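As an illustration of the first correspondence described above, the following hypothetical sketch maps the number of touches on the collision setting control to a preset collision count. The concrete mapping, with taps cycling through one to three required collisions, is an assumption made only for this example.

```python
# Assumed first correspondence: taps cycle through 1, 2, 3 required collisions.
FIRST_CORRESPONDENCE = {1: 1, 2: 2, 3: 3}

def preset_collisions_from_touches(touch_count):
    """Map the number of touches on the control to the preset collision count."""
    # Cycle back to the start when the player keeps tapping (an assumption;
    # the application only requires that a correspondence exists).
    key = (touch_count - 1) % len(FIRST_CORRESPONDENCE) + 1
    return FIRST_CORRESPONDENCE[key]
```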

Optionally, the second determining unit is further configured to:

in response to the launch operation for the target virtual resource, determine an actual collision count of the target virtual resource with the scene elements during the launch; and

take the actual collision count as the actual interaction information between the target virtual resource and the virtual scene.

Optionally, the third determining unit is further configured to:

if the actual collision count is greater than or equal to the preset collision count, take the position of the target virtual resource at the moment its collision count with the scene elements reaches the preset collision count during the launch as a candidate use position; and

take the position of the target virtual resource a target preset time after it passes the candidate use position during the launch as the use position of the target virtual resource in the virtual scene.

Optionally, the third determining unit is further configured to:

after the target virtual resource reaches the candidate use position during the launch, acquire a second correspondence between preset collision counts and preset times, wherein a preset time indicates how long the target virtual resource takes to travel from the candidate use position to the use position during the launch; and

determine, according to the second correspondence, the preset time corresponding to the preset collision count of the target virtual resource, and take that preset time as the target preset time.
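The candidate use position and the second correspondence described above can be sketched as follows. All concrete values are assumptions for illustration: the second correspondence here maps a preset collision count to a preset time in seconds, and flight after the candidate position is modelled as straight-line motion at constant velocity.

```python
# Assumed second correspondence: more required collisions -> shorter extra flight.
SECOND_CORRESPONDENCE = {1: 0.5, 2: 0.3, 3: 0.1}  # preset time, in seconds

def use_position(candidate_position, velocity, preset_collisions):
    """Advance the candidate use position by velocity * target preset time."""
    t = SECOND_CORRESPONDENCE[preset_collisions]  # look up the preset time
    x, y = candidate_position
    vx, vy = velocity
    return (x + vx * t, y + vy * t)               # position after that time
```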

Optionally, the second determining unit is further configured to:

display, on the graphical user interface, first interaction trajectory prompt information of the target virtual resource and the virtual scene, wherein the first interaction trajectory prompt information comprises first collision point identifiers marking collisions of the target virtual resource with the scene elements; and

determine, according to the first collision point identifiers, the actual collision count of the target virtual resource with the scene elements during the launch.
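One hedged way to realise the step above is to count the first collision point identifiers contained in the displayed trajectory prompt; the marker representation below is hypothetical.

```python
def actual_collisions(trajectory_prompt):
    """Count the collision-point identifiers in the displayed trajectory.

    trajectory_prompt is assumed to be a list of marker dicts, each with a
    "type" key ("collision_point" for a collision point identifier).
    """
    return sum(1 for marker in trajectory_prompt
               if marker["type"] == "collision_point")
```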

Optionally, the graphical user interface comprises a scene viewing control, and the second determining unit is further configured to:

display, in the scene viewing control, second interaction trajectory prompt information of the target virtual resource and the virtual scene, wherein the second interaction trajectory prompt information comprises second collision point identifiers marking collisions of the target virtual resource with the scene elements and a use position identifier of the target virtual resource; and

adjust the use position identifier in response to an adjustment operation on the trigger condition.

Optionally, the rendering unit is further configured to:

in response to a touch operation on the scene viewing control, enlarge the second interaction trajectory prompt information;

in response to a move operation on the use position identifier, change the position of the use position identifier in the second interaction trajectory prompt information;

determine the position in the virtual scene corresponding to the changed position of the use position identifier as an updated use position; and

render the use effect of the target virtual resource in the virtual scene according to the updated use position.

Optionally, the preset interaction information comprises a flight distance, and the first determining unit is further configured to:

in response to a setting operation for the flight distance, determine a preset distance the target virtual resource needs to fly in the virtual scene before producing the use effect; and

take the preset distance as the trigger condition under which the target virtual resource produces the use effect in the virtual scene.

Optionally, the second determining unit is further configured to:

in response to the launch operation for the target virtual resource, determine an actual flight distance of the target virtual resource during the launch; and

take the actual flight distance as the actual interaction information between the target virtual resource and the virtual scene.

Optionally, the third determining unit is further configured to:

if the actual flight distance is greater than or equal to the preset distance, take the position of the target virtual resource at the moment its flight distance reaches the preset distance during the launch as the use position.
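The flight-distance variant described above can be sketched as follows, assuming straight-line flight from the launch point; the function name and the 2D representation are illustrative only.

```python
import math

def flight_use_position(start, direction, preset_distance, actual_distance):
    """Return the use position once the flown distance reaches the preset one."""
    if actual_distance < preset_distance:
        return None                      # trigger not yet satisfied
    # Normalise the flight direction and step preset_distance along it.
    norm = math.hypot(direction[0], direction[1])
    ux, uy = direction[0] / norm, direction[1] / norm
    return (start[0] + ux * preset_distance, start[1] + uy * preset_distance)
```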

Similarly, an embodiment of the present application further provides a computer device, including:

a memory for storing a computer program;

a processor for implementing the steps of any of the above methods for controlling the use of virtual resources when executing the computer program.

In addition, the embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of any of the above methods for controlling the use of virtual resources.

The embodiments of the present application provide a method and an apparatus for controlling the use of virtual resources, a computer device, and a storage medium. Before launching a target virtual resource, the player can preset the trigger condition that must be satisfied for the target virtual resource to produce its use effect in the virtual scene; no matter when the player launches the target virtual resource, the use effect is rendered only at the use position where the actual interaction information between the target virtual resource and the virtual scene satisfies the trigger condition. The method places no strict requirement on when the player launches the virtual resource, prevents the player from accidentally injuring himself or failing to injure the enemy because of a poorly timed launch, and improves the practicality and ease of use of the virtual resource.

Drawings

To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic diagram of a system for controlling the use of virtual resources according to an embodiment of the present application;

FIG. 2 is a schematic flowchart of a method for controlling the use of virtual resources according to an embodiment of the present application;

FIG. 3 is a schematic diagram of a graphical user interface provided by an embodiment of the present application;

FIG. 4 is a schematic diagram of a collision setting control provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of first interaction trajectory prompt information and second interaction trajectory prompt information provided by an embodiment of the present application;

FIG. 6 is a schematic structural diagram of an apparatus for controlling the use of virtual resources according to an embodiment of the present application;

FIG. 7 is a schematic structural diagram of a computer device provided by an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are merely some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art from the embodiments herein without creative effort fall within the scope of protection of the present application.

The embodiments of the present application provide a method and an apparatus for controlling the use of virtual resources, a computer device, and a storage medium. Specifically, the method may be executed by a computer device, which may be a terminal or a server. The terminal may be a device such as a smartphone, tablet computer, notebook computer, touch screen, game console, personal computer (PC), or personal digital assistant (PDA), and may further run a client such as a game application client, a browser client carrying a game program, or an instant messaging client. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network services, big data, and artificial intelligence platforms.

For example, when the method for controlling the use of virtual resources runs on a terminal, the terminal device stores a game application and presents a virtual scene in a graphical user interface. The game application is downloaded, installed, and run through the terminal device, and the virtual scene is displayed on the graphical user interface. The terminal device may present the virtual scene to the user in various ways; for example, the virtual scene may be rendered on a display screen of the terminal device or presented by holographic projection. The terminal device may include a touch display screen for presenting the virtual scene and receiving operation instructions generated by the user acting on the graphical user interface, and a processor for running the game, generating the game screen, responding to the operation instructions, and controlling the display of the graphical user interface and the virtual scene on the touch display screen.

For example, when the method for controlling the use of virtual resources runs on a server, the game may be a cloud game. Cloud gaming is a gaming mode based on cloud computing. In a cloud game, the entity that runs the game application is separated from the entity that presents the game screen: the storage and execution of the method are completed on a cloud game server, while the game screen is presented at a cloud game client. The client is mainly used for receiving and sending game data and presenting game screens; it may be any display device with a data transmission function near the user side, such as a mobile terminal, a television, a computer, a palmtop computer, or a personal digital assistant, but the device processing the game data is the cloud game server. During play, the user operates the cloud game client to send operation instructions to the cloud game server; the server runs the game according to the instructions, encodes and compresses data such as game screens, and returns the data to the client over the network; the client finally decodes the data and outputs the game screens.
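The cloud-gaming round trip described above (the client sends an operation instruction, the server runs the game and returns encoded and compressed screen data, and the client decodes and outputs it) can be sketched minimally; `zlib` stands in for a real video codec, and every name here is hypothetical.

```python
import zlib

def cloud_server_step(game_state, instruction):
    """Advance the game with the instruction and return a compressed 'frame'."""
    game_state = game_state + [instruction]        # run the game
    frame = ",".join(game_state).encode("utf-8")   # render a textual 'frame'
    return game_state, zlib.compress(frame)        # encode and compress

def cloud_client_decode(encoded_frame):
    """Decode the frame returned over the network for display."""
    return zlib.decompress(encoded_frame).decode("utf-8")
```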

Referring to FIG. 1, FIG. 1 is a schematic diagram of a system for controlling the use of virtual resources according to an embodiment of the present application. The system may include at least one terminal 101 and at least one game server 102. A terminal 101 held by a user may connect to the game server 102 of a game through a network 103, which may be wired or wireless; the wireless network may be a wireless local area network (WLAN), a local area network (LAN), a cellular network, a 2G network, a 3G network, a 4G network, a 5G network, or the like. The terminal 101 provides a graphical user interface that includes a virtual scene and a virtual object located in the virtual scene, the virtual object being configured to execute game behaviors in response to touch operations on the graphical user interface. The terminal 101, in response to a setting operation for a trigger condition, determines the trigger condition under which a target virtual resource produces a use effect in the virtual scene, the trigger condition being preset interaction information between the target virtual resource and the virtual scene; in response to a launch operation for the target virtual resource, determines the actual interaction information between the target virtual resource and the virtual scene; if the actual interaction information satisfies the trigger condition, determines the use position of the target virtual resource in the virtual scene; and renders the use effect of the target virtual resource in the virtual scene according to the use position.

The game server 102 is configured to send the graphical user interface to the terminal 101.

Details are described below. It should be noted that the order in which the following embodiments are described is not intended to limit the preferred order of the embodiments.

The present embodiment will be described from the perspective of a virtual resource usage control device, which may be specifically integrated in a terminal device, where the terminal device may include a smart phone, a notebook computer, a tablet computer, a personal computer, and other devices.

The method for controlling the use of virtual resources provided in the embodiment of the present application may be executed by a processor of a terminal, as shown in fig. 2, a specific flow of the method for controlling the use of virtual resources mainly includes steps 201 to 204, which are described in detail as follows:

step 201, in response to the setting operation of the trigger condition, determining the trigger condition that the target virtual resource generates the use effect in the virtual scene, where the trigger condition is preset interaction information of the target virtual resource and the virtual scene.

In the embodiment of the application, the terminal device provides a graphical user interface in advance, a virtual scene and a virtual object located in the virtual scene are displayed through the graphical user interface, and the virtual object is configured to respond to touch operation aiming at the graphical user interface to execute game behaviors.

In this embodiment, the graphical user interface is the game picture displayed on the display screen of the terminal after the terminal executes the game application, and the virtual scene of the graphical user interface may include game items and/or a plurality of virtual items (buildings, trees, mountains, etc.) contained in the game world environment. The placement of virtual items such as buildings, mountains, and walls in the virtual scene forms the spatial layout of the virtual scene. Further, the game corresponding to the game application may be a first-person shooter game, a multiplayer online role-playing game, or the like. For example, the graphical user interface shown in fig. 3 may include an obstacle 306 composed of 4 virtual containers and an obstacle 307 composed of 5 containers, and may also include a movement control 301 for controlling the movement of the virtual object, a resource control 305 for triggering the launching operation of the target virtual resource, an attack control 303 for controlling the virtual object to attack, and other skill controls 304.

In embodiments of the present application, the virtual object may be a game character that a player operates through the game application. For example, the virtual object may be a virtual character (such as a simulated character or an animated character), a virtual animal, and so forth. The game behaviors of the virtual object in the virtual scene include, but are not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, throwing, and releasing skills.

In the embodiment of the application, to make it convenient for a player to control a virtual object to attack a distant enemy, virtual resources can be provided in the game. Virtual resources can include props and skills, and can be resources that need to be launched, such as strapped bombs, flash bombs, and smoke bombs. For example, a player can control a virtual object to launch a flash bomb to a certain place within the field of view, so that every other player within the range selected by the player has a whited-out field of view, cannot see the virtual scene in the graphical user interface, and cannot take evasive action against attacks, allowing more players to be defeated quickly. Virtual resources can be launched directly by the virtual object or launched through a virtual vehicle.

In the embodiment of the application, the virtual object is controlled to equip a target virtual resource in response to a use trigger operation on the target virtual resource. The use trigger operation on the target virtual resource is the operation that needs to be performed when the virtual object is to use the target virtual resource to affect an enemy in the virtual scene, and the use trigger operations of different virtual resources may be the same or different. The use trigger operation may be a click, a long press, and/or a double click, among other operations.

In the embodiment of the application, the graphical user interface may include a resource trigger control, and when a player performs a touch operation on the resource trigger control, the use trigger operation of the target virtual resource may be triggered. In addition, different virtual resources may correspond to the same resource trigger control, and may also correspond to different resource trigger controls.

In this embodiment of the application, the virtual scene includes scene elements; for example, a scene element may be a wall, the virtual ground, and the like in the virtual scene. After the target virtual resource is launched into the virtual scene, the target virtual resource may collide with a scene element. In this case, the preset interaction information may include a number of collisions, and step 201, "in response to the setting operation on the trigger condition, determining the trigger condition under which the target virtual resource generates the use effect in the virtual scene", may be:

responding to the setting operation of the trigger condition, and determining the preset collision times of the target virtual resource which needs to collide with the scene element before generating the use effect in the virtual scene;

and taking the preset collision times as a trigger condition for generating a use effect of the target virtual resource in the virtual scene.
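The two items above can be sketched as a small data model. This is a minimal illustration only; the names `TriggerCondition` and `set_collision_trigger` are assumptions, not identifiers from the claimed method.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TriggerCondition:
    """Preset interaction information that must be met before the use effect fires."""
    preset_collisions: Optional[int] = None   # collisions with scene elements required
    preset_distance: Optional[float] = None   # flight distance required, in scene units

def set_collision_trigger(preset_collisions: int) -> TriggerCondition:
    # The player's setting operation fixes how many collisions must occur
    # before the target virtual resource generates its use effect.
    if preset_collisions < 1:
        raise ValueError("at least one collision is required")
    return TriggerCondition(preset_collisions=preset_collisions)
```

Only one of the two fields is set for a given resource, depending on which trigger type the player configured.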

In this embodiment of the present application, a collision setting control may be set on the graphical user interface, and the preset collision number is determined through the collision setting control, specifically, the step "determining, in response to the setting operation of the trigger condition, the preset collision number that the target virtual resource needs to collide with the scene element before generating the use effect in the virtual scene" may be:

acquiring a first correspondence between the number of touches on the collision setting control and the preset number of collisions;

and responding to the touch times of the collision setting control, and determining the preset collision times of the target virtual resources, which need to collide with the scene elements, before the target virtual resources generate the use effect in the virtual scene according to the first corresponding relation.

In this embodiment of the application, the first correspondence may be that when the player has touched the collision setting control zero times, the corresponding preset number of collisions is 1, and when the player has touched the collision setting control once, the corresponding preset number of collisions is 2; the number of touches and the preset number of collisions are not limited to these values. To avoid an overly long use time of the target virtual resource, the maximum preset number of collisions can be set to 2; in that case, when the player touches the collision setting control again, the corresponding preset number of collisions returns to 1, and so on, cyclically. The first correspondence may also be that the preset number of collisions equals the number of touches. The first correspondence is not limited and can be set flexibly according to the actual situation. For example, as shown in fig. 4, after the virtual object equips the target virtual resource, a collision setting control 401 is displayed on the graphical user interface, and the attack control for controlling the virtual object to attack may be changed into a launch control 402 for launching the target virtual resource.
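The cyclic first correspondence in the example above (zero touches give 1 collision, one touch gives 2, then back to 1) reduces to a one-line mapping. The function name and the cap of 2 are assumptions taken from the example in the text.

```python
def preset_collisions_from_touches(touch_count: int, max_collisions: int = 2) -> int:
    """First correspondence: maps the number of touches on the collision
    setting control to a preset collision count that cycles 1, 2, 1, 2, ..."""
    return touch_count % max_collisions + 1
```

So 0, 1, 2, and 3 touches yield preset counts of 1, 2, 1, and 2 respectively.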

In an embodiment of the application, the preset number of collisions may also be determined according to how long the player touches the collision setting control: the longer the touch, the larger the preset number of collisions.

In an embodiment of the application, an input control for inputting the preset collision times can be provided on the graphical user interface, and a player can directly input the preset collision times to be set in the input control without setting through the collision setting control.

In this embodiment of the present application, the preset interaction information may further include a flight distance. In this case, step 201, "in response to the setting operation on the trigger condition, determining the trigger condition under which the target virtual resource generates the use effect in the virtual scene", may be:

responding to the setting operation of the flying distance, and determining a preset distance which is required to fly in the virtual scene before the target virtual resource generates a use effect in the virtual scene;

and taking the preset distance as a trigger condition for generating a use effect of the target virtual resource in the virtual scene.

In this embodiment of the application, the preset distance may be a distance that the target virtual resource flies in the virtual scene along a flight trajectory, and the flight trajectory may be a straight line or a curve.
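Since the flight trajectory may be a straight line or a curve, the distance flown is naturally measured along the trajectory. A minimal sketch, assuming the trajectory is sampled as a list of 3-D points:

```python
import math

def distance_flown(trajectory_points):
    """Length of a sampled flight trajectory: the sum of straight segments
    between consecutive (x, y, z) sample points.  For a straight-line flight
    this equals the start-to-end distance; for a curve it approximates the
    arc length."""
    return sum(math.dist(a, b) for a, b in zip(trajectory_points, trajectory_points[1:]))
```

Comparing this running total against the preset distance tells the engine when the trigger condition is reached.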

In the embodiment of the application, to enable a player to control the operated virtual object to attack enemy virtual objects, an attack control can be arranged in the graphical user interface and used to instruct the virtual object to launch an attack in the virtual scene. When it is determined that the target virtual resource is to be launched, the target virtual resource can serve as the attack resource, and the attack control can be changed into a launch control for the target virtual resource; the target virtual resource is then launched in response to a touch operation on the launch control.

Step 202, responding to the launching operation of the target virtual resource, and determining the actual interaction information of the target virtual resource and the virtual scene.

In this embodiment of the present application, the graphical user interface may include a launch control of the target virtual resource, and the target virtual resource is determined to be launched in response to a touch operation on the launch control.

In this embodiment of the application, when the trigger condition is a preset number of collisions between the target virtual resource and scene elements, the actual interaction information may be the actual number of collisions between the target virtual resource and scene elements, and step 202, "in response to the launching operation on the target virtual resource, determining the actual interaction information of the target virtual resource and the virtual scene", may be:

responding to the releasing operation of the target virtual resource, and determining the actual collision frequency of the target virtual resource colliding with the scene element in the releasing process;

and taking the actual collision times as actual interactive information of the target virtual resources and the virtual scene.
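Determining the actual number of collisions amounts to counting collision events reported during the launch. The event representation below is an assumption for illustration, not part of the described method.

```python
def actual_collision_count(flight_events):
    """Actual interaction information: the number of times the target virtual
    resource collided with a scene element while being launched.  Each event
    is assumed to be a (kind, position) tuple emitted by the physics step."""
    return sum(1 for kind, _position in flight_events if kind == "collision")
```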

In the embodiment of the application, to determine the actual number of collisions more conveniently, the collision points generated between the target virtual resource and scene elements during launch may be determined first, and the actual number of collisions determined from the number of collision points. To let the player see the actual number of collisions more intuitively, an identifier of the collision point generated at each collision may be displayed on the graphical user interface. Specifically, the step "in response to the launching operation on the target virtual resource, determining the actual number of collisions between the target virtual resource and scene elements during launch" may be:

displaying first interaction track prompt information of the target virtual resource and the virtual scene on a graphical user interface, wherein the first interaction track prompt information comprises a first collision point identifier of collision of the target virtual resource and the scene element;

and determining the actual collision frequency of the target virtual resource colliding with the scene element in the releasing process according to the first collision point identifier.

In this embodiment of the application, when the first collision point identifier is displayed, the player may adjust the actual interaction information of the target virtual resource in the virtual scene according to the position of the displayed first collision point identifier in the virtual scene, that is, adjust the flight trajectory of the target virtual resource in the virtual scene, so that the position of the collision point between the target virtual resource and the scene element changes, and in turn the use position of the target virtual resource changes, thereby acting on the enemy more effectively. In addition, the player can also adjust the preset number of collisions of the target virtual resource in the virtual scene, so that the use position of the target virtual resource changes and acts on the enemy more effectively.

In this embodiment of the application, the first interaction trajectory prompt message may be displayed in a virtual scene, and the first interaction trajectory prompt message may be a flight trajectory of the target virtual resource in the virtual scene. The display mode of the first interaction track prompt message is not limited and can be flexibly set according to the actual situation. For example, as shown in fig. 5, the first interaction trajectory prompt message may be a straight line 501, when the target virtual resource flies along the first interaction trajectory prompt message during the delivery process, and when the target virtual resource collides with the obstacle 306, a collision point is generated, and a first collision point identifier 502 is displayed in the first interaction trajectory prompt message 501.

In this embodiment of the present application, the graphical user interface includes a scene viewing control, which is configured to display a top view of the entire virtual scene, and the collision point identifier generated at each collision may be displayed in the scene viewing control. Specifically, before the step "in response to the launching operation on the target virtual resource, determining the actual number of collisions between the target virtual resource and scene elements during launch", the method may further include:

displaying second interaction track prompt information of the target virtual resource and the virtual scene in the scene viewing control, wherein the second interaction track prompt information comprises a second collision point identifier of the target virtual resource colliding with the scene element and a use position identifier of the target virtual resource;

and adjusting the use position identifier in response to the adjustment operation of the trigger condition.

In this embodiment of the application, the second interaction trajectory prompt message is displayed in the scene viewing control, and since the scene viewing control is a top view of the virtual scene, the second interaction trajectory prompt message may be a top view of the first interaction trajectory prompt message in the virtual scene. For example, as shown in fig. 5, a scene viewing control 503 is disposed at the upper left corner of the graphical user interface, the scene viewing control 503 includes second interaction track prompt information 504, and the second interaction track prompt information 504 includes a second collision point identifier 505 and a use position identifier 506.

In the embodiment of the application, the display modes of the first collision point identifier and the second collision point identifier are not limited and can be flexibly set according to actual conditions, and the display modes of the first collision point identifier and the second collision point identifier can be the same or different.

In this embodiment of the application, the use position identifier in the second interaction trajectory prompt message is used to indicate the use position of the target virtual resource in the virtual scene. The position of the use position identifier in the second interaction trajectory prompt message can be adjusted so as to adjust the use position of the target virtual resource in the virtual scene. Specifically, before the step "rendering the use effect of the target virtual resource in the virtual scene according to the use position", the method may include:

responding to the touch operation of the scene viewing control, and amplifying the prompt information of the second interaction track;

changing the position of the using position identifier in the second interaction track prompt message in response to the moving operation of the using position identifier;

and determining the corresponding position of the changed position of the using position identifier in the virtual scene as the updated using position.

After determining the updated usage location, step 204 "render the usage effect of the target virtual resource in the virtual scene according to the usage location" may be: and rendering the use effect of the target virtual resource in the virtual scene according to the updated use position.
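Mapping the dragged use position identifier from the top-view scene viewing control back to scene coordinates is a simple proportional conversion. The rectangular minimap and flat ground plane below are illustrative assumptions.

```python
def minimap_to_scene(marker_xy, minimap_size, scene_size):
    """Convert a marker position in the scene viewing control (top view)
    to the corresponding ground-plane position in the virtual scene."""
    mx, my = marker_xy
    mmw, mmh = minimap_size
    sw, sh = scene_size
    return (mx / mmw * sw, my / mmh * sh)
```

For example, a marker at (50, 25) on a 100 x 100 minimap of a 2000 x 2000 scene maps to scene position (1000, 500), which becomes the updated use position.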

In this embodiment of the application, the graphical user interface may include a movement control for the use position identifier; when a touch operation is performed on the movement control, the position of the use position identifier in the second interaction trajectory prompt message may be changed. In addition, the use position identifier can be dragged on the graphical user interface to change its position in the second interaction trajectory prompt message.

In this embodiment of the application, when the actual interaction information is the flight distance, step 202, "in response to the launching operation on the target virtual resource, determining the actual interaction information of the target virtual resource and the virtual scene", may be:

responding to the launching operation of the target virtual resource, and determining the actual flying distance of the target virtual resource in the launching process;

and taking the actual distance as actual interaction information of the target virtual resource and the virtual scene.

Step 203, if the actual interaction information meets the trigger condition, determining the use position of the target virtual resource in the virtual scene.

In this embodiment of the application, when the actual interaction information is the actual number of collisions, step 203, "if the actual interaction information satisfies the trigger condition, determining the use position of the target virtual resource in the virtual scene", may be:

if the actual number of collisions is greater than or equal to the preset number of collisions, determining the position of the target virtual resource at the moment when its collision count with scene elements reaches the preset number of collisions during launch as a candidate use position;

and determining the position of the target virtual resource at the target preset time after the candidate use position during launch as the use position of the target virtual resource in the virtual scene.
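The two determinations above can be sketched over a frame-sampled trajectory. The frame indexing and the 1-frame-per-time-unit default are assumptions for illustration.

```python
def determine_use_position(trajectory, collision_frames, preset_collisions,
                           target_preset_time, frames_per_time_unit=1):
    """trajectory[i] is the resource position at frame i; collision_frames
    lists the frames at which it hit a scene element.  The candidate use
    position is where the preset collision count is reached, and the use
    position is where the resource sits target_preset_time later."""
    if len(collision_frames) < preset_collisions:
        return None  # trigger condition not met: no use effect is rendered
    candidate_frame = collision_frames[preset_collisions - 1]
    final_frame = min(candidate_frame + round(target_preset_time * frames_per_time_unit),
                      len(trajectory) - 1)
    return trajectory[final_frame]
```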

In this embodiment of the present application, the preset time required by the target virtual resource to travel from the candidate use position to the use position may be determined according to the preset number of collisions set by the player. Before the step "determining the position of the target virtual resource at the target preset time after the candidate use position during launch as the use position of the target virtual resource in the virtual scene", the method further includes:

acquiring a second correspondence between the preset number of collisions and a preset time after the target virtual resource reaches the candidate use position during launch, where the preset time indicates how long the target virtual resource takes to travel from the candidate use position to the use position during launch;

and determining, according to the second correspondence, the preset time from the candidate use position to the use position for the preset number of collisions of the target virtual resource, and using it as the target preset time.

In this embodiment of the application, the second corresponding relationship may be that when the preset collision frequency is 0, the preset time is m seconds, and when the preset collision frequency is 1, the preset time is s seconds, and the preset times corresponding to different preset collision frequencies may be the same or different.
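The second correspondence is just a lookup from the preset collision count to the delay between the candidate use position and the use position. The concrete values below are placeholder assumptions standing in for the unspecified m and s in the text.

```python
# Second correspondence: preset collision count -> delay in seconds between
# reaching the candidate use position and the use effect being generated.
SECOND_CORRESPONDENCE = {0: 0.5, 1: 1.0, 2: 1.5}

def target_preset_time_for(preset_collisions: int, default: float = 1.0) -> float:
    """Look up the target preset time for a given preset collision count,
    falling back to a default when no entry exists."""
    return SECOND_CORRESPONDENCE.get(preset_collisions, default)
```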

In this embodiment of the application, the target virtual resource reaches the use position from the candidate use position after the target preset time. The preset number of collisions of the target virtual resource need not have the second correspondence with the target preset time, and the target preset times corresponding to target virtual resources with different preset numbers of collisions may be the same. For example, the target virtual resource may reach the use position t seconds after reaching the candidate use position, that is, the use effect is generated t seconds after the candidate use position is reached, regardless of the preset number of collisions.

In this embodiment of the application, when the actual interaction information is the actual flight distance, step 203, "if the actual interaction information satisfies the trigger condition, determining the use position of the target virtual resource in the virtual scene", may be:

and if the actual distance is greater than or equal to the preset distance, determining that the position of the target virtual resource is the use position when the flight distance of the target virtual resource reaches the preset distance in the process of putting the target virtual resource.

In this embodiment, when the flying distance of the target virtual resource reaches the preset distance, the position where the target virtual resource is located at this time may be used as a candidate using position, and the position reached by the candidate using position after the preset time is used as a using position.

In the embodiment of the application, when the actual collision frequency is less than the preset collision frequency, or the actual distance is less than the preset distance, that is, when the actual interaction information does not meet the trigger condition, the use effect of the target virtual resource is not rendered in the virtual scene.
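The check described above reduces to a single gate: the use effect is rendered only when the actual interaction information reaches the preset threshold. A standalone sketch covering both trigger types, with illustrative parameter names:

```python
def trigger_condition_met(preset_collisions=None, preset_distance=None,
                          actual_collisions=0, actual_distance=0.0):
    """Return True when the actual interaction information satisfies the
    preset trigger condition, whichever type (collision count or flight
    distance) the player configured."""
    if preset_collisions is not None:
        return actual_collisions >= preset_collisions
    if preset_distance is not None:
        return actual_distance >= preset_distance
    return False  # no trigger condition was set
```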

Step 204, rendering the use effect of the target virtual resource in the virtual scene according to the use position.

In the embodiment of the application, after the use position is determined, the use effect of the target virtual resource is rendered at the use position in the virtual scene. The use effect may be set according to the specific category of the target virtual resource; for example, when the target virtual resource is a flash bomb, the use effect may be to white out at least part of the virtual scene on the screen.

In the embodiment of the application, after the player triggers the launching operation on the target virtual resource, the use of the target virtual resource in the virtual scene can still be cancelled. In this case, the specific method may be: in response to the launching operation on the target virtual resource, displaying a prop cancel area; and in response to a trigger operation on the prop cancel area, not rendering the use effect of the target virtual resource in the virtual scene.

In the embodiment of the application, the display position and display shape of the prop cancel area in the graphical user interface displaying the virtual scene are not limited and can be set flexibly according to the actual situation.

All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.

According to the method for controlling the use of virtual resources provided by the embodiment of the application, when a player launches the target virtual resource, the trigger condition that must be met before the target virtual resource generates the use effect in the virtual scene can be preset; no matter when the player launches the target virtual resource, the use effect of the target virtual resource is rendered at the use position in the virtual scene only when the actual interaction information of the target virtual resource and the virtual scene meets the trigger condition. The method places no high demand on the timing of launching virtual resources, can prevent the player from accidentally injuring themselves or failing to injure the enemy because of a poorly timed launch, and improves the practicality and usability of virtual resources.

In order to better implement the method for controlling the use of virtual resources according to the embodiments of the present application, an embodiment of the present application further provides a device for controlling the use of virtual resources, where a terminal device provides a graphical user interface, the graphical user interface includes a virtual scene and a virtual object located in the virtual scene, and the virtual object is configured to respond to a touch operation on the graphical user interface to execute a game behavior. Referring to fig. 6, fig. 6 is a schematic structural diagram of a device for controlling virtual resource usage according to an embodiment of the present disclosure. The usage control apparatus of virtual resources may include a first determination unit 601, a second determination unit 602, a third determination unit 603, and a rendering unit 604.

The first determining unit 601 is configured to determine, in response to a setting operation on a trigger condition, the trigger condition that a target virtual resource generates a use effect in a virtual scene, where the trigger condition is preset interaction information of the target virtual resource and the virtual scene;

a second determining unit 602, configured to determine, in response to a launching operation on a target virtual resource, actual interaction information of the target virtual resource and a virtual scene;

a third determining unit 603, configured to determine a use position of the target virtual resource in the virtual scene if the actual interaction information meets the trigger condition;

a rendering unit 604, configured to render the usage effect of the target virtual resource in the virtual scene according to the usage position.

Optionally, the preset interaction information includes a collision frequency, the virtual scene includes a scene element, and the first determining unit 601 is further configured to:

responding to the setting operation of the trigger condition, and determining the preset collision times of the target virtual resource which needs to collide with the scene element before generating the use effect in the virtual scene;

and taking the preset collision times as a trigger condition for generating a use effect of the target virtual resource in the virtual scene.

Optionally, the graphical user interface includes a collision setting control, and the first determining unit 601 is further configured to:

acquiring a first correspondence between the number of touches on the collision setting control and the preset number of collisions;

and responding to the touch times of the collision setting control, and determining the preset collision times of the target virtual resources, which need to collide with the scene elements, before the target virtual resources generate the use effect in the virtual scene according to the first corresponding relation.

Optionally, the second determining unit 602 is further configured to:

responding to the releasing operation of the target virtual resource, and determining the actual collision frequency of the target virtual resource colliding with the scene element in the releasing process;

and taking the actual collision times as actual interactive information of the target virtual resources and the virtual scene.

Optionally, the third determining unit 603 is further configured to:

if the actual collision frequency is larger than or equal to the preset collision frequency, determining that the position of the target virtual resource is a candidate use position when the collision frequency of the target virtual resource and the scene element reaches the preset collision frequency in the releasing process of the target virtual resource;

and determining the position of the target virtual resource in the target preset time after the candidate use position in the release process, wherein the position is the use position of the target virtual resource in the virtual scene.

Optionally, the third determining unit 603 is further configured to:

acquiring a second corresponding relation between the preset collision times and preset time after the target virtual resource reaches the candidate use position in the releasing process, wherein the preset time is used for indicating the time from the candidate use position to the use position of the target virtual resource in the releasing process;

and determining the preset collision times of the target virtual resources according to the second corresponding relation, and using the preset time from the candidate use position to the use position as the target preset time.

Optionally, the second determining unit 602 is further configured to:

displaying first interaction track prompt information of the target virtual resource and the virtual scene on a graphical user interface, wherein the first interaction track prompt information comprises a first collision point identifier of collision of the target virtual resource and the scene element;

and determining the actual collision frequency of the target virtual resource colliding with the scene element in the releasing process according to the first collision point identifier.

Optionally, the graphical user interface includes a scene viewing control, and the second determining unit 602 is further configured to:

displaying second interaction track prompt information of the target virtual resource and the virtual scene in the scene viewing control, wherein the second interaction track prompt information comprises a second collision point identifier of the target virtual resource colliding with the scene element and a use position identifier of the target virtual resource;

and adjusting the use position identifier in response to the adjustment operation of the trigger condition.

Optionally, the rendering unit 604 is further configured to:

responding to the touch operation of the scene viewing control, and amplifying the prompt information of the second interaction track;

changing the position of the using position identifier in the second interaction track prompt message in response to the moving operation of the using position identifier;

determining the position in the virtual scene that corresponds to the changed position of the use position identifier, and taking that position as the updated use position;

and rendering the use effect of the target virtual resource in the virtual scene according to the updated use position.
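One way the marker-to-scene step could be realized, as a sketch assuming a linear scaling between the scene viewing control (a minimap-like overlay) and the virtual scene, which the source does not specify:

```python
def minimap_to_scene(marker_pos, minimap_size, scene_size):
    """Map the moved use position identifier, given in scene viewing
    control (minimap) coordinates, to the corresponding position in the
    virtual scene. The linear scaling between the two coordinate spaces
    is an assumption of this sketch, not part of the claims."""
    mx, my = marker_pos
    mw, mh = minimap_size
    sw, sh = scene_size
    return (mx / mw * sw, my / mh * sh)
```

The returned scene position would then serve as the updated use position at which the use effect is rendered.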

Optionally, the preset interaction information includes a flight distance, and the first determining unit 601 is further configured to:

determining, in response to a setting operation on the flight distance, a preset distance that the target virtual resource must fly in the virtual scene before producing a use effect in the virtual scene;

and taking the preset distance as the trigger condition for the target virtual resource to produce a use effect in the virtual scene.

Optionally, the second determining unit 602 is further configured to:

determining, in response to a launching operation on the target virtual resource, the actual flight distance of the target virtual resource during launch;

and taking the actual flight distance as the actual interaction information of the target virtual resource and the virtual scene.

Optionally, the third determining unit 603 is further configured to:

and if the actual flight distance is greater than or equal to the preset distance, taking the position at which the flight distance of the target virtual resource reaches the preset distance during launch as the use position.
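The flight-distance trigger can be sketched as follows, under the assumption (not in the source) that the launch trajectory is available as a list of sampled points:

```python
import math

def use_position_by_distance(trajectory, preset_distance):
    """Walk the sampled launch trajectory; once the cumulative flight
    distance reaches the preset distance, the current sample is taken
    as the use position. `trajectory` is a hypothetical list of (x, y)
    sample points along the flight path."""
    travelled = 0.0
    for prev, curr in zip(trajectory, trajectory[1:]):
        travelled += math.dist(prev, curr)
        if travelled >= preset_distance:  # actual distance >= preset distance
            return curr
    return None  # trigger condition not yet satisfied
```

A real engine would evaluate this per physics tick rather than over a precomputed list; the sketch only shows the comparison the claim describes.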

All of the above technical solutions may be combined arbitrarily to form optional embodiments of the present application, and details are not repeated here.

With the virtual resource use control apparatus provided by the embodiments of the present application, a player may preset, before launching a target virtual resource, the trigger condition that the target virtual resource must satisfy to produce a use effect in the virtual scene. Regardless of when the player launches the target virtual resource, the use effect is rendered only at a use position in the virtual scene that satisfies the trigger condition, and only once the actual interaction information of the target virtual resource and the virtual scene satisfies that condition. This imposes no strict requirement on the timing of the launch, prevents the player from accidentally injuring themselves or failing to injure the enemy because of ill-timed launching, and improves the practicality and usability of virtual resources.

Correspondingly, an embodiment of the present application further provides a computer device. The computer device may be a terminal, such as a smartphone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer, or a personal digital assistant. As shown in fig. 7, fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 700 includes a processor 701 having one or more processing cores, a memory 702 having one or more computer-readable storage media, and a computer program stored in the memory 702 and executable on the processor. The processor 701 is electrically connected to the memory 702. Those skilled in the art will appreciate that the configuration illustrated in the figure does not limit the computer device, which may include more or fewer components than those illustrated, combine certain components, or arrange the components differently.

The processor 701 is the control center of the computer device 700. It connects the various parts of the computer device 700 using various interfaces and lines, and performs the various functions of the computer device 700 and processes its data by running or loading the software programs and/or modules stored in the memory 702 and invoking the data stored in the memory 702, thereby monitoring the computer device 700 as a whole.

In the embodiment of the present application, the processor 701 in the computer device 700 loads the instructions corresponding to the processes of one or more application programs into the memory 702, and the processor 701 executes the application programs stored in the memory 702, thereby implementing the following functions:

determining, in response to a setting operation on the trigger condition, the trigger condition for the target virtual resource to produce a use effect in the virtual scene, the trigger condition being preset interaction information of the target virtual resource and the virtual scene; determining, in response to a launching operation on the target virtual resource, the actual interaction information of the target virtual resource and the virtual scene; if the actual interaction information satisfies the trigger condition, determining the use position of the target virtual resource in the virtual scene; and rendering the use effect of the target virtual resource in the virtual scene according to the use position.
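The four steps above can be condensed into the following sketch, where all three callables are hypothetical stand-ins for engine-specific logic rather than the claimed implementation:

```python
def control_virtual_resource(trigger_satisfied, resolve_use_position,
                             render_use_effect):
    """Condensed sketch of the claimed control flow, with the trigger
    condition already set: when the actual interaction information
    satisfies the trigger condition, determine the use position and
    render the use effect there."""
    if trigger_satisfied():                # actual vs. preset interaction info
        position = resolve_use_position()  # where the effect should occur
        render_use_effect(position)        # render the use effect there
        return position
    return None                            # condition unmet: no effect rendered
```

For example, `trigger_satisfied` could compare an actual collision count or flight distance against its preset counterpart, as in the embodiments above.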

The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.

Optionally, as shown in fig. 7, the computer device 700 further includes: a touch display screen 703, a radio frequency circuit 704, an audio circuit 705, an input unit 706, and a power supply 707. The processor 701 is electrically connected to the touch display screen 703, the radio frequency circuit 704, the audio circuit 705, the input unit 706, and the power source 707. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 7 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.

The touch display screen 703 may be used to display a graphical user interface and to receive operation instructions generated by a user acting on the graphical user interface. The touch display screen 703 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as the various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel using a finger, a stylus, or any other suitable object or accessory), generate corresponding operation instructions, and execute the corresponding programs according to those instructions. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position and orientation of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 701, and can also receive and execute commands sent by the processor 701. The touch panel may cover the display panel; when the touch panel detects a touch operation on or near it, it transmits the operation to the processor 701 to determine the type of the touch event, and the processor 701 then provides a corresponding visual output on the display panel according to the type of the touch event.
In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 703 to realize the input and output functions. However, in some embodiments, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display screen 703 can also serve as part of the input unit 706 to implement an input function.

The radio frequency circuit 704 may be used to transmit and receive radio frequency signals, so as to establish wireless communication with a network device or another computer device and to exchange signals with it.

The audio circuit 705 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. On the one hand, the audio circuit 705 may transmit the electrical signal converted from the received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 705 receives and converts into audio data; the audio data is then output to the processor 701 for processing and transmitted, for example, to another computer device via the radio frequency circuit 704, or output to the memory 702 for further processing. The audio circuit 705 may also include an earphone jack to allow a peripheral headset to communicate with the computer device.

The input unit 706 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.

The power supply 707 is used to supply power to the various components of the computer device 700. Optionally, the power supply 707 may be logically connected to the processor 701 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system. The power supply 707 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.

Although not shown in fig. 7, the computer device 700 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.

In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

As can be seen from the above, with the computer device provided by this embodiment, a player may preset, before launching a target virtual resource, the trigger condition that the target virtual resource must satisfy to produce a use effect in the virtual scene. Regardless of when the player launches the target virtual resource, the use effect of the target virtual resource is rendered at a use position in the virtual scene that satisfies the trigger condition only once the actual interaction information of the target virtual resource and the virtual scene satisfies that condition. This imposes no strict requirement on the timing of the launch, prevents the player from accidentally injuring themselves or failing to injure the enemy because of ill-timed launching, and improves the practicality and usability of virtual resources.

It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.

To this end, an embodiment of the present application provides a computer-readable storage medium in which a plurality of computer programs are stored; the computer programs can be loaded by a processor to execute the steps of any virtual resource use control method provided by the embodiments of the present application. For example, a computer program may perform the following steps:

determining, in response to a setting operation on the trigger condition, the trigger condition for the target virtual resource to produce a use effect in the virtual scene, the trigger condition being preset interaction information of the target virtual resource and the virtual scene; determining, in response to a launching operation on the target virtual resource, the actual interaction information of the target virtual resource and the virtual scene; if the actual interaction information satisfies the trigger condition, determining the use position of the target virtual resource in the virtual scene; and rendering the use effect of the target virtual resource in the virtual scene according to the use position.

The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.

Wherein, the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.

Since the computer programs stored in the storage medium can execute the steps of any virtual resource use control method provided by the embodiments of the present application, they can achieve the beneficial effects achievable by any such method. For details, refer to the foregoing embodiments, which are not repeated here.

In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

The virtual resource use control method, apparatus, computer device, and storage medium provided by the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the descriptions of the above embodiments are intended only to help in understanding the technical solutions and core ideas of the present invention. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
