Virtual object control method, device, equipment and storage medium

Document No.: 1148898 | Publication date: 2020-09-15

Note: This technology, "Virtual object control method, device, equipment and storage medium" (虚拟对象的控制方法、装置、设备及存储介质), was created by 杨金昊 and 林凌云 on 2020-07-02. Its main content is as follows: The application discloses a control method, apparatus, device, and storage medium for a virtual object, belonging to the field of human-computer interaction. The method comprises: displaying a first user interface in which a shared virtual prop is displayed, the shared virtual prop being a virtual prop placed by a second virtual object; in response to a first virtual object being located within the peripheral range of the shared virtual prop and the second virtual object having a teammate relationship with the first virtual object, displaying a second user interface that includes an operation control for the shared virtual prop, the operation control being used to operate the shared virtual prop; and in response to receiving a trigger operation on the operation control, controlling the first virtual object to use the shared virtual prop. This simplifies the interaction between virtual objects with respect to virtual props and makes the way virtual objects on the same team use virtual props more flexible.

1. A method for controlling a virtual object, the method comprising:

displaying a first user interface, wherein a shared virtual prop is displayed in the first user interface, and the shared virtual prop is a virtual prop placed by a second virtual object;

in response to a first virtual object being located within the peripheral range of the shared virtual prop and the second virtual object having a teammate relationship with the first virtual object, displaying a second user interface, wherein the second user interface comprises an operation control of the shared virtual prop, and the operation control is used for operating the shared virtual prop;

and in response to receiving a trigger operation on the operation control, controlling the first virtual object to use the shared virtual prop.

2. The method of claim 1, wherein the shared virtual prop corresponds to a detection area;

the displaying a second user interface in response to the first virtual object being located in a range of the periphery of the shared virtual item and the second virtual object having a teammate relationship with the first virtual object, comprising:

detecting whether the first virtual object and the second virtual object have the teammate relationship in response to the first virtual object being located within the detection area;

displaying the second user interface in response to the first virtual object having the teammate relationship with the second virtual object.

3. The method of claim 2, wherein the shared virtual prop corresponds to a collision box, the collision box corresponding to the detection area;

the detecting whether the first virtual object and the second virtual object have the teammate relationship in response to the first virtual object being located within the detection area comprises:

generating first trigger information in response to a three-dimensional model of the first virtual object interacting with a trigger of the collision box, the first trigger information indicating entry of the first virtual object into the detection area;

determining the identifier of the first virtual object according to the first trigger information;

determining that the first virtual object and the second virtual object have the teammate relationship in response to the identifier of the first virtual object and the identifier of the second virtual object belonging to the same team.

4. The method of any of claims 1 to 3, wherein the shared virtual prop corresponds to a detection area;

before the displaying of the second user interface in response to the first virtual object being located within the peripheral range of the shared virtual prop, the method comprises:

in response to the second virtual object not being within the detection area, switching the state of the shared virtual prop to an unoperated state.

5. The method of claim 4, wherein the shared virtual prop corresponds to a collision box, the collision box corresponding to the detection area;

the switching of the state of the shared virtual prop to the unoperated state in response to the second virtual object not being within the detection area comprises:

generating second trigger information in response to the three-dimensional model of the second virtual object interacting with a trigger of the collision box, the second trigger information indicating that the second virtual object has exited the detection area;

and determining that the shared virtual prop is in the unoperated state according to the second trigger information.

6. The method of any of claims 1 to 3, further comprising:

displaying a third user interface, wherein the third user interface comprises a second virtual environment screen and a placement control of the shared virtual prop, and the second virtual environment screen is obtained by observing the virtual environment from the perspective of the second virtual object;

in response to receiving a placement operation on the placement control, acquiring a pre-placement position of the shared virtual prop in the virtual environment;

and in response to the pre-placement position meeting a placement condition, displaying a fourth user interface, wherein the fourth user interface comprises the placed shared virtual prop and a control identifier corresponding to the shared virtual prop.

7. The method of claim 6, wherein the placement condition comprises at least one of:

the terrain at the placement position in the virtual environment is flat terrain;

the area occupied by the shared virtual prop is smaller than the area of the placement position;

and no virtual environment element is present in the placement area corresponding to the shared virtual prop.

8. The method of claim 6, further comprising:

in response to the shared virtual prop not meeting the placement condition, displaying a fifth user interface, wherein the fifth user interface comprises prompt information for prompting that the shared virtual prop cannot be placed.

9. The method of any of claims 1 to 3, further comprising:

in response to the placement time of the shared virtual prop exceeding a time threshold, switching the shared virtual prop to a disabled state;

or,

in response to the shared virtual prop receiving a disabling operation, switching the shared virtual prop to the disabled state, the disabling operation being generated by the second virtual object;

or,

and in response to the usage value of the shared virtual prop being reduced to zero, switching the shared virtual prop to the disabled state, wherein the usage value is used for representing the duration of the shared virtual prop in the virtual environment.

10. The method of any of claims 1 to 3, further comprising:

in response to a third virtual object being located within the peripheral range of the shared virtual prop and the third virtual object having a hostile relationship with the second virtual object, controlling the shared virtual prop to reduce a life value of the third virtual object;

in response to the life value of the third virtual object decreasing to zero, converting a virtual prop equipped by the third virtual object into a shared virtual prop corresponding to that virtual prop.

11. An apparatus for controlling a virtual object, the apparatus comprising:

a display module, configured to display a first user interface, wherein a shared virtual prop is displayed in the first user interface, and the shared virtual prop is a virtual prop placed by a second virtual object;

the display module being further configured to display a second user interface in response to a first virtual object being located within the peripheral range of the shared virtual prop and the second virtual object having a teammate relationship with the first virtual object, wherein the second user interface comprises an operation control of the shared virtual prop, and the operation control is used for operating the shared virtual prop;

and a control module, configured to control the first virtual object to use the shared virtual prop in response to receiving a trigger operation on the operation control.

12. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the method of controlling a virtual object according to any one of claims 1 to 10.

13. A computer-readable storage medium, having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method of controlling a virtual object according to any one of claims 1 to 10.

Technical Field

The present application relates to the field of human-computer interaction, and in particular, to a method, an apparatus, a device, and a storage medium for controlling a virtual object.

Background

In an application program based on a three-dimensional virtual environment, such as a first-person shooter game, a user can control a virtual object to use a virtual prop in the virtual environment, where the virtual prop is a virtual prop with which the virtual object is equipped.

In general, a plurality of virtual objects fight in a virtual environment, and various types of virtual props are scattered around the virtual environment for the virtual objects to pick up. Illustratively, a virtual prop is discarded by a first virtual object in the virtual environment, and after a second virtual object picks it up, ownership of the virtual prop belongs to the second virtual object. For example, virtual object a discards virtual prop a, which is picked up by virtual object b; at this time, ownership of virtual prop a belongs to virtual object b, and virtual object b can use virtual prop a.

In the above technical solution, if the second virtual object is not equipped with a virtual prop it needs to use while the first virtual object is equipped with it, the first virtual object must first discard the virtual prop, and the second virtual object can use it only after picking it up; the interaction between virtual objects with respect to virtual props is therefore complicated.

Disclosure of Invention

The embodiments of the application provide a method, an apparatus, a device, and a storage medium for controlling a virtual object, which simplify the interaction between virtual objects with respect to virtual props. The technical solution is as follows:

according to an aspect of the present application, there is provided a method of controlling a virtual object, the method including:

displaying a first user interface, wherein a shared virtual prop is displayed in the first user interface, and the shared virtual prop is a virtual prop placed by a second virtual object;

in response to a first virtual object being located within the peripheral range of the shared virtual prop and the second virtual object having a teammate relationship with the first virtual object, displaying a second user interface, wherein the second user interface comprises an operation control of the shared virtual prop, and the operation control is used for operating the shared virtual prop;

and in response to receiving a trigger operation on the operation control, controlling the first virtual object to use the shared virtual prop.

According to another aspect of the present application, there is provided an apparatus for controlling a virtual object, the apparatus including:

a display module, configured to display a first user interface, wherein a shared virtual prop is displayed in the first user interface, and the shared virtual prop is a virtual prop placed by a second virtual object;

the display module being further configured to display a second user interface in response to a first virtual object being located within the peripheral range of the shared virtual prop and the second virtual object having a teammate relationship with the first virtual object, wherein the second user interface comprises an operation control of the shared virtual prop, and the operation control is used for operating the shared virtual prop;

and a control module, configured to control the first virtual object to use the shared virtual prop in response to receiving a trigger operation on the operation control.

According to another aspect of the present application, there is provided a computer device comprising: a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the method of controlling a virtual object as described above.

According to another aspect of the present application, there is provided a computer readable storage medium having stored therein at least one instruction, at least one program, set of codes, or set of instructions that is loaded and executed by a processor to implement the method of controlling a virtual object as described above.

According to another aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method of controlling a virtual object as described in the above aspect.

The beneficial effects brought by the technical solutions provided in the embodiments of the application include at least the following:

when a first virtual object belonging to the same team as the second virtual object is located within the peripheral range of the shared virtual prop, the user can control the first virtual object to use the shared virtual prop through the operation control on the second user interface. The first virtual object does not need to be equipped with the shared virtual prop, and the second virtual object does not need to discard it into the virtual environment; the first virtual object can use it directly. This simplifies the interaction between virtual objects with respect to virtual props and makes the way virtual objects on the same team use virtual props more flexible.

Drawings

To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.

FIG. 1 is a schematic illustration of virtual object detection provided by an exemplary embodiment of the present application;

FIG. 2 is a block diagram of a computer system provided in an exemplary embodiment of the present application;

FIG. 3 is a schematic diagram of a state synchronization technique provided by an exemplary embodiment of the present application;

FIG. 4 is a schematic diagram of a frame synchronization technique provided by an exemplary embodiment of the present application;

FIG. 5 is a flowchart of a method for controlling a virtual object provided by an exemplary embodiment of the present application;

FIG. 6 is a schematic view of a camera model corresponding to a perspective of a virtual object provided by an exemplary embodiment of the present application;

FIG. 7 is a flowchart of a method for controlling a virtual object according to another exemplary embodiment of the present application;

FIG. 8 is a schematic view of a user interface provided by an exemplary embodiment of the present application;

FIG. 9 is a schematic illustration of crash box detection provided by an exemplary embodiment of the present application;

FIG. 10 is a schematic view of a user interface provided by another exemplary embodiment of the present application;

FIG. 11 is a schematic view of a user interface provided by another exemplary embodiment of the present application;

FIG. 12 is a flowchart of a method for controlling a virtual object according to another exemplary embodiment of the present application;

FIG. 13 is a schematic view of a user interface provided by another exemplary embodiment of the present application;

FIG. 14 is a schematic view of a user interface provided by another exemplary embodiment of the present application;

FIG. 15 is a flowchart of a game-based method for controlling a virtual object provided by an exemplary embodiment of the present application;

FIG. 16 is a schematic view of a user interface provided by another exemplary embodiment of the present application;

FIG. 17 is a schematic view of a user interface provided by another exemplary embodiment of the present application;

FIG. 18 is a block diagram of a control apparatus for a virtual object provided in an exemplary embodiment of the present application;

FIG. 19 is a schematic device structure diagram of a computer apparatus according to an exemplary embodiment of the present application.

Detailed Description

To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.

First, terms referred to in the embodiments of the present application are described:

Virtual environment: the virtual environment displayed (or provided) when an application runs on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated semi-fictional environment, or a purely fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.

Virtual object: a movable object in the virtual environment. The movable object can be a virtual character, a virtual animal, an animation character, and the like, such as characters, animals, plants, oil drums, walls, and stones displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional volumetric model created based on skeletal animation techniques. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies a part of the space in the three-dimensional virtual environment. "Virtual object" broadly refers to one or more virtual objects in the virtual environment.

Shared virtual prop: a virtual prop that can be used in common by virtual objects belonging to the same team or the same camp in a virtual environment. Ownership of the shared virtual prop belongs to the second virtual object that placed it, and the second virtual object can perform any processing operation on it (for example, a destruction operation or an operation of changing its placement position); a first virtual object belonging to the same team as the second virtual object has only the right of use.
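
As a minimal sketch of this ownership-versus-use-right split (the class and field names are illustrative assumptions, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    object_id: str
    team_id: str

@dataclass
class SharedProp:
    owner: VirtualObject  # the second virtual object that placed the prop

    def can_use(self, obj: VirtualObject) -> bool:
        # Any teammate of the owner (including the owner) holds the right of use.
        return obj.team_id == self.owner.team_id

    def can_manage(self, obj: VirtualObject) -> bool:
        # Only the owner may destroy the prop or change its placement position.
        return obj.object_id == self.owner.object_id
```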

Shield turret: a virtual prop in the virtual environment that can defend against firearm attacks and has a certain attack capability. A shield turret comprises a defending part and an attacking part. The defending part comprises a bulletproof shield, used to block firearm bullets, and can also defend against other lower-damage weapons (such as flash bombs); the attacking part comprises a gun turret, used to attack other virtual objects in the virtual environment. A virtual object equipped with a shield turret may place it in the virtual environment. In the embodiments of the present application, virtual objects belonging to the same team or the same camp may share one shield turret, but only the virtual object that placed the shield turret has the right to change its placement position or to destroy it.

First-person shooter (FPS) game: a shooting game that the user plays from a first-person perspective; the virtual environment screen in the game is the screen obtained by observing the virtual environment from the perspective of the first virtual object. In the game, at least two virtual objects engage in a single-round battle mode in the virtual environment. A virtual object survives by avoiding attacks initiated by other virtual objects and dangers present in the virtual environment (such as the poison circle or swamps); when a virtual object's life value drops to zero, its life in the virtual environment ends, and the virtual objects that survive to the end are the winners. Optionally, each client may control one or more virtual objects in the virtual environment, with the moment the first client joins the battle as the start time and the moment the last client exits the battle as the end time. Optionally, the competitive mode of the battle may include a solo mode, a duo mode, or a multi-player squad mode; the battle mode is not limited in the embodiments of the present application.

Equipment in the embodiments of the application refers to the virtual props carried by a virtual object before it participates in a battle; the virtual object can use the equipped virtual props in the virtual environment. In some embodiments, when virtual objects participate in a battle, each virtual object has its own backpack containing backpack grids; the virtual props owned by the virtual object are all stored in the backpack grids, and picked-up virtual props are placed in the backpack grids as well.

The method provided in the present application may be applied to a virtual reality application program, a three-dimensional map program, a military simulation program, a First-Person Shooter (FPS) game, a Multiplayer Online Battle Arena (MOBA) game, and the like. The following embodiments are illustrated with the application to games.

A game based on the virtual environment consists of one or more maps of the game world. The virtual environment in the game simulates a real-world scene, and a user can control a virtual object in the game to perform actions in the virtual environment such as walking, running, jumping, shooting, fighting, driving, placing virtual props, using virtual props, being attacked by other virtual objects, being injured in the virtual environment, and attacking other virtual objects. The interactivity is strong, and multiple users can form a team online to play a competitive game. Users control their virtual objects to fight enemy virtual objects using various virtual props in the virtual environment, and virtual objects belonging to the same team cooperate with each other until the battle ends; for example, when virtual object a is injured, virtual object b uses a healing-class virtual prop (a virtual medicine bag) to restore the life value (or energy value) of virtual object a.

In the embodiments of the application, shared virtual props are used by virtual objects belonging to the same team, which simplifies the interaction between virtual objects with respect to virtual props and helps the virtual objects cooperate with each other.

A plurality of virtual objects exist in the virtual environment, including a first virtual object and a second virtual object that belong to the same team. The second virtual object is equipped with a shared virtual prop, which it can place in the virtual environment; the shared virtual prop is a virtual prop that can be used by virtual objects belonging to the same team as the first virtual object, and it has both a defense function and an attack function. Illustratively, the shared virtual prop is a shield turret.

When the second virtual object places the shared virtual prop 10 in the virtual environment, the client determines whether the pre-placement position corresponding to the shared virtual prop 10 can accommodate it; when it can, the second virtual object places the shared virtual prop 10 at the pre-placement position. As shown in fig. 1, a corresponding square area 12 is formed around the shared virtual prop 10; the square area 12 is used to detect whether a virtual object 11 is located near the shield turret and whether a virtual object 11 located near the shield turret belongs to the same team (or the same camp) as the second virtual object. It should be noted that the square area 12 is only an illustration; in practical applications, the square area 12 is hidden in the user interface, i.e., the user cannot observe it. In other embodiments, the area 12 may be any shape.

The shared virtual prop allows operation by only one virtual object at a time, and that virtual object must belong to the same team (or the same camp) as the virtual object that placed the shared virtual prop. Illustratively, when the virtual object 11 enters the square area 12, the client determines, according to the identifier of the virtual object 11, that the virtual object 11 and the second virtual object belong to the same team, and a corresponding operation control is displayed on the client corresponding to the virtual object 11; the user can then control the virtual object 11 to operate the shared virtual prop. Alternatively, the client determines, according to the identifier of the virtual object 11, that the virtual object 11 and the second virtual object do not belong to the same team, and no operation control is displayed on the client corresponding to the virtual object 11.

In response to the time the shared virtual prop has been placed in the virtual environment exceeding a time threshold, the client automatically destroys the shared virtual prop; or, in response to the second virtual object performing a destruction operation on the shared virtual prop, the shared virtual prop is destroyed by the second virtual object; or, the shared virtual prop has a usage value representing how long it can remain in the virtual environment, similar to the life value of a virtual object, and the client destroys the shared virtual prop in response to its usage value being reduced to zero.
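
These three destruction conditions can be read as a small state machine; the sketch below is illustrative only, and the threshold value and all names are assumptions rather than values from the patent:

```python
import time
from enum import Enum

class PropState(Enum):
    ACTIVE = "active"
    DISABLED = "disabled"

class SharedPropLifecycle:
    TIME_THRESHOLD = 300.0  # seconds; illustrative value, not specified by the patent

    def __init__(self, usage_value: float, owner_id: str):
        self.placed_at = time.monotonic()
        self.usage_value = usage_value  # analogous to a virtual object's life value
        self.owner_id = owner_id
        self.state = PropState.ACTIVE

    def tick(self) -> None:
        # Condition 1: the prop has been placed longer than the time threshold.
        if time.monotonic() - self.placed_at > self.TIME_THRESHOLD:
            self.state = PropState.DISABLED
        # Condition 3: the usage value has been reduced to zero.
        if self.usage_value <= 0:
            self.state = PropState.DISABLED

    def destroy(self, requester_id: str) -> None:
        # Condition 2: only the owner (the second virtual object) may destroy it.
        if requester_id == self.owner_id:
            self.state = PropState.DISABLED
```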

According to the method provided by the embodiments of the application, whether the first virtual object and the second virtual object belong to the same team is detected, and when they have a teammate relationship, the user can control the first virtual object to use the shared virtual prop placed in the virtual environment in advance by the second virtual object, which simplifies the interaction between the first virtual object and the second virtual object with respect to virtual props.

Fig. 2 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 120, a server 140, and a second terminal 160.

The first terminal 120 has installed and runs an application program supporting a virtual environment. The application program may be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, an FPS game, a MOBA game, a multiplayer gunfight survival game, a battle-royale shooting game, a Virtual Reality (VR) application program, and an Augmented Reality (AR) program. The first terminal 120 is a terminal used by a first user, who uses the first terminal 120 to control a first virtual object located in the virtual environment to perform activities including, but not limited to, at least one of: adjusting body posture, walking, running, jumping, riding, aiming, picking up, placing virtual props, and attacking other virtual objects. Illustratively, the first virtual object is a first virtual character, such as a simulated character object or an animated character object.

The first terminal 120 is connected to the server 140 through a wireless network or a wired network.

The server 140 includes at least one of a single server, a plurality of servers, a cloud computing platform, and a virtualization center. Illustratively, the server 140 includes a processor 144 and a memory 142, and the memory 142 further includes a receiving module 1421, a control module 1422, and a sending module 1423. The receiving module 1421 is configured to receive requests sent by clients, such as a matching request used to match a plurality of virtual objects into the same virtual environment; the control module 1422 is configured to control rendering of the virtual environment screen; the sending module 1423 is configured to send responses to clients, such as a prompt informing the user of the situation in the virtual environment. The server 140 is used to provide background services for applications that support a three-dimensional virtual environment. Optionally, the server 140 undertakes the primary computational work and the first terminal 120 and the second terminal 160 undertake the secondary computational work; or the server 140 undertakes the secondary computational work and the first terminal 120 and the second terminal 160 undertake the primary computational work; or the server 140, the first terminal 120, and the second terminal 160 perform cooperative computing using a distributed computing architecture.

The server 140 may employ synchronization techniques to make the visual appearance consistent among multiple clients. Illustratively, the synchronization techniques employed by the server 140 include: a state synchronization technique or a frame synchronization technique.

State synchronization technique

In an alternative embodiment based on fig. 2, the server 140 employs a state synchronization technique to synchronize with multiple clients. In the state synchronization technique, as shown in fig. 3, the combat logic runs in the server 140. When a state change occurs to a virtual object in the virtual environment, the server 140 sends the state synchronization result to all clients, such as clients 1 to 10.

In an illustrative example, client 1 sends a request to the server 140 asking for virtual object 1 to place a shared virtual prop (such as a shield turret). The server 140 determines whether virtual object 1 can perform the placement operation and, when it can, obtains the position where the shared virtual prop is placed. The server 140 then sends the placement position to all clients, and each client updates its local data and interface presentation according to it.
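
A server-side sketch of this flow might look as follows; the server and client objects and their methods are assumptions for illustration, not a real networking API:

```python
def handle_place_request(server, client_id, prop_id, position):
    """State synchronization: the server validates, updates, and broadcasts."""
    obj = server.virtual_objects[client_id]
    # The combat logic runs on the server, so the server decides whether
    # virtual object 1 may perform the placement operation.
    if not server.can_place(obj, prop_id, position):
        return
    server.world.place_prop(prop_id, position, owner=obj)
    # Send the state change to all clients (e.g., clients 1 to 10) so that
    # every client updates its local data and interface presentation.
    for client in server.clients:
        client.send({"event": "prop_placed", "prop_id": prop_id,
                     "position": position, "owner_id": obj.object_id})
```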

Frame synchronization technique

In an alternative embodiment based on fig. 2, the server 140 employs a frame synchronization technique to synchronize with multiple clients. In the frame synchronization technique, as shown in fig. 4, the combat logic runs in each client. Each client sends a frame synchronization request to the server, carrying the client's local data changes. After receiving a frame synchronization request, the server 140 forwards it to all clients. Each client, upon receiving the forwarded request, processes it according to its local combat logic and updates its local data and interface presentation.
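
By contrast, a frame synchronization sketch keeps the combat logic on the clients and reduces the server to a relay; again the object model here is an assumption for illustration:

```python
def server_handle_frame_request(server, frame_request):
    # The server runs no combat logic here; it simply forwards each
    # client's local data change to every client, including the sender.
    for client in server.clients:
        client.send(frame_request)

def client_on_frame_request(client, frame_request):
    # Each client applies the same input through its own combat logic,
    # then updates its local data and interface presentation.
    client.combat_logic.apply(frame_request)
    client.refresh_ui()
```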

The second terminal 160 has installed and runs an application program supporting a virtual environment. The application program may be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, an FPS game, a MOBA game, a multiplayer gunfight survival game, a battle-royale shooting game, a Virtual Reality (VR) application program, and an Augmented Reality (AR) program. The second terminal 160 is a terminal used by a second user, who uses the second terminal 160 to control a second virtual object located in the virtual environment to perform activities including, but not limited to, at least one of: adjusting body posture, walking, running, jumping, riding, aiming, picking up, placing virtual props, and attacking other virtual objects. Illustratively, the second virtual object is a second virtual character, such as a simulated character object or an animated character object. Illustratively, the first virtual character object and the second virtual character object belong to the same team. Illustratively, the first virtual character object is a virtual object not equipped with a shield turret, and the second virtual character object is a virtual object equipped with a shield turret.

Optionally, the first virtual character object and the second virtual character object are in the same virtual environment. Optionally, they may belong to the same team, the same organization, or the same camp, have a friend relationship, or have temporary communication rights. Alternatively, they may belong to different camps, different teams, or different organizations, or have a hostile relationship. Illustratively, the first virtual character object is a virtual object not equipped with a shield turret and the second virtual character object is a virtual object equipped with one; the first virtual object and the second virtual object belong to the same team, the second virtual object places a shield turret in the virtual environment, and the shield turret can be used by the first virtual object.

Optionally, the applications installed on the first terminal 120 and the second terminal 160 are the same, or are the same type of application on different operating system platforms (Android or iOS). The first terminal 120 may generally refer to one of a plurality of terminals, and the second terminal 160 may generally refer to another; this embodiment is illustrated with only the first terminal 120 and the second terminal 160. The device types of the first terminal 120 and the second terminal 160 are the same or different and include at least one of: a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer. The following embodiments are illustrated with the terminal being a smartphone.

Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, there may be only one terminal, or dozens or hundreds of terminals, or more. The number of terminals and the device types are not limited in the embodiments of the present application.

Fig. 5 is a flowchart illustrating a control method of a virtual object according to an exemplary embodiment of the present application, which may be applied to the first terminal 120 or the second terminal 160 in the computer system 100 shown in fig. 2 or other terminals in the computer system. The method comprises the following steps:

Step 501, displaying a first user interface, where a shared virtual prop is displayed in the first user interface, and the shared virtual prop is a virtual prop placed by a second virtual object.

An application program supporting a virtual environment runs on the terminal used by the user. When the user runs the application, a corresponding user interface is displayed on the terminal's display screen. The user interface includes the first user interface, on which the shared virtual prop placed by the second virtual object is displayed; the shared virtual prop is placed in the virtual environment, i.e., displayed in the first virtual environment screen. The virtual environment displayed by the virtual environment screen comprises at least one element selected from the group consisting of mountains, flat ground, rivers, lakes, oceans, deserts, sky, plants, buildings, and vehicles.

The first virtual environment screen is a screen obtained by observing the virtual environment from the perspective of the first virtual object.

The perspective refers to the observation angle when observing the virtual environment from the first-person perspective or the third-person perspective of a virtual object. In the embodiments of the application, the virtual object is observed through a camera model in the virtual environment from the first-person perspective.

Optionally, the camera model automatically follows the virtual object in the virtual environment; that is, when the position of the virtual object in the virtual environment changes, the camera model follows, and the camera model always stays within a preset distance of the virtual object in the virtual environment. Optionally, during automatic following, the relative positions of the camera model and the virtual object do not change.

The camera model is a three-dimensional model located around the virtual object in the virtual environment. When the first-person perspective is adopted, the camera model is located near or at the head of the virtual object. When the third-person perspective is adopted, the camera model may be located behind the virtual object and bound to it, or at any position a preset distance away from the virtual object; through the camera model, the virtual object in the virtual environment can be observed from different angles. Optionally, in addition to the first-person and third-person perspectives, the viewing angle includes other perspectives, such as a top-down perspective; when a top-down perspective is adopted, the camera model may be located above the head of the virtual object, providing a view that observes the virtual environment from above. Optionally, the camera model is not actually displayed in the virtual environment, i.e., it does not appear in the virtual environment displayed by the user interface.

In the case where the camera model is located at an arbitrary position a preset distance away from the virtual object, optionally, one virtual object corresponds to one camera model, and the camera model can rotate with the virtual object as the rotation center. For example, the camera model rotates around any point of the virtual object as the rotation center; during the rotation, the camera model not only rotates in angle but also shifts in position, and the distance between the camera model and the rotation center remains unchanged. That is, the camera model moves on the surface of a sphere whose center is the rotation center, where any point of the virtual object may be the head, the torso, or any point around the virtual object, which is not limited in the embodiments of the present application. Optionally, when the camera model observes the virtual object, the center of the camera model's view angle points from the point on the spherical surface where the camera model is located toward the center of the sphere.

Optionally, the camera model may also observe the virtual object at preset angles from different directions of the virtual object.

Schematically, referring to fig. 6, a point in the virtual object 101 is determined as the rotation center 102, and the camera model rotates around the rotation center 102. Optionally, the camera model is configured with an initial position, which is a position above and behind the virtual object (for example, behind the head). Illustratively, as shown in fig. 6, the initial position is position 103; when the camera model rotates to position 104 or position 105, the direction of the camera model's view angle changes with the rotation.
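
The constant-radius rotation described above is ordinary spherical-coordinate math; the following is a minimal sketch (function and parameter names are illustrative assumptions):

```python
import math

def camera_position(center, radius, yaw, pitch):
    """Place the camera on a sphere around the rotation center.

    The distance to the rotation center stays constant, so the camera
    moves on a spherical surface; its view direction points at the center.
    """
    x = center[0] + radius * math.cos(pitch) * math.sin(yaw)
    y = center[1] + radius * math.sin(pitch)
    z = center[2] + radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# Rotating from the initial position toward positions 104/105 changes the
# yaw and pitch angles but never the radius:
pos = camera_position(center=(0.0, 1.7, 0.0), radius=3.0,
                      yaw=math.radians(30.0), pitch=math.radians(15.0))
```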

The embodiments of the present application take as an example a screen in which the first virtual object observes the virtual environment from the first-person perspective.

The shared virtual prop refers to a virtual prop commonly used by virtual objects belonging to the same team or the same camp in the virtual environment. Ownership of the shared virtual prop belongs to the virtual object that placed it, while the right of use belongs to all virtual objects on the same team. Illustratively, the first virtual object and the second virtual object belong to the same team and have a teammate relationship; the second virtual object is equipped with shared virtual prop 1 and places it in the virtual environment, and the first virtual object can also use shared virtual prop 1. Accordingly, if the first virtual object places shared virtual prop 2 in the virtual environment, the second virtual object may also use shared virtual prop 2.

In some embodiments, the shared virtual prop is used by only one virtual object at a time; in other embodiments, the shared virtual prop requires at least two virtual objects to use it cooperatively. The number of virtual objects using the shared virtual prop is not limited. The embodiments of the present application are described taking use by a single virtual object as an example.

Step 502, in response to the first virtual object being located within the peripheral range of the shared virtual prop and the second virtual object having a teammate relationship with the first virtual object, displaying a second user interface, where the second user interface includes an operation control for the shared virtual prop, and the operation control is used for operating the shared virtual prop.

The peripheral range of the shared virtual prop refers to the range centered on the shared virtual prop with a preset distance as its radius. When the shared virtual prop has been placed in the virtual environment by the second virtual object and the second virtual object is not operating it, the user controls the first virtual object to move into the peripheral range of the shared virtual prop; because the first virtual object and the second virtual object belong to the same team, a second user interface is displayed on the client used by the user, with an operation control for operating the shared virtual prop displayed on it.
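
For the circular case, the peripheral-range test reduces to a distance check on the horizontal plane; a sketch under that assumption:

```python
import math

def in_peripheral_range(prop_pos, object_pos, preset_distance):
    """True if the object lies within the preset radius around the prop."""
    dx = object_pos[0] - prop_pos[0]
    dz = object_pos[2] - prop_pos[2]  # compare on the horizontal plane only
    return math.hypot(dx, dz) <= preset_distance
```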

Step 503, in response to receiving a trigger operation on the operation control, controlling the first virtual object to use the shared virtual prop.

When the terminal used by the user is a terminal with a touch display screen, such as a smartphone or a tablet computer, the trigger operation includes at least one of the following operations: a single-click operation, a double-click operation, a long-press operation, a drag operation, a slide operation, a preset operation gesture, and combinations thereof. In some embodiments, the operation control may also be named a use control or the like; the name of the control is not limited in the embodiments of the present application.

When the terminal used by the user is a terminal connected to an external input device, such as a desktop computer or a notebook computer, the trigger operation includes an operation generated by the user using the external input device; for example, the user clicks the operation control displayed on the second user interface using a mouse.

When the client receives the trigger operation on the operation control, it controls the first virtual object to use the shared virtual prop. Illustratively, the shared virtual prop is a shield turret; when the user clicks the operation control on the second user interface, a picture of the first virtual object rotating the shield turret is displayed on the second user interface, and the first virtual object uses the shield of the shield turret to resist attacks from enemy virtual objects.

In summary, in the method provided in this embodiment, with a shared virtual prop placed in the virtual environment by the second virtual object, when a first virtual object belonging to the same team as the second virtual object is located within the peripheral range of the shared virtual prop, the user can control the first virtual object to use the shared virtual prop through the operation control on the second user interface. The first virtual object does not need to be equipped with the shared virtual prop, and the second virtual object does not need to discard it into the virtual environment; the first virtual object can use it directly. This simplifies the interaction between virtual objects with respect to virtual props and makes the way virtual objects on the same team use virtual props more flexible.

Fig. 7 is a flowchart illustrating a control method of a virtual object according to another exemplary embodiment of the present application. The method may be applied in the first terminal 120 or the second terminal 160 in the computer system 100 as shown in fig. 2 or in other terminals in the computer system. The method comprises the following steps:

Step 701, displaying a first user interface, wherein a shared virtual prop is displayed in the first user interface, and the shared virtual prop is a virtual prop placed by a second virtual object.

As shown in fig. 8, a first virtual environment screen is displayed on the first user interface 30, and the shared virtual prop 10 is displayed on the first virtual environment screen. The first virtual environment screen is the screen obtained by observing the virtual environment from the perspective of the first virtual object, and it also includes a virtual environment element 33 (a building stairway). The shared virtual prop 10 is a virtual prop that the second virtual object has placed. Since the second virtual object and the first virtual object belong to the same team, the first virtual object can also use the shared virtual prop 10. Illustratively, the shared virtual prop 10 is a shield turret in an unused state, with both the shield part and the turret part closed.

Step 702, in response to the first virtual object being located in the detection area, detecting whether the first virtual object and the second virtual object have a teammate relationship.

In some embodiments, the shared virtual prop corresponds to a detection area for detecting the relationship between a virtual object approaching the shared virtual prop and the virtual object that placed it. As shown in fig. 1, when the virtual object 11 is close to the shared virtual prop 10, the detection area 12 is used to detect the relationship between the virtual object 11 and the virtual object that placed the shared virtual prop 10.

In some embodiments, the detection area 12 is rectangular and located on the side from which the virtual object 11 uses the shared virtual prop. For example, the shared virtual prop 10 is a shield turret; when the virtual object 11 uses the shield turret, the virtual object 11 stands behind the shield to avoid being attacked, and the detection area 12 is located in the area corresponding to the rear of the shield.

In other embodiments, the detection area 12 is a circular area centered on the shared virtual prop 10 with a preset distance as its radius. For example, the shared virtual prop 10 is a protective cover (hemispherical in shape); the virtual object 11 enters the circular area where the protective cover meets the ground to avoid damage, and the detection area 12 matches the shape of that area.

It should be noted that the detection area 12 is hidden on the user interface; that is, the user cannot see the detection area 12 on the user interface.

The following describes how the detection area is used to detect whether virtual objects have a teammate relationship.

In some embodiments, the shared virtual prop corresponds to a collision box, and the collision box corresponds to the detection area. Step 702 may be replaced with the following steps:

Step 7021, in response to the three-dimensional model of the first virtual object interacting with the trigger of the collision box, generating first trigger information, the first trigger information indicating that the first virtual object has entered the detection area.

The collision box (Box Collider) is a virtual model, invisible on the user interface, arranged on a target area. Schematically, the shape of the collision box simulates the boundary of the target area; the closer the shape and size of the collision box are to the extent of the target area, the more accurate the collision box's detection results. Illustratively, for ease of calculation, the shape of the collision box is a regular, computation-friendly shape, or a combination of several such shapes, such as cuboids, cubes, cylinders, spheres, cones, and combinations thereof. The collision box is used to detect collisions between a three-dimensional model in the virtual environment (e.g., a virtual object or a virtual environment element) and the collision box. When a collision with a three-dimensional model occurs, the collision box acquires information about the three-dimensional model (such as the object information or virtual object information corresponding to the model), the collision point, the collision direction, the rebound direction, and so on. Whether the collision box collides with a three-dimensional model is determined by detecting whether the collision box and the model intersect.

Taking the shared virtual prop 10 as a shield turret as an example, as shown in fig. 9, the client first obtains the position of the center point 35 (centerPos) of the shield turret, then generates a cuboid with preconfigured length, width, and height (m_boxHalfExtents), using the center point as the cuboid's centroid; this cuboid is the collision box.
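
The following is a sketch of such an axis-aligned box and its containment test, using the centerPos and m_boxHalfExtents values named above; the class itself is an illustration, not an engine API:

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class BoxTrigger:
    center: Vec3        # centerPos: center point 35 of the shield turret
    half_extents: Vec3  # m_boxHalfExtents: half the length/width/height

    def contains(self, point: Vec3) -> bool:
        # The point is inside the axis-aligned box if, on every axis,
        # it lies within the half extent of the center.
        return all(abs(p - c) <= h
                   for p, c, h in zip(point, self.center, self.half_extents))
```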

The collision box is the carrier of a trigger (IsTrigger), which is an attribute of the collision box. Trigger detection is used either to detect contact with an object without letting collision detection affect the object's movement, or to detect whether an object passes through a certain area of space. For example, a collision box can simulate the motion of a table-tennis ball falling to the ground and bouncing, while a trigger can simulate a door that opens automatically when a person approaches it.

Illustratively, with the center point 35 as its center, a collision box twice the actual size of the shield turret is generated, and the trigger attribute of the collision box is set to true (True); that is, physical collision response is turned off, and the trigger is used to detect whether a virtual object is close to the shield turret.

When the three-dimensional model 34 of the first virtual object interacts with the trigger of the collision box of the shared virtual prop 10, first trigger information is generated, and the client determines from the first trigger information that the first virtual object has entered the detection area 12.

Step 7022, determining the identifier of the first virtual object according to the first trigger information.

The client determines, according to the first trigger information, the identifier of the virtual object that entered the detection area 12, and determines from that identifier that the entering virtual object is the first virtual object.

Step 7023, in response to the identifier of the first virtual object and the identifier of the second virtual object belonging to the same team, determining that the first virtual object and the second virtual object have a teammate relationship.

Illustratively, the client is preconfigured with a team list for the battle, which includes information about all virtual objects participating in the battle, such as the identifier of each virtual object, its level, and the prop identifiers of the virtual props it is equipped with. The client determines, according to the identifier of the first virtual object, the team the first virtual object belongs to and its teammates. Illustratively, the identifier of the first virtual object belongs to team k, and the identifier of the second virtual object also belongs to team k, so the client determines that the first virtual object and the second virtual object have a teammate relationship.
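
Putting steps 7021 to 7023 together, a sketch of the client-side handler might look as follows; the trigger-information fields, team list layout, and callback are hypothetical:

```python
def on_trigger_enter(trigger_info, team_list, owner_id, show_operation_control):
    """Handle first trigger information produced by the collision box trigger.

    show_operation_control is a callback that displays the second user
    interface for the entering virtual object's client.
    """
    entrant_id = trigger_info["object_id"]        # step 7022: identify the entrant
    entrant_team = team_list[entrant_id]["team"]  # preconfigured battle roster
    owner_team = team_list[owner_id]["team"]
    if entrant_team == owner_team:                # step 7023: same team k => teammates
        show_operation_control(entrant_id)        # step 703: show the second UI
    # Otherwise no operation control is displayed for this client.
```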

Step 703, in response to the first virtual object having a teammate relationship with the second virtual object, displaying a second user interface.

When the client determines that the first virtual object and the second virtual object that placed the shared virtual prop are teammates, an operation control for the shared virtual prop is displayed on the second user interface. As shown in fig. 10 (a), the shared virtual prop 10 and an operation control 36 for operating it are displayed on the second user interface.

In some embodiments, if the second virtual object (i.e., the virtual object that placed the shared virtual prop) approaches the shared virtual prop 10, the second user interface displayed on the client corresponding to the second virtual object is as shown in fig. 10 (b). On the second user interface 40, the shared virtual prop 10, the operation control 36, and a pick-up control 37 are displayed. Unlike the second user interface displayed by the client corresponding to the first virtual object, since the second virtual object is the virtual object that placed the shared virtual prop 10, i.e., its owner, the second virtual object has the right to "pick up" the shared virtual prop 10 from the virtual environment, so the pick-up control 37 is displayed on the second user interface 40. In other embodiments, the pick-up control 37 may have a different name; the names of the controls are not limited in the embodiments of the present application.

It will be appreciated that the second user interface 40 includes a view of the virtual environment from the perspective of the second virtual object.

Step 704, in response to receiving the trigger operation on the operation control, controlling the first virtual object to use the shared virtual prop.

Illustratively, the user clicks the operation control 36, and the client controls the first virtual object to operate the shared virtual prop 10. For example, the shared virtual prop 10 is a shield turret; when the user clicks the operation control 36, the shield of the shield turret unfolds and the turret stands upright. The shield turret is then in a use state.

Illustratively, after the user clicks the operation control 36, a user interface 41 shown in fig. 11 is displayed, and the shared virtual item 10 is displayed in the user interface 41; at this time, the shared virtual item 10 is in the use state. A firing control 42 and an aiming control 43 are also displayed on the user interface 41. Taking the shared virtual item 10 as a shield turret as an example, when the user clicks the firing control 42, the client controls the turret of the shield turret to fire, and a picture of the first virtual object holding the control handle of the shield turret is displayed on the user interface 41; that is, the first virtual object is using the shared virtual item. When the user clicks an exit control 44, the first virtual object actively exits the shared virtual item 10, and the shared virtual item 10 is not disabled after the exit.

In summary, in the method provided in this embodiment, a detection area is set in the peripheral range of the shared virtual item, and when the first virtual object is located in the detection area, a user interface for operating the shared virtual item is displayed, so that the first virtual object can operate the shared virtual item directly. The shared virtual item does not need to be equipped by the first virtual object, nor does it need to be discarded into the virtual environment by the second virtual object, yet the first virtual object can still use it. This simplifies the interaction between virtual objects with respect to virtual items and makes the way virtual objects of the same team use virtual items more flexible.

By setting a collision box corresponding to the detection area for the shared virtual prop, when the three-dimensional model of the first virtual object interacts with the trigger of the collision box, the client can detect the relationship between the first virtual object and the second virtual object, and when they are teammates, the user interface for operating the shared virtual prop is displayed. The first virtual object can thus operate the shared virtual prop directly, which simplifies the interaction between virtual objects with respect to virtual props and makes the way virtual objects of the same team use virtual props more flexible.

In the optional embodiment based on fig. 7, before the first virtual object operates the shared virtual item, the second virtual object needs to place the shared virtual item first, and the shared virtual item must be in an unoperated state before the first virtual object can operate it. The process of placing the shared virtual item by the second virtual object and the unoperated state of the shared virtual item are described below.

Fig. 12 illustrates a control method for a virtual object according to another embodiment of the present application, which can be applied to the first terminal 120 or the second terminal 160 in the computer system 100 shown in fig. 2 or other terminals in the computer system. The method comprises the following steps:

1. The second virtual object places a shared virtual prop.

Step 710, displaying a third user interface, where the third user interface includes a second virtual environment picture and a placement control of the shared virtual prop, and the second virtual environment picture is a picture of the virtual environment observed from the perspective of the second virtual object.

As shown in fig. 13, the second virtual environment picture and a placement control 51 of the shared virtual prop are displayed on the third user interface 50. Illustratively, the placement control 51 is displayed on the third user interface 50 in a highlighted or blinking form to prompt the user that the second virtual object can be controlled to place the shared virtual prop.

Step 720, in response to receiving the placement operation on the placement control, obtaining a pre-placement position of the shared virtual item in the virtual environment.

Illustratively, after the user clicks the placement control 51, the client obtains the pre-placement position of the shared virtual item.

A pre-placement position refers to a position in the virtual environment where the shared virtual item is tentatively planned to be placed. Illustratively, the shared virtual prop is a shield turret; when the user has not yet confirmed the placement position of the shield turret, the user is prompted, by a blue texture overlaid on the shield turret, that the placement position has not been confirmed and that the shield turret can be placed at the currently selected position. Alternatively, the prompt takes the form of a green texture overlaid on the shield turret. The color of the texture is not limited in the embodiments of the present application.

In response to the shared virtual prop not meeting the placement condition, a fifth user interface is displayed, where the fifth user interface includes prompt information used to indicate that the shared virtual prop cannot be placed.

As shown in fig. 14, when the position selected by the user is not suitable for placing the shared virtual item, for example, when the user places the shared virtual item 10 on a staircase, the user is prompted, by a red texture overlaid on the shield turret, that placing the shield turret at the currently selected position is prohibited. In some embodiments, a prompt message 53 is also displayed on the user interface, for example: "Cannot be placed here." The embodiments of the present application do not limit the specific content of the prompt message.

Step 730, in response to the pre-placement position meeting the placement condition, displaying a fourth user interface, where the fourth user interface includes the placed shared virtual item and a control identifier corresponding to the shared virtual item.

The placement conditions include at least one of the following conditions (a sketch of the combined check follows the list):

1) The terrain in the virtual environment belongs to flat terrain;

such as a flat grass surface, or a flat roof, or a flat hill, etc.

2) The area occupied by the shared virtual prop is smaller than the area of the placement position;

For example, the area occupied by the shared virtual item is 10 (in virtual-environment units), and the area corresponding to the placement position is 20 (in virtual-environment units).

3) No virtual environment element exists in the placement area corresponding to the shared virtual prop.

For example, as shown in fig. 14, the virtual environment element "stairs" exists in the area where the shared virtual prop 10 is placed, and therefore the placement area does not satisfy the placement condition.
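A minimal sketch of a combined placement check under these conditions (the flatness helper and the area parameters are illustrative assumptions; only Physics.CheckBox appears in the embodiment further below):

using UnityEngine;

public class PlacementValidator : MonoBehaviour
{
    public Vector3 boxHalfExtents;   // half-extents of the prop's collision box
    public LayerMask blockingLayers; // layers holding virtual environment elements

    // Returns true when the pre-placement position meets all three conditions.
    public bool CanPlace(Vector3 position, Quaternion rotation, float propArea, float slotArea)
    {
        bool flat = IsFlatTerrain(position);     // condition 1: flat terrain
        bool fits = propArea < slotArea;         // condition 2: prop area smaller than the position's area
        bool empty = !Physics.CheckBox(position, boxHalfExtents, rotation,
                                       blockingLayers); // condition 3: no environment element in the area
        return flat && fits && empty;
    }

    // Illustrative flatness test: sample the surface normal under the position
    // and treat near-vertical normals as flat ground.
    private bool IsFlatTerrain(Vector3 position)
    {
        if (Physics.Raycast(position + Vector3.up, Vector3.down, out RaycastHit hit, 2f))
            return Vector3.Angle(hit.normal, Vector3.up) < 5f;
        return false;
    }
}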

When the placement position of the shared virtual item satisfies the placement condition, the placed shared virtual item 10 and the identifier 32 of the shared virtual item are displayed on the user interface, as shown in fig. 8.

2. The shared virtual item is in an unoperated state.

Step 740, in response to the second virtual object not being in the detection area, switching the shared virtual item to the unoperated state.

The above step 740 may be replaced by the following steps:

Step 7401, in response to the three-dimensional model of the second virtual object interacting with the trigger of the collision box, generating second trigger information, where the second trigger information includes that the second virtual object exits the detection area.

Similar to the detection of whether the first virtual object and the second virtual object have a teammate relationship, the trigger information generated after the trigger of the collision box interacts with the three-dimensional model of the second virtual object is used to determine that the second virtual object has exited the detection area corresponding to the shared virtual prop.

Step 7402, determining that the shared virtual item is in an unoperated state according to the second trigger information.

After the second virtual object exits the detection area, the client determines that the shared virtual item is in the unoperated state, that is, no virtual object is operating the shared virtual item.

It will be appreciated that any virtual object entering or leaving the detection area can be detected using the trigger of the collision box. For example, the shared virtual item is placed by a virtual object a; when the virtual object a exits the detection area of the shared virtual item, trigger information 1 is generated, and the client determines from it that the shared virtual item is in the unoperated state; when a virtual object b enters the detection area, trigger information 2 is generated, and the client determines that the shared virtual item is being operated by the virtual object b; when the virtual object b exits the detection area, trigger information 3 is generated, and the client determines that the shared virtual item is in the unoperated state again; when a virtual object c enters the detection area, trigger information 4 is generated, and the client determines that the shared virtual item is being operated by the virtual object c. In this way, the virtual object that operated the shared virtual item immediately before the current operator may be any virtual object in the same team.
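A sketch of this bookkeeping with trigger callbacks (assuming, as the embodiment suggests, that the collision box is a trigger collider attached to the shared prop; the VirtualObject component holding the identifier is a hypothetical helper):

using UnityEngine;

// Hypothetical component attached to each virtual object's three-dimensional model.
public class VirtualObject : MonoBehaviour { public int Id; }

public class SharedPropTrigger : MonoBehaviour
{
    // Identifier of the virtual object currently operating the prop; -1 means unoperated.
    public int operatorId = -1;

    void OnTriggerEnter(Collider other) // trigger information: an object entered the detection area
    {
        var vo = other.GetComponent<VirtualObject>();
        if (vo != null && operatorId == -1)
            operatorId = vo.Id; // the prop is now operated by this object
    }

    void OnTriggerExit(Collider other) // trigger information: an object exited the detection area
    {
        var vo = other.GetComponent<VirtualObject>();
        if (vo != null && vo.Id == operatorId)
            operatorId = -1; // the operator left, the prop returns to the unoperated state
    }
}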

In summary, in the method provided in this embodiment, when the pre-placement position at which the second virtual object places the shared virtual item meets the placement condition, a user interface for controlling the shared virtual item is displayed. The second virtual object can clearly determine the placement position of the shared virtual prop, so that the shared virtual prop is placed at a suitable position.

By setting multiple placement conditions, placement of the shared virtual prop under different circumstances is supported, so that the user can control the second virtual object to place the shared virtual prop according to the actual situation in the virtual environment.

When the shared virtual prop does not meet the placement condition, a prompt message displayed on the user interface prompts the user to control the second virtual object to change the placement position, so that the user can change the placement position immediately, which improves the operation efficiency of the shared virtual prop.

In an alternative embodiment based on fig. 7, the shared virtual item is destructible, and the destroying manner of the shared virtual item includes at least one of the following manners:

1. When a certain condition is met, the client automatically destroys the shared virtual prop.

In response to the placement time of the shared virtual prop exceeding a time threshold, the shared virtual prop is switched to a failure state.

Illustratively, the time threshold is 2 minutes. Timing starts from the moment the shared virtual item is placed in the virtual environment, and when the placement time exceeds 2 minutes, the client switches the shared virtual item to the failure state.

In response to the usage value of the shared virtual item decreasing to zero, the shared virtual item is switched to the failure state.

The usage value is used to characterize the duration of the shared virtual item in the virtual environment, and in some embodiments is also referred to as a health bar. The usage value of the shared virtual prop is similar to the life value of a virtual object: when the shared virtual prop is attacked, the usage value is reduced to different degrees according to the attack mode. When the usage value is reduced to zero, the shared virtual prop fails and no virtual object can operate it.
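The two automatic failure conditions can be sketched together as follows (the 2-minute threshold follows the example above; the field names and the damage entry point are illustrative):

using UnityEngine;

public class SharedPropLifetime : MonoBehaviour
{
    public float timeThreshold = 120f; // 2 minutes, counted from placement
    public float usageValue = 100f;    // the prop's health bar
    public bool Failed { get; private set; }
    private float placedAt;

    void Start() { placedAt = Time.time; } // timing starts when the prop is placed

    void Update()
    {
        // Switch to the failure state when the placement time exceeds the
        // threshold or the usage value has dropped to zero.
        if (!Failed && (Time.time - placedAt > timeThreshold || usageValue <= 0f))
            Failed = true;
    }

    // Called when the prop is attacked; the reduction depends on the attack mode.
    public void TakeDamage(float amount) { usageValue -= amount; }
}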

2. The virtual object that placed the shared virtual prop performs a destroying operation on it.

In response to the shared virtual prop receiving a failure operation, the shared virtual prop is switched to the failure state, where the failure operation is generated by the second virtual object.

For example, a destruction control (or a failure control, etc.) is displayed on the user interface, and after the user clicks the destruction control, the client switches the shared virtual item to a failure state.

It is understood that the above-mentioned means can be implemented individually, or in any combination of two or all of them.

Illustratively, the shared virtual prop is a shield turret; when the shield turret is destroyed, a destruction animation is played, for example, the barrel of the turret drops off first, and then the whole shield turret explodes.

In an alternative embodiment based on fig. 7, when there is an enemy virtual object approaching the shared virtual item, the method for controlling the virtual object further includes the following steps:

Step 810, in response to a third virtual object being located in the peripheral range of the shared virtual item and the third virtual object having a hostile relationship with the second virtual object, controlling the shared virtual item to reduce the life value of the third virtual object.

In some embodiments, step 810 is performed after step 701; in other embodiments, step 810 is performed after step 730, i.e., after the second virtual object places the shared virtual item.

In response to the third virtual object being located in the detection area corresponding to the shared virtual item, the relationship between the third virtual object and the second virtual object is detected, and in response to the third virtual object having a hostile relationship with the second virtual object, the shared virtual item is controlled to reduce the life value of the third virtual object.

Similar to the teammate detection for the first virtual object, the hostile relationship of the third virtual object is also detected by the trigger of the collision box.

In response to the three-dimensional model of the third virtual object interacting with the trigger of the collision box, third trigger information is generated, where the third trigger information includes that the third virtual object enters the detection area; the identifier of the third virtual object is determined according to the third trigger information; and in response to the identifier of the third virtual object belonging to a different team than the identifier of the second virtual object, it is determined that the third virtual object has a hostile relationship with the second virtual object.

When the third virtual object has a hostile relationship with the second virtual object, the client controls the shared virtual prop to attack the third virtual object. In some embodiments, when the shared virtual item is not being operated by any virtual object, the client automatically controls the shared virtual item to attack the third virtual object and performs a tracking attack according to the moving track of the third virtual object. In other embodiments, when the shared virtual item is being controlled by the second virtual object or by another virtual object belonging to the same team as the second virtual object, a message is sent to the client corresponding to the virtual object controlling the shared virtual item. The message prompts whether to attack the third virtual object automatically; if automatic attack is selected, the client controls the shared virtual item to attack the third virtual object automatically. For example, a message "Attack automatically?" is displayed; when the user selects "yes", the client controls the shared virtual prop to attack the third virtual object automatically, and when the user selects "no", the user manually controls the virtual object to attack the third virtual object using the shared virtual prop.
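Continuing the SharedPropTrigger sketch from earlier, this decision could be appended to that class as follows (AutoAttack and PromptOperator are hypothetical placeholders for the tracking attack and the "attack automatically?" message; neither name comes from the embodiment):

// Inside SharedPropTrigger: react when trigger information reports an entering object.
void OnObjectEntered(int enteredId, int ownerId, TeamList teams)
{
    if (teams.AreTeammates(enteredId, ownerId))
        return; // teammate: the operation-control flow above applies instead

    if (operatorId == -1)
        AutoAttack(enteredId);                 // unoperated: attack and track automatically
    else
        PromptOperator(operatorId, enteredId); // operated: ask the operating client first
}

void AutoAttack(int targetId) { /* hypothetical tracking-attack routine */ }
void PromptOperator(int opId, int targetId) { /* hypothetical prompt to the operating client */ }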

Step 820, in response to the life value of the third virtual object decreasing to zero, converting the virtual item equipped by the third virtual object into a shared virtual item corresponding to the virtual item.

After the third virtual object is attacked by the shared virtual prop, its life value, which characterizes the survival time of a virtual object in the virtual environment, keeps decreasing. When the life value of the third virtual object is reduced to zero, the third virtual object ends its life in the virtual environment, the virtual props equipped by the third virtual object are dropped in the virtual environment, and the client converts them into corresponding shared virtual props. Illustratively, the virtual props dropped by the third virtual object include a pistol, a dagger, and a sniper rifle; the client converts the pistol into a shared pistol, the dagger into a shared dagger, and the sniper rifle into a shared sniper rifle. Ownership of the converted shared virtual props is assigned to the second virtual object, and the user controlling the second virtual object can choose whether to place a newly obtained shared virtual prop in the virtual environment. If the user chooses to place it, the placement position is selected and the prop is placed using the method provided by this embodiment; if the user chooses not to place it, the newly acquired shared virtual item is stored in the backpack grid of the second virtual object.

In some embodiments, the client converts only part of the virtual props dropped by the third virtual object into shared virtual props, preferentially converting the props with higher damage values in descending order of damage value.
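The priority rule can be sketched as a descending sort over damage values (the DroppedProp structure and the maxCount cap are illustrative assumptions):

using System.Collections.Generic;
using System.Linq;

public struct DroppedProp { public string Name; public float DamageValue; }

public static class PropConversion
{
    // Picks up to maxCount dropped props to convert into shared props,
    // preferring the props with higher damage values.
    public static List<DroppedProp> PickPropsToShare(IEnumerable<DroppedProp> dropped, int maxCount)
    {
        return dropped.OrderByDescending(p => p.DamageValue)
                      .Take(maxCount)
                      .ToList();
    }
}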

In conclusion, when the client detects that an enemy virtual object is close to the shared virtual prop, attacking the enemy virtual object yields more types of shared virtual props, which increases the intensity of battle engagement and improves the efficiency of obtaining shared virtual props.

In some embodiments, the shared virtual items include defensive virtual items, such as a hemispherical shield that can accommodate multiple virtual objects of the same team and slows the rate of decline of their life values. The virtual object that placed the shield may retract it, and all virtual objects in the whole team may move the shield to a new position.

In other embodiments, the shared virtual props further include attack-type virtual props, such as heavy machine guns or cannons. The virtual object that placed such a prop can retrieve it or change its placement position, and all virtual objects in the whole team can use it.

The control method of the virtual object provided by the embodiments of the present application is described below by taking a game in which the shared virtual item is a shield turret as an example. FIG. 15 is a flowchart illustrating a game-based control method for a virtual object according to an exemplary embodiment of the present application. The method may be applied in the first terminal 120 or the second terminal 160 in the computer system 100 shown in fig. 2, or in other terminals in the computer system. The method comprises the following steps:

step 1501, start.

Taking the terminal as an example of a smart phone, the user enters the game application program, and the smart phone displays the user interface corresponding to the game application program.

Step 1502, determine whether the kill-streak reward points have reached 300 points.

In some embodiments, before a match begins, the user selects a reward to use in the match; the reward is used to equip the user-controlled virtual object with a corresponding virtual prop. The kill-streak reward means that the virtual object kills several virtual objects in the virtual environment in succession, earning a certain number of points for each kill. As shown in fig. 16, a kill-streak skill control 61 is displayed on the user hall interface; after the user clicks the kill-streak skill control 61, a kill-streak reward list is displayed, and the user can click to select the equipment to use.

Illustratively, if the kill-streak reward points reach 300 points, the virtual object can be equipped with a shield turret; if they do not reach 300 points, go to step 1503.

Illustratively, as shown in fig. 13, a progress bar 52 is displayed on the user interface 50; the progress bar 52 indicates the accumulation of kill-streak points, and when the kill-streak reward points reach 300 points, the control 51 is highlighted to prompt the user to equip the virtual object with the shield turret.

In other embodiments, the kill-streak reward may be replaced by other conditions, for example, the number of virtual objects killed exceeds 5, or the number of teammates healed reaches 5, and so on. This is not limited in the embodiments of the present application.
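A sketch of this gating logic, using the 300-point threshold from the example (the per-kill point value is an illustrative assumption):

public class KillStreakReward
{
    public int Points { get; private set; }

    // Called once per kill; the point value per kill is illustrative.
    public void OnKill(int pointsPerKill = 100)
    {
        Points += pointsPerKill;
    }

    // Step 1502: the shield turret can be equipped once 300 points are reached.
    public bool CanEquipShieldTurret => Points >= 300;
}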

In step 1503, the shield turret cannot be equipped.

If the kill-streak reward points do not reach 300 points, the virtual object cannot be equipped with a shield turret.

Step 1504, determine whether the pre-placement position of the shield turret is appropriate.

The game application determines whether the collision box of a three-dimensional model and the collision box of the shield turret interact in the virtual environment. If there is interaction, the pre-placement position of the shield turret is unreasonable and the shield turret cannot be released; if there is no interaction, the pre-placement position is reasonable and the shield turret can be placed there. Illustratively, the placement-check call is as follows:

bool find = Physics.CheckBox(centerPos, m_BoxHalfExtents, m_Owner.Rotation, layerMask);

where find is the Boolean result, Physics.CheckBox performs physical collision detection with a box, centerPos is the center point of the three-dimensional model, m_BoxHalfExtents contains the half-length, half-width, and half-height of the collision box, m_Owner.Rotation is the rotation of the collision box, and layerMask restricts the check to colliders in the selected layers.

If the pre-placement position is appropriate, go to step 1506; if the pre-placement is not appropriate, step 1505 is entered.

At step 1505, the user is prompted to change placement positions.

Step 1506, place the shield turret.

Step 1507, determine whether a virtual object is close to the shield turret.

If the virtual object is close to the shield turret, go to step 1509; if the virtual object is not close to the shield turret, proceed to step 1508.

Step 1508, no change.

Step 1509, whether the shield turret is in an unmanned state.

If the shield turret is in an unmanned control state, entering step 1511; if the shield turret is in the manned state, the process proceeds to step 1510.

Step 1510, no change.

Step 1511, determine whether the virtual object belongs to the same camp as the virtual object that placed the shield turret.

If the virtual object close to the shield turret and the virtual object that placed the shield turret belong to the same camp, go to step 1513; if they belong to different camps, go to step 1512.

Step 1512, no change.

In step 1513, the operation button is displayed.

When the virtual object close to the shield turret and the virtual object that placed the shield turret belong to the same camp, a button for controlling the shield turret is displayed.

Step 1514, determine whether the virtual object is the virtual object that placed the shield turret.

If the virtual object close to the shield turret is the virtual object that placed the shield turret, go to step 1516; otherwise, go to step 1515.

In step 1515, only the operation buttons are displayed.

A virtual object that is a teammate of the virtual object that placed the shared virtual prop can only operate the shield turret, for example, rotating the shield turret or attacking other virtual objects with its turret. Illustratively, when the virtual object operates the shield turret, the user interface includes a firing button, an aiming button, and a leave button. When the virtual object operates the shield turret, as shown in fig. 17, a shield 63 of the shield turret is deployed to resist attacks; when the user clicks the aiming button, the camera model bound to the virtual object zooms in, displaying the shield turret with a reduced field of view (FOV), and the user aims at the target through the aiming area 62.

Step 1516, display pick button.

The virtual object that placed the shared virtual item can pick it up; that is, the placed shared virtual item is "picked up" and can be placed at another location.

Step 1517, determine whether the life cycle of the shield turret has ended.

When the user controls the virtual object to operate the shield turret, if the shield turret fails, the virtual object leaves the shield turret, and the virtual prop used by the virtual object is switched back to the virtual prop used last time. For example, the virtual object is using a pistol and then operates a shield turret; if the shield turret is destroyed during the operation, the virtual prop used by the virtual object is switched back to the pistol.
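A sketch of this switch-back behaviour (the weapon bookkeeping and the prop names are illustrative, not taken from the embodiment):

public class WeaponSlots
{
    private string currentProp = "pistol";
    private string lastProp = "pistol";

    public void Equip(string prop)
    {
        lastProp = currentProp;
        currentProp = prop;
    }

    // Called when the shield turret fails while being operated: the virtual
    // object switches back to the virtual prop used last time.
    public void OnTurretFailed()
    {
        if (currentProp == "shield turret")
            currentProp = lastProp;
    }
}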

Step 1518, the shield turret continues to be used.

Step 1519, the shield turret is destroyed.

The above steps 1502 to 1519 can be performed repeatedly in one match until the match ends.

The "no change" in the above embodiments means that the interface display of the user is not changed. Illustratively, during the game, when the virtual object performs the corresponding action, the client plays the voice corresponding to the action, for example, the voice when the shield gun turret is available: "Shield Turret prepared", "Shield Turret isready! "; placing a shield turret: "Shield Turret deployed", "Shield Turret deployed! "; own party uses the shield turret speech: "my party Shield Turret is deployed completely", "Friendly Shield Turret deployed! "; the enemy uses the shield turret speech: "enemy Shield Turret is deployed", "assets' Shield Turret deployed! "and the like.

In summary, in the method provided in this embodiment, through the shared virtual item placed in the virtual environment by a virtual object, when virtual objects belonging to the same team are located in the peripheral range of the shared virtual item, the user can control a virtual object to use the shared virtual item through the operation control on the user interface. This simplifies the interaction between virtual objects with respect to virtual items and makes the way virtual objects of the same team use virtual items more flexible.

The above embodiments describe the above method based on the application scenario of the game, and the following describes the above method by way of example in the application scenario of military simulation.

Simulation technology is a model technology that reflects system behaviors or processes by simulating real-world experiments using software and hardware.

The military simulation program is a program specially constructed for military application by using a simulation technology, and is used for carrying out quantitative analysis on sea, land, air and other operational elements, weapon equipment performance, operational actions and the like, further accurately simulating a battlefield environment, presenting a battlefield situation and realizing the evaluation of an operational system and the assistance of decision making.

In one example, soldiers establish a virtual battlefield at the terminal where the military simulation program is located and fight in teams. A soldier controls a virtual object in the virtual battlefield environment to perform at least one of the following operations: standing, squatting, sitting, lying on the back, lying on the stomach, lying on the side, walking, running, climbing, driving, shooting, throwing, attacking, being injured, reconnaissance, and close combat. The virtual battlefield environment comprises at least one natural form of flat ground, mountains, plateaus, basins, deserts, rivers, lakes, oceans and vegetation, as well as site forms such as buildings, vehicles, ruins, and training grounds. The virtual objects include virtual characters, virtual animals, cartoon characters, etc.; each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies a part of the space in the three-dimensional virtual environment.

Based on the above, in one example, the virtual battlefield includes two competing teams: a first team including a virtual object a controlled by soldier A and a virtual object b controlled by soldier B, and a second team including a virtual object c controlled by soldier C and a virtual object d controlled by soldier D.

Illustratively, the virtual object a places a shared virtual item 1 in the virtual environment. When the virtual object a is not operating the shared virtual item 1, the virtual object b may operate the shared virtual item 1, and the virtual object a may also change the placement position of the shared virtual item 1. However, the virtual object c and the virtual object d may not operate the shared virtual item 1. When the usage value of the shared virtual item 1 is reduced to zero, the shared virtual item is switched to the failure state.

Illustratively, the virtual object c places a shared virtual item 2 in the virtual environment; the shared virtual item 2 and the shared virtual item 1 belong to different types of virtual items. When the virtual object c is not operating the shared virtual item 2, the virtual object d can operate the shared virtual item 2, and the virtual object c can also change the placement position of the shared virtual item 2. When the virtual object c performs a failure operation on the shared virtual prop 2, the shared virtual prop 2 is switched to the failure state.

In summary, in this embodiment, the control method of the virtual object is applied to a military simulation program. Soldiers in the same team can use the shared virtual prop, which improves the degree of cooperation between soldiers, realistically simulates the actual battlefield, and trains the soldiers better.

The following are embodiments of the apparatus of the present application, and for details that are not described in detail in the embodiments of the apparatus, reference may be made to corresponding descriptions in the above method embodiments, and details are not described herein again.

Fig. 18 is a schematic structural diagram illustrating a control apparatus for a virtual object according to an exemplary embodiment of the present application. The apparatus can be implemented as all or a part of a terminal by software, hardware or a combination of both, and includes:

a display module 1810, configured to display a first user interface, where a shared virtual item is displayed in the first user interface, and the shared virtual item is a virtual item placed by a second virtual object;

the display module 1810 is configured to display a second user interface in response to the first virtual object being located in the peripheral range of the shared virtual item and the second virtual object having a teammate relationship with the first virtual object, where the second user interface includes an operation control of the shared virtual item, and the operation control is used to operate the shared virtual item;

the control module 1820 is configured to, in response to receiving the trigger operation on the operation control, control the first virtual object to use the shared virtual item.

In an optional embodiment, the shared virtual item corresponds to a detection area;

the display module 1810 is configured to detect whether the first virtual object and the second virtual object have a teammate relationship in response to the first virtual object being located in the detection area; in response to the first virtual object having a teammate relationship with the second virtual object, a second user interface is displayed.

In an optional embodiment, the shared virtual item corresponds to a collision box, and the collision box corresponds to the detection area; the apparatus includes a processing module 1830;

the processing module 1830 is configured to generate first trigger information in response to interaction of the three-dimensional model of the first virtual object with a trigger of the crash box, where the first trigger information includes entry of the first virtual object into the detection area; determining the identifier of the first virtual object according to the first trigger information; in response to the identification of the first virtual object and the identification of the second virtual object belonging to the same team, determining that the first virtual object and the second virtual object have a teammate relationship.

In an alternative embodiment, the processing module 1830 is configured to switch the shared virtual item to the unoperated state in response to the second virtual object not being within the detection area.

In an alternative embodiment, the processing module 1830 is configured to generate second trigger information in response to the three-dimensional model of the second virtual object interacting with the trigger of the collision box, where the second trigger information includes that the second virtual object exits the detection area; and determine that the shared virtual prop is in the unoperated state according to the second trigger information.

In an optional embodiment, the display module 1810 is configured to display a third user interface, where the third user interface includes a second virtual environment picture and a placement control of the shared virtual prop, and the second virtual environment picture is observed from the perspective of the second virtual object; obtain, in response to receiving a placement operation on the placement control, a pre-placement position of the shared virtual prop in the virtual environment; and display, in response to the pre-placement position meeting the placement condition, a fourth user interface, where the fourth user interface includes the placed shared virtual prop and a control identifier corresponding to the shared virtual prop.

In an alternative embodiment, the placement condition includes at least one of the following conditions: the terrain in the virtual environment belongs to flat terrain; the area occupied by the shared virtual prop is smaller than the area of the placement position; and no virtual environment element exists in the placement area corresponding to the shared virtual prop.

In an optional embodiment, the display module 1810 is configured to, in response to that the shared virtual item does not meet the placement condition, display a fifth user interface, where the fifth user interface includes a prompt message for prompting that the shared virtual item cannot be placed.

In an alternative embodiment, the processing module 1830 is configured to switch the shared virtual item to the disabled state in response to the placement time of the shared virtual item exceeding a time threshold; or, in response to the shared virtual item receiving the failure operation, switching the shared virtual item to a failure state, wherein the failure operation is generated by the second virtual object; or, in response to the usage value of the shared virtual item decreasing to zero, switching the shared virtual item to a failure state, where the usage value is used to represent the duration of the shared virtual item in the virtual environment.

Referring to FIG. 19, a block diagram of a computer device 1900 according to an exemplary embodiment of the present application is shown. The computer device 1900 may be a portable mobile terminal, such as: a smartphone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), or an MP4 player (Moving Picture Experts Group Audio Layer IV). Computer device 1900 may also be referred to by other names such as user equipment, portable terminal, etc.

Generally, computer device 1900 includes: a processor 1901 and a memory 1902.

The processor 1901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1901 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 1901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.

The memory 1902 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 1902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1902 is used to store at least one instruction for execution by the processor 1901 to implement the control method of a virtual object provided in embodiments of the present application.

In some embodiments, computer device 1900 may also optionally include: a peripheral interface 1903 and at least one peripheral. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1904, a touch screen display 1905, a camera 1906, an audio circuit 1907, a positioning component 1908, and a power supply 1909.

The peripheral interface 1903 may be used to connect at least one peripheral associated with an I/O (Input/Output) to the processor 1901 and the memory 1902. In some embodiments, the processor 1901, memory 1902, and peripherals interface 1903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1901, the memory 1902, and the peripheral interface 1903 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.

The Radio Frequency circuit 1904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1904 communicates with a communication network and other communication devices via electromagnetic signals. The rf circuit 1904 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, etc. The radio frequency circuit 1904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1904 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.

The touch display 1905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. The touch display screen 1905 also has the ability to capture touch signals at or above the surface of the touch display screen 1905. The touch signal may be input to the processor 1901 as a control signal for processing. The touch screen display 1905 is used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the touch display 1905 may be one, providing the front panel of the computer device 1900; in other embodiments, the touch display 1905 can be at least two, each disposed on a different surface of the computer device 1900 or in a folded design; in still other embodiments, touch display 1905 may be a flexible display disposed on a curved surface or on a folded surface of computer device 1900. Even further, the touch display screen 1905 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The touch Display screen 1905 may be made of LCD (Liquid Crystal Display), OLED (organic light-Emitting Diode), or the like.

The camera assembly 1906 is used to capture images or video. Optionally, camera assembly 1906 includes a front camera and a rear camera. Generally, a front camera is used for video calls or self-portraits, and a rear camera is used for shooting pictures or videos. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera and a wide-angle camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions. In some embodiments, the camera assembly 1906 may also include a flash. The flash may be a monochrome-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.

Audio circuitry 1907 is used to provide an audio interface between a user and computer device 1900. The audio circuitry 1907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1901 for processing, or inputting the electric signals into the radio frequency circuit 1904 for realizing voice communication. The microphones may be multiple and placed at different locations on the computer device 1900 for stereo sound capture or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1901 or the radio frequency circuitry 1904 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1907 may also include a headphone jack.

The positioning component 1908 is used to locate the current geographic location of the computer device 1900 for navigation or LBS (Location Based Service). The positioning component 1908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of Europe.

Power supply 1909 is used to provide power to the various components in computer device 1900. The power supply 1909 can be alternating current, direct current, disposable batteries, or rechargeable batteries. When power supply 1909 includes a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast-charge technology.

In some embodiments, computer device 1900 also includes one or more sensors 1910. The one or more sensors 1910 include, but are not limited to: acceleration sensor 1911, gyro sensor 1912, pressure sensor 1913, fingerprint sensor 1914, optical sensor 1915, and proximity sensor 1916.

The acceleration sensor 1911 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the computer apparatus 1900. For example, the acceleration sensor 1911 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1901 may control the touch screen 1905 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1911. The acceleration sensor 1911 may also be used for acquisition of motion data of a game or a user.

The gyro sensor 1912 may detect a body direction and a rotation angle of the computer device 1900, and the gyro sensor 1912 may cooperate with the acceleration sensor 1911 to acquire a 3D motion of the user with respect to the computer device 1900. From the data collected by the gyro sensor 1912, the processor 1901 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.

Pressure sensors 1913 may be disposed on a side bezel of computer device 1900 and/or on a lower layer of touch display 1905. When the pressure sensor 1913 is provided on the side frame of the computer apparatus 1900, a user's grip signal to the computer apparatus 1900 can be detected, and left-right hand recognition or shortcut operation can be performed based on the grip signal. When the pressure sensor 1913 is disposed at the lower layer of the touch display 1905, it is possible to control the operability control on the UI interface according to the pressure operation of the user on the touch display 1905. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.

The fingerprint sensor 1914 is configured to collect a fingerprint of the user to identify the user based on the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 1901 authorizes the user to perform relevant sensitive operations including unlocking a screen, viewing encrypted information, downloading software, paying for, and changing settings, etc. Fingerprint sensor 1914 may be disposed on the front, back, or side of computer device 1900. When a physical button or vendor Logo is provided on computer device 1900, fingerprint sensor 1914 may be integrated with the physical button or vendor Logo.

The optical sensor 1915 is used to collect the ambient light intensity. In one embodiment, the processor 1901 may control the display brightness of the touch screen 1905 based on the ambient light intensity collected by the optical sensor 1915. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1905 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1905 is turned down. In another embodiment, the processor 1901 may also dynamically adjust the shooting parameters of the camera assembly 1906 according to the intensity of the ambient light collected by the optical sensor 1915.

Proximity sensor 1916, also known as a distance sensor, is typically disposed on the front side of computer device 1900. Proximity sensor 1916 is used to capture the distance between the user and the front of computer device 1900. In one embodiment, when the proximity sensor 1916 detects that the distance between the user and the front of the computer device 1900 is gradually decreasing, the touch display 1905 is controlled by the processor 1901 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1916 detects that the distance gradually becomes larger, the touch display 1905 is controlled by the processor 1901 to switch from the off-screen state to the bright-screen state.

Those skilled in the art will appreciate that the architecture shown in FIG. 19 is not intended to be limiting of computer device 1900 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.

The embodiments of the present application further provide a computer device, where the computer device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the instruction, the program, the code set, or the set of instructions is loaded and executed by the processor to implement the control method for a virtual object as provided in the above method embodiments.

The present application further provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the control method for a virtual object provided in the foregoing method embodiments.

Embodiments of the present application also provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. A processor of the computer device reads computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the control method of the virtual object provided by the above-mentioned method embodiments.

It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.

It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.

The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
