Method and device for displaying picture in virtual scene, computer equipment and storage medium

Document No.: 520491    Publication date: 2021-06-01

Reading note: This technique, "Method and device for displaying picture in virtual scene, computer equipment and storage medium" (虚拟场景中画面展示方法、装置、计算机设备及存储介质), was designed and created by 汪涛 (Wang Tao) on 2021-01-22. Its main content is as follows: The application relates to a method and device for displaying a picture in a virtual scene, a computer device, and a storage medium, and relates to the technical field of virtual scenes. The method includes: displaying a virtual scene picture, the virtual scene picture including a first virtual vehicle; determining a target virtual vehicle from among at least one second virtual vehicle based on the relative distance between the first virtual vehicle and the second virtual vehicle, the second virtual vehicle being a virtual vehicle located behind the first virtual vehicle; and displaying an auxiliary picture in the virtual scene picture, the auxiliary picture being a picture that is shot, with the target virtual vehicle as the focus, by a virtual camera arranged in correspondence with the first virtual vehicle. With this scheme, the target virtual vehicle corresponding to each moment can be determined flexibly, so that the auxiliary picture displays effective content as far as possible. This improves the efficiency with which the auxiliary picture conveys information useful to the user's operation, and thereby improves the interaction efficiency when the user controls the virtual vehicle.

1. A method for displaying pictures in a virtual scene is characterized by comprising the following steps:

displaying a virtual scene picture, wherein the virtual scene picture comprises a first virtual vehicle;

determining a target virtual vehicle from the second virtual vehicles based on a relative distance between the first virtual vehicle and at least one second virtual vehicle; the second virtual vehicle is a virtual vehicle located behind the first virtual vehicle;

displaying an auxiliary picture in the virtual scene picture; the auxiliary picture is a picture that is shot, with the target virtual vehicle as the focus, by a virtual camera arranged in correspondence with the first virtual vehicle.

2. The method of claim 1, wherein determining a target virtual vehicle from the second virtual vehicles based on a relative distance between the first virtual vehicle and at least one second virtual vehicle comprises:

acquiring candidate virtual vehicles; a candidate virtual vehicle is a second virtual vehicle whose relative distance to the first virtual vehicle is less than or equal to a first distance;

and determining the target virtual vehicle from the candidate virtual vehicles.

3. The method of claim 2, wherein the determining the target virtual vehicle from the candidate virtual vehicles comprises:

determining the candidate virtual vehicle with the smallest relative distance to the first virtual vehicle as the target virtual vehicle.

4. The method of claim 1, wherein determining a target virtual vehicle from the second virtual vehicles based on a relative distance between the first virtual vehicle and at least one second virtual vehicle comprises:

acquiring the second virtual vehicle with the smallest relative distance to the first virtual vehicle;

determining the second virtual vehicle as the target virtual vehicle in response to the relative distance between the second virtual vehicle and the first virtual vehicle being less than or equal to a first distance.

5. The method of claim 1, wherein prior to determining a target virtual vehicle from the second virtual vehicles based on the relative distance between the first virtual vehicle and at least one second virtual vehicle, further comprising:

obtaining the relative distance between the first virtual vehicle and the second virtual vehicle.

6. The method of claim 5, wherein the obtaining the relative distance between the first virtual vehicle and the second virtual vehicle comprises:

determining the length of a connecting line between the tail of the first virtual vehicle and the central point of the second virtual vehicle as the relative distance.

7. The method of claim 1, further comprising:

in response to the auxiliary picture being displayed in the virtual scene picture, starting a picture display timer; the picture display timer is used for recording the continuous display duration of the auxiliary picture in the virtual scene picture;

ending the display of the auxiliary picture in response to the duration recorded by the picture display timer reaching a first duration.

8. The method according to claim 7, wherein before the ending of the display of the auxiliary picture in response to the duration recorded by the picture display timer reaching the first duration, the method further comprises:

resetting the picture display timer in response to the target virtual vehicle switching from a first target virtual vehicle to a second target virtual vehicle during display of the auxiliary picture; the first target virtual vehicle and the second target virtual vehicle are any two of the at least one second virtual vehicle.

9. The method of claim 1, wherein the virtual camera is located diagonally above the first virtual vehicle and the virtual camera moves with the first virtual vehicle;

before the displaying of the auxiliary picture in the virtual scene picture, the method further comprises:

acquiring a first obtuse angle between a target lens direction and a vehicle tail reference line; the target lens direction is the direction pointing from the virtual camera to a center point of the target virtual vehicle; the vehicle tail reference line is the straight line on which the tail of the first virtual vehicle lies, parallel to the horizontal plane and perpendicular to the connecting line between the head and the tail of the first virtual vehicle;

determining a first lens direction of the virtual camera based on the position of the target virtual vehicle at the current moment in response to the first obtuse angle being less than or equal to a first angle; the first lens direction is the target lens direction.

10. The method of claim 9, further comprising:

determining a second lens direction of the virtual camera in response to the first obtuse angle being greater than the first angle; the second lens direction points between the target lens direction and the vehicle-rear pointing direction, and a second obtuse angle between the second lens direction and the vehicle tail reference line is the first angle; the vehicle-rear pointing direction is the direction pointing from the head of the first virtual vehicle to its tail.

11. The method of claim 10, further comprising:

ending the display of the auxiliary picture in response to the duration for which the first obtuse angle remains greater than the first angle reaching a second duration.

12. The method according to claim 1, wherein said displaying an auxiliary picture in said virtual scene picture comprises:

maintaining a lens direction of the virtual camera in response to the relative distance between the target virtual vehicle and the first virtual vehicle being less than or equal to a second distance;

and displaying, on the virtual scene picture, the auxiliary picture shot by the virtual camera in the lens direction.

13. The method according to any one of claims 1 to 10, wherein said displaying an auxiliary picture in said virtual scene picture comprises:

and in response to the display time of the virtual scene picture being greater than a third duration, displaying the auxiliary picture in the virtual scene picture.

14. A method for displaying pictures in a virtual scene is characterized by comprising the following steps:

displaying a virtual scene picture, wherein the virtual scene picture comprises a first virtual vehicle;

displaying a first auxiliary picture in the virtual scene picture; the first auxiliary picture is a picture that is shot, with a first target virtual vehicle as the focus, by a virtual camera arranged in correspondence with the first virtual vehicle; the first target virtual vehicle is the virtual vehicle with the smallest relative distance to the first virtual vehicle, the relative distance being less than or equal to a first distance;

and in response to the virtual vehicle that has the smallest relative distance to the first virtual vehicle, with that relative distance less than or equal to the first distance, changing to a second target virtual vehicle, displaying a second auxiliary picture in the virtual scene picture; the second auxiliary picture is a picture that is shot, with the second target virtual vehicle as the focus, by the virtual camera arranged in correspondence with the first virtual vehicle.

15. An apparatus for displaying pictures in a virtual scene, the apparatus comprising:

a main picture display module, used for displaying a virtual scene picture, the virtual scene picture including a first virtual vehicle;

a target determination module to determine a target virtual vehicle from among the at least one second virtual vehicle based on a relative distance between the first virtual vehicle and the second virtual vehicle; the second virtual vehicle is a virtual vehicle located behind the first virtual vehicle;

an auxiliary picture display module, used for displaying an auxiliary picture in the virtual scene picture; the auxiliary picture is a picture that is shot, with the target virtual vehicle as the focus, by a virtual camera arranged in correspondence with the first virtual vehicle.

16. A computer device comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the method for displaying pictures in a virtual scene according to any one of claims 1 to 14.

17. A computer-readable storage medium, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the method for displaying pictures in a virtual scene according to any one of claims 1 to 14.

Technical Field

The present application relates to the field of virtual scene technologies, and in particular, to a method and an apparatus for displaying a picture in a virtual scene, a computer device, and a storage medium.

Background

Currently, in game applications in which the user manipulates a virtual vehicle, such as racing games, the game interface may simulate for the user the function of the rear-view mirror of a real-world vehicle.

In the related art, a rear-view mirror function control is superimposed on the virtual scene picture; upon receiving the user's trigger operation on this control, the virtual scene picture displayed on the terminal's screen is switched directly to the viewing angle behind the master virtual vehicle.

However, in the related art, triggering the control directly displays the rear-view virtual scene picture in full screen, so the user temporarily cannot observe the picture in front of the virtual vehicle, which harms interaction efficiency when the user controls the virtual vehicle.

Disclosure of Invention

The embodiment of the application provides a method and a device for displaying pictures in a virtual scene, computer equipment and a storage medium, and the technical scheme is as follows:

in one aspect, a method for displaying pictures in a virtual scene is provided, the method comprising:

displaying a virtual scene picture, wherein the virtual scene picture comprises a first virtual vehicle;

determining a target virtual vehicle from the second virtual vehicles based on a relative distance between the first virtual vehicle and at least one second virtual vehicle; the second virtual vehicle is a virtual vehicle located behind the first virtual vehicle;

displaying an auxiliary picture in the virtual scene picture; the auxiliary picture is a picture that is shot, with the target virtual vehicle as the focus, by a virtual camera arranged in correspondence with the first virtual vehicle.

In one aspect, a method for displaying pictures in a virtual scene is provided, the method comprising:

displaying a virtual scene picture, wherein the virtual scene picture comprises a first virtual vehicle;

displaying a first auxiliary picture in the virtual scene picture; the first auxiliary picture is a picture that is shot, with a first target virtual vehicle as the focus, by a virtual camera arranged in correspondence with the first virtual vehicle; the first target virtual vehicle is the virtual vehicle with the smallest relative distance to the first virtual vehicle, the relative distance being less than or equal to a first distance;

and in response to the virtual vehicle that has the smallest relative distance to the first virtual vehicle, with that relative distance less than or equal to the first distance, changing to a second target virtual vehicle, displaying a second auxiliary picture in the virtual scene picture; the second auxiliary picture is a picture that is shot, with the second target virtual vehicle as the focus, by the virtual camera arranged in correspondence with the first virtual vehicle.

In one aspect, an apparatus for displaying a picture in a virtual scene is provided, the apparatus comprising:

a main picture display module, used for displaying a virtual scene picture, the virtual scene picture including a first virtual vehicle;

a target determination module to determine a target virtual vehicle from among the at least one second virtual vehicle based on a relative distance between the first virtual vehicle and the second virtual vehicle; the second virtual vehicle is a virtual vehicle located behind the first virtual vehicle;

an auxiliary picture display module, used for displaying an auxiliary picture in the virtual scene picture; the auxiliary picture is a picture that is shot, with the target virtual vehicle as the focus, by a virtual camera arranged in correspondence with the first virtual vehicle.

In one possible implementation, the target determination module includes:

a candidate acquisition submodule, used for acquiring candidate virtual vehicles; a candidate virtual vehicle is a second virtual vehicle whose relative distance to the first virtual vehicle is less than or equal to a first distance;

and a first target determination submodule, used for determining the target virtual vehicle from the candidate virtual vehicles.

In one possible implementation, the first target determination submodule includes:

a target determination unit, used for determining the candidate virtual vehicle with the smallest relative distance to the first virtual vehicle as the target virtual vehicle.

In one possible implementation, the target determination module includes:

a first obtaining sub-module configured to obtain the second virtual vehicle whose relative distance to the first virtual vehicle is smallest;

a second target determination submodule to determine the second virtual vehicle as the target virtual vehicle in response to the relative distance between the second virtual vehicle and the first virtual vehicle being less than or equal to a first distance.

In one possible implementation, the apparatus further includes:

a distance acquisition module to acquire a relative distance between the first virtual vehicle and at least one second virtual vehicle before determining a target virtual vehicle from the second virtual vehicles based on the relative distance between the first virtual vehicle and the second virtual vehicle.

In a possible implementation manner, the distance obtaining module includes:

and the distance acquisition submodule is used for determining the length of a connecting line between the tail of the first virtual vehicle and the central point of the second virtual vehicle as the relative distance.

In one possible implementation, the apparatus further includes:

a timing module, used for starting a picture display timer in response to the auxiliary picture being displayed in the virtual scene picture; the picture display timer is used for recording the continuous display duration of the auxiliary picture in the virtual scene picture;

and a first picture ending module, used for ending the display of the auxiliary picture in response to the duration recorded by the picture display timer reaching a first duration.

In one possible implementation, the apparatus further includes:

a timing resetting module, used for resetting the picture display timer, before the display of the auxiliary picture is ended in response to the duration recorded by the picture display timer reaching the first duration, in response to the target virtual vehicle switching from a first target virtual vehicle to a second target virtual vehicle during display of the auxiliary picture; the first target virtual vehicle and the second target virtual vehicle are any two of the at least one second virtual vehicle.
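As a rough illustration only (this is a hypothetical Python sketch, not code from the disclosure; the class and method names are illustrative), the interplay of the timing module, the timing resetting module, and the first picture ending module could look like this:

    class PictureDisplayTimer:
        """Records how long the auxiliary picture has been continuously shown."""

        def __init__(self, first_duration):
            self.first_duration = first_duration  # the "first duration" limit, in seconds
            self.elapsed = 0.0
            self.running = False

        def start(self):
            # Timing module: started when the auxiliary picture is displayed.
            self.elapsed = 0.0
            self.running = True

        def reset(self):
            # Timing resetting module: called when the target virtual vehicle
            # switches from a first target to a second target during display.
            self.elapsed = 0.0

        def tick(self, dt):
            # First picture ending module: returns True once the recorded
            # duration reaches the first duration, so the display should end.
            if not self.running:
                return False
            self.elapsed += dt
            if self.elapsed >= self.first_duration:
                self.running = False
                return True
            return False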

In one possible implementation, the virtual camera is located diagonally above the first virtual vehicle, and the virtual camera moves with the first virtual vehicle;

the device further comprises:

an obtuse angle acquisition module, used for acquiring a first obtuse angle between a target lens direction and a vehicle tail reference line before the auxiliary picture is displayed in the virtual scene picture; the target lens direction is the direction pointing from the virtual camera to a center point of the target virtual vehicle; the vehicle tail reference line is the straight line on which the tail of the first virtual vehicle lies, parallel to the horizontal plane and perpendicular to the connecting line between the head and the tail of the first virtual vehicle;

a first direction determination module, configured to determine a first lens direction of the virtual camera based on a position of the target virtual vehicle at a current time in response to the first obtuse angle being less than or equal to a first angle; the first lens direction is the target lens direction.

In one possible implementation, the apparatus further includes:

a second direction determination module, used for determining a second lens direction of the virtual camera in response to the first obtuse angle being greater than the first angle; the second lens direction points between the target lens direction and the vehicle-rear pointing direction, and a second obtuse angle between the second lens direction and the vehicle tail reference line is the first angle; the vehicle-rear pointing direction is the direction pointing from the head of the first virtual vehicle to its tail.

In one possible implementation, the apparatus further includes:

and a second picture ending module, used for ending the display of the auxiliary picture in response to the duration for which the first obtuse angle between the target lens direction and the vehicle tail reference line remains greater than the first angle reaching a second duration.

In one possible implementation manner, the auxiliary screen display module includes:

a direction determination submodule for maintaining a lens direction of the virtual camera in response to the relative distance between the target virtual vehicle and the first virtual vehicle being less than or equal to a second distance;

and the picture shooting submodule is used for displaying the auxiliary picture shot by the virtual camera in the lens direction on the virtual scene picture.

In one possible implementation manner, the auxiliary screen display module includes:

an auxiliary picture display submodule, used for displaying the auxiliary picture in the virtual scene picture in response to the display time of the virtual scene picture being greater than a third duration.

In one aspect, an apparatus for displaying a picture in a virtual scene is provided, the apparatus comprising:

a main picture display module, used for displaying a virtual scene picture, the virtual scene picture including a first virtual vehicle;

a first auxiliary picture display module, used for displaying a first auxiliary picture in the virtual scene picture; the first auxiliary picture is a picture that is shot, with a first target virtual vehicle as the focus, by a virtual camera arranged in correspondence with the first virtual vehicle; the first target virtual vehicle is the virtual vehicle with the smallest relative distance to the first virtual vehicle, the relative distance being less than or equal to a first distance;

and a second auxiliary picture display module, used for displaying a second auxiliary picture in the virtual scene picture in response to the virtual vehicle that has the smallest relative distance to the first virtual vehicle, with that relative distance less than or equal to the first distance, changing to a second target virtual vehicle; the second auxiliary picture is a picture that is shot, with the second target virtual vehicle as the focus, by the virtual camera arranged in correspondence with the first virtual vehicle.

In another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the method for displaying pictures in a virtual scene.

In still another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the method for displaying pictures in a virtual scene.

According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer-readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium and executes them, so that the terminal executes the method for displaying a picture in a virtual scene provided in the various optional implementations of the above aspects.

The technical solutions provided in the embodiments of the application have at least the following beneficial effects:

A target virtual vehicle is determined by detecting, in real time, the relative distance between the first virtual vehicle and each second virtual vehicle; the virtual camera shoots the virtual scene with the target virtual vehicle as the focus, and the shot auxiliary picture is displayed. Because the relative distance between a second virtual vehicle and the first virtual vehicle may change frequently, this scheme flexibly determines the target virtual vehicle corresponding to each moment and displays an auxiliary picture focused on it, so that the auxiliary picture displays effective content as far as possible. This improves the efficiency with which the auxiliary picture conveys information useful to the user's operation; the user can observe effective picture content behind the vehicle while still observing the picture in front of the virtual vehicle normally, which improves the interaction efficiency when the user controls the virtual vehicle.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.

FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;

FIG. 2 is a schematic illustration of a display interface of a virtual scene provided by an exemplary embodiment of the present application;

FIG. 3 is a flowchart of a method for displaying a picture in a virtual scene according to an exemplary embodiment of the present application;

FIG. 4 is a flowchart of a method for displaying a picture in a virtual scene according to an exemplary embodiment of the present application;

FIG. 5 is a flowchart of a method for displaying a picture in a virtual scene according to an exemplary embodiment of the present application;

fig. 6 is a schematic diagram of a setting position of a virtual camera for taking an auxiliary picture according to the embodiment shown in fig. 5;

FIG. 7 is a schematic diagram illustrating an obtuse angle determination process between the target lens direction and the vehicle tail reference line according to the embodiment shown in FIG. 5;

FIG. 8 is a schematic diagram of a lens orientation determination process according to the embodiment shown in FIG. 5;

FIG. 9 is a schematic diagram illustrating focus switching corresponding to an auxiliary screen according to the embodiment shown in FIG. 5;

FIG. 10 is a schematic diagram of an auxiliary image when a first obtuse angle between the target lens direction and the vehicle tail reference line is greater than a first angle according to the embodiment shown in FIG. 5;

FIG. 11 is a logic flow diagram of a method for presenting frames in a virtual scene in accordance with an exemplary embodiment of the present application;

fig. 12 is a block diagram illustrating a structure of a picture displaying apparatus in a virtual scene according to an exemplary embodiment of the present application;

fig. 13 is a block diagram illustrating a structure of a picture displaying apparatus in a virtual scene according to an exemplary embodiment of the present application;

fig. 14 is a block diagram of a computer device according to an exemplary embodiment of the present application.

Detailed Description

Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.

It is to be understood that reference herein to "a number" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.

For ease of understanding, several terms referred to in this application are explained below.

1) Virtual scene

A virtual scene is a scene that an application program displays (or provides) when running on a terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments are illustrated with the virtual scene being a three-dimensional virtual scene, but are not limited thereto. Optionally, the virtual scene may also be used for a virtual scene battle between at least two virtual characters. Optionally, the virtual scene may also be used for a virtual firearm fight between at least two virtual characters. Optionally, the virtual scene may also be used for a fight between at least two virtual characters using virtual firearms within a target area of the virtual scene that may continually shrink over time.

A virtual scene is typically generated by an application program in a computer device such as a terminal and rendered based on hardware (e.g., a screen) of the terminal. The terminal may be a mobile terminal such as a smartphone, a tablet computer, or an e-book reader; alternatively, the terminal may be a personal computer device such as a notebook computer or a desktop computer.

2) Virtual object

A virtual object refers to a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, and a virtual vehicle. Optionally, when the virtual scene is a three-dimensional virtual scene, the virtual object is a three-dimensional model created based on skeletal animation technology. Each virtual object has its own shape, volume, and orientation in the three-dimensional virtual scene and occupies a portion of the space in the three-dimensional virtual scene.

3) Virtual vehicle

A virtual vehicle is a vehicle that a virtual object can drive in the virtual environment according to the user's control of operation controls. The functions that the virtual vehicle can realize may include acceleration, deceleration, braking, reversing, steering, drifting, prop use, and the like. These functions may be realized automatically, for example, the virtual vehicle may accelerate or steer automatically; they may also be realized according to the user's control triggers on operation controls, for example, when the user triggers the brake control, the virtual vehicle executes a braking action.

4) Racing car game

A racing game is mainly carried out in a virtual competition scene, in which a plurality of virtual vehicles race toward a specified competition goal. In the virtual competition scene, the user may control the virtual vehicle corresponding to the terminal to race against virtual vehicles controlled by other users; the user may also control the virtual vehicle corresponding to the terminal to race against virtual vehicles controlled by the AI generated by the client program of the racing game.

FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application. The implementation environment may include: a first terminal 110, a server 120, and a second terminal 130.

The first terminal 110 has an application 111 supporting a virtual environment installed and running, and the application 111 may be a multiplayer online battle program or an offline application. When the first terminal runs the application 111, a user interface of the application 111 is displayed on the screen of the first terminal 110. The application 111 may be an RCG (racing game), a sandbox game containing a racing function, or another type of game containing a racing function; in the present embodiment, the application 111 is exemplified as an RCG. The first terminal 110 is a terminal used by the first user 112. The first user 112 uses the first terminal 110 to control a first virtual vehicle located in the virtual environment, and the first virtual vehicle may be referred to as the master virtual object of the first user 112. The activities of the first virtual vehicle include, but are not limited to, at least one of acceleration, deceleration, braking, reversing, steering, drifting, and using props. Illustratively, the first virtual vehicle may be a virtual car, or a virtual model with virtual vehicle functions modeled on another vehicle (e.g., a ship or an airplane); the first virtual vehicle may also be a virtual vehicle modeled on a real vehicle model that exists in reality.

The second terminal 130 has an application 131 supporting a virtual environment installed and running, and the application 131 may be a multiplayer online battle program. When the second terminal 130 runs the application 131, a user interface of the application 131 is displayed on the screen of the second terminal 130. The client may be any one of an RCG program, a sandbox game, and other game programs containing a racing function; in the present embodiment, the application 131 is exemplified as an RCG.

Alternatively, the second terminal 130 is a terminal used by the second user 132, and the second user 132 uses the second terminal 130 to control a second virtual vehicle located in the virtual environment to implement the running operation, and the second virtual vehicle may be referred to as a master virtual vehicle of the second user 132.

Optionally, a third virtual vehicle may also exist in the virtual environment, the third virtual vehicle being controlled by the AI corresponding to the application 131, and the third virtual vehicle may be referred to as an AI control virtual vehicle.

Optionally, the first virtual vehicle, the second virtual vehicle, and the third virtual vehicle are in the same virtual world. Optionally, the first virtual vehicle and the second virtual vehicle may belong to the same camp, the same team, the same organization, have a friend relationship, or have a temporary communication right. Alternatively, the first virtual vehicle and the second virtual vehicle may belong to different camps, different teams, different organizations, or have a hostile relationship.

Optionally, the applications installed on the first terminal 110 and the second terminal 130 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms (Android or iOS). The first terminal 110 may generally refer to one of a plurality of terminals, and the second terminal 130 may generally refer to another of the plurality of terminals; this embodiment is illustrated with only the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.

Only two terminals are shown in FIG. 1, but a plurality of other terminals may access the server 120 in different embodiments. Optionally, one or more terminals are terminals corresponding to the developer; a development and editing platform for the application program in the virtual environment is installed on such a terminal. The developer can edit and update the application program on the terminal and transmit the updated application installation package to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the application installation package from the server 120 to update the application program.

The first terminal 110, the second terminal 130, and other terminals are connected to the server 120 through a wireless network or a wired network.

The server 120 includes at least one of a server, a server cluster composed of a plurality of servers, a cloud computing platform, and a virtualization center. The server 120 is used to provide background services for applications that support a three-dimensional virtual environment. Optionally, the server 120 undertakes primary computational work and the terminals undertake secondary computational work; alternatively, the server 120 undertakes the secondary computing work and the terminal undertakes the primary computing work; alternatively, the server 120 and the terminal perform cooperative computing by using a distributed computing architecture.

In one illustrative example, the server 120 includes a memory 121, a processor 122, a user account database 123, a combat service module 124, and a user-oriented Input/Output Interface (I/O Interface) 125. The processor 122 is configured to load instructions stored in the server 120 and to process data in the user account database 123 and the combat service module 124; the user account database 123 is configured to store data of the user accounts used by the first terminal 110, the second terminal 130, and other terminals, such as the avatar of the user account, the nickname of the user account, the fighting capacity index of the user account, and the service area where the user account is located; the combat service module 124 is used to provide a plurality of combat rooms for users to fight in, such as 1V1, 3V3, and 5V5 battles; the user-facing I/O interface 125 is used to establish communication with the first terminal 110 and/or the second terminal 130 through a wireless network or a wired network to exchange data.

The virtual scene may be a three-dimensional virtual scene, or the virtual scene may also be a two-dimensional virtual scene. Taking the virtual scene being a three-dimensional virtual scene as an example, please refer to FIG. 2, which shows a schematic view of a display interface of a virtual scene according to an exemplary embodiment of the present application. As shown in FIG. 2, the display interface of the virtual scene includes a scene picture 200, and the scene picture 200 includes the currently controlled virtual vehicle 210, an environment picture 220 of the three-dimensional virtual scene, and a virtual vehicle 240. The virtual vehicle 240 may be a virtual object controlled by a user of another terminal or by an application program.

In FIG. 2, the currently controlled virtual vehicle 210 and the virtual vehicle 240 are three-dimensional models in the three-dimensional virtual scene. The environment picture of the three-dimensional virtual scene displayed in the scene picture 200 comprises the objects observed from the third-person perspective corresponding to the currently controlled virtual vehicle 210, where that perspective is the view captured by a virtual camera arranged behind and above the virtual vehicle 210. Illustratively, as shown in FIG. 2, the environment picture 220 of the three-dimensional virtual scene observed from the third-person perspective corresponding to the currently controlled virtual vehicle 210 includes a road 224, a sky 225, a hill 221, and a factory building 222.

The currently controlled virtual vehicle 210 may perform operations such as steering, acceleration, and drifting under the user's control, and a virtual vehicle in the virtual scene may present different three-dimensional models under the user's control. For example, if the screen of the terminal supports touch operation and the scene picture 200 of the virtual scene includes a virtual control, then when the user touches the virtual control, the currently controlled virtual vehicle 210 may perform a specified operation (for example, a deformation operation) in the virtual scene and present the currently corresponding three-dimensional model.

Fig. 3 shows a flowchart of a method for displaying a picture in a virtual scene according to an exemplary embodiment of the present application. The method may be executed by a computer device, where the computer device may be a terminal or a server, or the computer device may include the terminal and the server. As shown in fig. 3, the method for displaying pictures in a virtual scene includes:

step 301, displaying a virtual scene picture, wherein the virtual scene picture includes a first virtual vehicle.

Step 302, determining a target virtual vehicle from the second virtual vehicles based on the relative distance between the first virtual vehicle and at least one second virtual vehicle; the second virtual vehicle is a virtual vehicle located behind the first virtual vehicle.

Step 303, displaying an auxiliary picture in the virtual scene picture; the auxiliary picture is a picture that is shot, with the target virtual vehicle as the focus, by a virtual camera arranged in correspondence with the first virtual vehicle.

In summary, in the scheme shown in this application, the target virtual vehicle is determined by detecting, in real time, the relative distance between the first virtual vehicle and each second virtual vehicle; the virtual camera shoots the virtual scene with the target virtual vehicle as the focus, and the shot auxiliary picture is displayed. Because the relative distance between a second virtual vehicle and the first virtual vehicle may change frequently, this scheme flexibly determines the target virtual vehicle corresponding to each moment and displays an auxiliary picture focused on it, so that the auxiliary picture displays effective content as far as possible. This improves the efficiency with which the auxiliary picture conveys information useful to the user's operation; the user can observe effective picture content behind the vehicle while still observing the picture in front of the virtual vehicle normally, which improves the interaction efficiency when the user controls the virtual vehicle.

Fig. 4 shows a flowchart of a method for displaying a picture in a virtual scene according to an exemplary embodiment of the present application. The method may be executed by a computer device, where the computer device may be a terminal or a server, or the computer device may include the terminal and the server. As shown in fig. 4, the method for displaying pictures in a virtual scene includes:

step 401, displaying a virtual scene picture, where the virtual scene picture includes a first virtual vehicle.

Step 402, displaying a first auxiliary picture in the virtual scene picture; the first auxiliary picture is a picture that is shot, with a first target virtual vehicle as the focus, by a virtual camera arranged in correspondence with the first virtual vehicle; the first target virtual vehicle is the virtual vehicle with the smallest relative distance to the first virtual vehicle, the relative distance being less than or equal to the first distance.

Step 403, in response to the virtual vehicle that has the smallest relative distance to the first virtual vehicle, with that relative distance less than or equal to the first distance, changing to a second target virtual vehicle, displaying a second auxiliary picture in the virtual scene picture; the second auxiliary picture is a picture that is shot, with the second target virtual vehicle as the focus, by the virtual camera arranged in correspondence with the first virtual vehicle.
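For concreteness, a minimal sketch of steps 402 and 403 follows (hypothetical Python; the camera object, its focus_on method, and the vehicle objects are illustrative assumptions, not part of the disclosure). It tracks the qualifying nearest vehicle across frames and refocuses the virtual camera whenever it changes:

    class AuxiliaryPicture:
        """Keeps the auxiliary picture focused on the qualifying nearest vehicle."""

        def __init__(self, camera):
            self.camera = camera
            self.target = None  # vehicle currently used as the focus, or None

        def update(self, nearest_qualifying):
            # nearest_qualifying: the second virtual vehicle with the smallest
            # relative distance to the first virtual vehicle, provided that
            # distance is <= the first distance; None when no vehicle qualifies.
            if nearest_qualifying is not self.target:
                self.target = nearest_qualifying
                if self.target is not None:
                    # Switch from the first to the second auxiliary picture.
                    self.camera.focus_on(self.target)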

In summary, in the scheme shown in this application, the target virtual vehicle is determined by detecting, in real time, the relative distance between the first virtual vehicle and each second virtual vehicle; the virtual camera shoots the virtual scene with the target virtual vehicle as the focus, and the shot auxiliary picture is displayed. Because the relative distance between a second virtual vehicle and the first virtual vehicle may change frequently, this scheme flexibly determines the target virtual vehicle corresponding to each moment and displays an auxiliary picture focused on it, so that the auxiliary picture displays effective content as far as possible. This improves the efficiency with which the auxiliary picture conveys information useful to the user's operation; the user can observe effective picture content behind the vehicle while still observing the picture in front of the virtual vehicle normally, which improves the interaction efficiency when the user controls the virtual vehicle.

Fig. 5 is a flowchart illustrating a method for displaying a picture in a virtual scene according to an exemplary embodiment of the present application. The method may be executed by a computer device, where the computer device may be a terminal or a server, or the computer device may include the terminal and the server. As shown in fig. 5, taking the computer device as a terminal as an example, the terminal may present the auxiliary screen on the virtual scene screen by performing the following steps.

Step 501, displaying a virtual scene picture.

In the embodiment of the application, the terminal displays a virtual scene picture containing a first virtual vehicle.

The virtual scene picture may be a picture of a virtual scene in which the first virtual vehicle races against other virtual vehicles. The first virtual vehicle is the virtual vehicle controlled by this terminal, and the other virtual vehicles may be virtual vehicles controlled by other terminals or by AI.

In one possible implementation, the virtual scene picture is a virtual scene picture observed from the third-person perspective of the first virtual vehicle. The third-person perspective of the first virtual vehicle is the perspective corresponding to a main-picture virtual camera arranged behind and above the first virtual vehicle; the virtual scene picture observed from this perspective is the virtual scene picture captured by that main-picture virtual camera.

Alternatively, the virtual scene picture is a virtual scene picture observed from the first-person perspective of the first virtual vehicle. The first-person perspective of the first virtual vehicle is the perspective corresponding to a main-picture virtual camera arranged at the driver position of the first virtual vehicle; the virtual scene picture observed from this perspective is the virtual scene picture captured by that main-picture virtual camera.

In one possible implementation, the virtual scene picture fills the display area of the terminal. The virtual scene picture is the main display picture for controlling the first virtual vehicle in a racing game in the virtual scene; it displays the path ahead of the first virtual vehicle during the race, and the user controls the first virtual vehicle by observing this path picture.

Controls or display information may be superimposed on the virtual scene picture.

For example, the controls may include a direction control for receiving a trigger operation to control the moving direction of the first virtual vehicle, a brake control for receiving a trigger operation to control the first virtual vehicle to brake, an acceleration control for controlling the first virtual vehicle to accelerate, and the like. The display information may include account ID identifiers corresponding to the first virtual vehicle and the other virtual vehicles, ranking information indicating the order of the virtual vehicles' positions at the current time, a map indicating the complete virtual scene, map information of each virtual vehicle's position on the map, and the like.

In one possible implementation, a perspective switching control is superimposed on the virtual scene picture; in response to the user's specified operation on the perspective switching control, the virtual scene picture can be switched between the first-person perspective and the third-person perspective of the first virtual vehicle.

For example, when the virtual scene picture displayed by the terminal corresponds to the first-person perspective of the first virtual vehicle, the terminal, in response to the user's specified operation on the perspective switching control, switches it to the virtual scene picture corresponding to the third-person perspective; when the displayed virtual scene picture corresponds to the third-person perspective of the first virtual vehicle, the terminal, in response to the user's specified operation on the perspective switching control, switches it to the virtual scene picture corresponding to the first-person perspective.
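A minimal sketch of this toggle (hypothetical Python; the state names are illustrative assumptions, not from the disclosure):

    class MainPictureView:
        """Switches the main picture between the two perspectives of the
        first virtual vehicle when the switching control is triggered."""

        def __init__(self):
            self.perspective = "third_person"  # assumed default perspective

        def on_switch_control_triggered(self):
            self.perspective = ("first_person"
                                if self.perspective == "third_person"
                                else "third_person")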

In one possible implementation, the virtual vehicles corresponding to the same user account may be a plurality of virtual vehicles of different types. In response to receiving virtual vehicle information issued by the server, the terminal displays, on a vehicle selection interface, the different types of virtual vehicles corresponding to that information. In response to receiving the user's selection operation on the vehicle selection interface, the terminal determines the virtual vehicle corresponding to the selection operation and determines it as the first virtual vehicle.

Step 502, a relative distance between the first virtual vehicle and the second virtual vehicle is obtained.

In the embodiment of the present application, the terminal acquires the relative distance between the first virtual vehicle and each of the second virtual vehicles, which are virtual vehicles located behind the first virtual vehicle.

In one possible implementation, the vehicle tail reference line of the first virtual vehicle is acquired, the area that does not extend beyond the vehicle tail reference line is determined as being behind the first virtual vehicle, and each virtual vehicle located behind the first virtual vehicle is determined as a second virtual vehicle.

The vehicle tail reference line of the first virtual vehicle is a straight line where the vehicle tail of the first virtual vehicle is located, is parallel to a horizontal plane in the virtual scene, and is perpendicular to a connecting line of the vehicle head and the vehicle tail of the first virtual vehicle.

In one possible implementation, a length of a connection line between a rear of the first virtual vehicle and a center point of the second virtual vehicle is determined as the relative distance.

The center point of the second virtual vehicle may be the center of gravity of the virtual vehicle, and the calculated relative distance is a distance in the virtual scene.
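The behind-judgment and the relative distance can be sketched with simple 2D geometry on the horizontal plane. The following is a hypothetical Python sketch under the assumption that positions are given as (x, y) coordinates in scene units; none of these names come from the disclosure:

    import math

    def is_behind(first_tail, first_forward, vehicle_center):
        """A vehicle is behind the first virtual vehicle when its center point
        does not cross the vehicle tail reference line, i.e. it lies on the
        rear side of the line through the tail that is perpendicular to the
        head-tail connecting line."""
        dx = vehicle_center[0] - first_tail[0]
        dy = vehicle_center[1] - first_tail[1]
        # A non-positive projection onto the forward axis means the center
        # point is on the rear side of the reference line.
        return dx * first_forward[0] + dy * first_forward[1] <= 0.0

    def relative_distance(first_tail, second_center):
        """Length of the connecting line between the tail of the first
        virtual vehicle and the center point of the second virtual vehicle."""
        return math.dist(first_tail, second_center)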

Step 503, determining a target virtual vehicle from the second virtual vehicles based on the relative distance between the first virtual vehicle and the at least one second virtual vehicle.

In the embodiment of the application, based on the determined relative distance between each second virtual vehicle and the first virtual vehicle, the terminal judges whether the relative distance satisfies a specified condition. When a relative distance satisfies the specified condition, the second virtual vehicle corresponding to that relative distance is determined as the target virtual vehicle; if no relative distance satisfies the condition, no target virtual vehicle exists at the current moment.

In one possible implementation, at any one moment there is at most one target virtual vehicle; there may also be none.

In one possible implementation, the second virtual vehicle that both has a relative distance to the first virtual vehicle less than or equal to the first distance and has the smallest relative distance is determined as the target virtual vehicle. This can be done in either of the following two ways.

1) First, candidate virtual vehicles are acquired, and the target virtual vehicle is determined from the candidate virtual vehicles: the candidate virtual vehicle with the smallest relative distance to the first virtual vehicle is determined as the target virtual vehicle.

A candidate virtual vehicle is a second virtual vehicle whose relative distance to the first virtual vehicle is less than or equal to the first distance; the candidate virtual vehicles are the subset of the second virtual vehicles that satisfy this condition.

For example, suppose the first distance is 100 meters. If the second virtual vehicles behind the first virtual vehicle whose relative distance to it is less than or equal to 100 meters are virtual vehicle A, virtual vehicle B, and virtual vehicle C, these three are acquired as the candidate virtual vehicles. Comparing their relative distances to the first virtual vehicle (60 meters for vehicle A, 30 meters for vehicle B, and 100 meters for vehicle C), vehicle B, which has the smallest relative distance, is determined as the target virtual vehicle.

2) First, the second virtual vehicle with the smallest relative distance to the first virtual vehicle is acquired; then, in response to the relative distance between that second virtual vehicle and the first virtual vehicle being less than or equal to the first distance, that second virtual vehicle is determined as the target virtual vehicle.

For example, suppose virtual vehicle A, virtual vehicle B, and virtual vehicle C are behind the first virtual vehicle in the virtual scene, with relative distances to the first virtual vehicle of 105 meters, 110 meters, and 120 meters respectively. Comparing the three relative distances, the virtual vehicle with the smallest relative distance is vehicle A, so vehicle A is then checked against the first distance. If the first distance is 100 meters, vehicle A does not qualify, and there is no target virtual vehicle at the current moment. If the first distance is 105 meters, vehicle A satisfies the less-than-or-equal condition, and the target virtual vehicle at the current moment is determined to be vehicle A.
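Both approaches can be sketched as follows (hypothetical Python, reusing the relative_distance helper from the earlier sketch; the .center attribute on the vehicle objects is an illustrative assumption):

    def target_by_candidates(first_tail, second_vehicles, first_distance):
        """Approach 1): collect the candidate virtual vehicles within the
        first distance, then take the one with the smallest relative distance."""
        candidates = [v for v in second_vehicles
                      if relative_distance(first_tail, v.center) <= first_distance]
        return min(candidates,
                   key=lambda v: relative_distance(first_tail, v.center),
                   default=None)  # None: no target virtual vehicle this moment

    def target_by_nearest(first_tail, second_vehicles, first_distance):
        """Approach 2): take the nearest second virtual vehicle first, then
        check it against the first distance."""
        nearest = min(second_vehicles,
                      key=lambda v: relative_distance(first_tail, v.center),
                      default=None)
        if nearest is None:
            return None
        near_enough = relative_distance(first_tail, nearest.center) <= first_distance
        return nearest if near_enough else None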

In step 504, a first obtuse angle between the target lens direction and the vehicle tail reference line is acquired.

In the embodiment of the present application, the virtual camera for shooting the auxiliary picture is arranged obliquely above the first virtual vehicle and moves along with the first virtual vehicle. The terminal obtains the first obtuse angle between the target lens direction of the virtual camera and the vehicle tail reference line of the first virtual vehicle.

The target lens direction is the direction pointing from the virtual camera to the center point of the target virtual vehicle. The vehicle tail reference line is the straight line on which the tail of the first virtual vehicle lies; it is parallel to the horizontal plane and perpendicular to the line connecting the head and the tail of the first virtual vehicle.

For example, fig. 6 is a schematic diagram of the setting position of the virtual camera for capturing the auxiliary picture according to an embodiment of the present application. As shown in fig. 6, when a first virtual vehicle 621 and a target virtual vehicle 631 exist in the virtual scene, the top view shows that the virtual camera 611 is positioned directly in front of the first virtual vehicle 621. When a first virtual vehicle 622 and a target virtual vehicle 632 exist in the virtual scene, the side view shows that the virtual camera 612 is located in front of and above the first virtual vehicle 622.

Alternatively, the virtual camera may also be located at the upper left of the first virtual vehicle.

For example, fig. 7 is a schematic diagram illustrating the process of determining the obtuse angle between the target lens direction and the vehicle tail reference line according to an embodiment of the present application. As shown in fig. 7, the relative distance 76 between the first virtual vehicle 72 and a second virtual vehicle 73 is determined by connecting the tail of the first virtual vehicle 72 with the center point of the second virtual vehicle 73, and by judging this relative distance the second virtual vehicle is determined to be the target virtual vehicle 73. A straight line parallel to the horizontal plane and perpendicular to the head-tail line of the vehicle is drawn through the tail of the first virtual vehicle 72 and acquired as the vehicle tail reference line 75. A line is then drawn from the virtual camera 71 to the center point of the target virtual vehicle 73, and the pointing direction of this line is taken as the target lens direction 74 of the virtual camera 71. The target lens direction 74 and the vehicle tail reference line 75 intersect to form four included angles: two acute angles of equal size and two obtuse angles of equal size, from which the first obtuse angle 77 is obtained.
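In the top-view plane of fig. 7, this angle can be computed from two vectors. The sketch below assumes a 2-D coordinate representation and a normalized heading vector; the function and its parameter names are illustrative assumptions.

```python
import math

def first_obtuse_angle(camera_pos, target_center, heading):
    """Obtuse angle (degrees) between the target lens direction and the
    vehicle tail reference line, in the top-view plane of fig. 7.

    camera_pos, target_center: (x, y) points on the horizontal plane.
    heading: unit vector pointing from the tail to the head of the
    first virtual vehicle (assumed normalized).
    """
    # Target lens direction: from the virtual camera to the target's center.
    dx = target_center[0] - camera_pos[0]
    dy = target_center[1] - camera_pos[1]
    # The tail reference line is perpendicular to the heading, so rotate
    # the heading by 90 degrees to get the line's direction vector.
    lx, ly = -heading[1], heading[0]
    # The angle between a direction and a line is at most 90 degrees...
    norm = math.hypot(dx, dy) or 1.0
    cos_acute = abs(dx * lx + dy * ly) / norm
    acute = math.degrees(math.acos(min(1.0, cos_acute)))
    # ...and the four angles at the intersection form equal acute/obtuse
    # pairs, so the first obtuse angle is the supplement of the acute one.
    return 180.0 - acute

# A target directly behind lies along the heading, i.e. at 90 degrees to
# the tail reference line; off-axis targets give larger obtuse angles.
print(first_obtuse_angle((0.0, 0.0), (0.0, -10.0), (0.0, 1.0)))  # -> 90.0
```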

In step 505, in response to the first obtuse angle being less than or equal to the first angle, a first lens direction of the virtual camera is determined based on the position of the target virtual vehicle.

In the embodiment of the present application, in response to the first obtuse angle acquired by the terminal being less than or equal to the first angle, the first lens direction of the virtual camera is determined based on the position of the target virtual vehicle, where the first lens direction is the lens direction in which the virtual camera actually shoots the virtual scene.

In one possible implementation, in response to the first obtuse angle acquired by the terminal being less than or equal to the first angle, the first lens direction is determined as the target lens direction.

For example, fig. 8 is a schematic diagram of a lens direction determination process according to an embodiment of the present application. As shown in fig. 8, suppose the first angle 83 is 165 degrees; when the target virtual vehicle moves to the position shown by the dotted line, the obtuse angle formed by the target lens direction and the vehicle tail reference line of the first virtual vehicle 82 is exactly 165 degrees. When the target virtual vehicle 86 is at the position shown in the figure, the target lens direction and the vehicle tail reference line of the first virtual vehicle 82 intersect to form the first obtuse angle 84. Comparing the first obtuse angle 84 with the first angle 83 shows that the first obtuse angle 84 is smaller than the first angle 83, so the first lens direction is determined as the target lens direction.

In step 506, in response to the first obtuse angle being greater than the first angle, a second lens direction of the virtual camera is determined.

In the embodiment of the present application, in response to the first obtuse angle acquired by the terminal being greater than the first angle, the second lens direction of the virtual camera is determined as the lens direction in which the virtual camera actually shoots the virtual scene.

In one possible implementation, the second lens direction points between the target lens direction and the vehicle tail pointing direction, and the second obtuse angle between the second lens direction and the vehicle tail reference line equals the first angle, where the vehicle tail pointing direction is the direction pointing from the head of the first virtual vehicle to its tail.

For example, as shown in fig. 8, suppose the first angle 83 is 165 degrees; when the target virtual vehicle moves to the position shown by the dotted line, the obtuse angle formed by the target lens direction and the vehicle tail reference line of the first virtual vehicle 82 is exactly 165 degrees. When the target virtual vehicle 87 is at the position shown in the figure, the target lens direction and the vehicle tail reference line of the first virtual vehicle 82 intersect to form the first obtuse angle 85. Comparing the first obtuse angle 85 with the first angle 83 shows that the first obtuse angle 85 is greater than the first angle 83, so the actual lens direction is determined as the second lens direction, i.e., the direction whose second obtuse angle with the vehicle tail reference line equals the first angle.
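Steps 505 and 506 together behave like an angle clamp: the lens tracks the target until the obtuse angle would exceed the first angle, then holds at that boundary. A minimal sketch, with the 165-degree value taken from the example above:

```python
def actual_lens_obtuse_angle(first_obtuse, first_angle=165.0):
    """Return the obtuse angle (degrees) that the actual lens direction
    makes with the vehicle tail reference line.

    If the first obtuse angle is within the first angle (step 505), the
    lens follows the target exactly; otherwise (step 506) the lens is
    held so that its obtuse angle equals the first angle.
    """
    return min(first_obtuse, first_angle)

# With the first angle at 165 degrees: a target at 150 degrees is tracked
# directly, while a target at 172 degrees yields a clamped 165 degrees.
assert actual_lens_obtuse_angle(150.0) == 150.0
assert actual_lens_obtuse_angle(172.0) == 165.0
```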

In step 507, in response to the auxiliary picture being displayed in the virtual scene picture, a picture display timer is started.

In the embodiment of the present application, the picture display timer is started at the same time the auxiliary picture begins to be displayed in the virtual scene picture.

The picture display timer is used to record the duration for which the auxiliary picture has been continuously displayed in the virtual scene picture; alternatively, it may record the duration for which the auxiliary picture has been displayed with the same focus.

In one possible implementation, in response to determining that a target virtual vehicle exists among the second virtual vehicles, the display of the auxiliary picture in the virtual scene picture starts, and at the same time the timing function of the picture display timer is started to record the display duration of the auxiliary picture.

The auxiliary picture can be displayed in any area of the virtual scene picture, and its size can be adjusted; on a picture setting interface, the user can customize or select the position of the auxiliary picture on the virtual scene picture and its display size.

In one possible implementation, the picture display timer counts while the target virtual vehicle exists; if the terminal receives feedback that no target virtual vehicle exists at a given moment, the timing function of the picture display timer ends and the timer is reset.

For example, if the terminal determines a target virtual vehicle through calculation at a certain time, the picture display timer is started and begins timing. When the same target virtual vehicle has been continuously determined for 3 seconds, the timer reads 3 seconds. If, 5 seconds later, the target virtual vehicle has overtaken the first virtual vehicle and no other virtual vehicle satisfying the conditions is determined as the target virtual vehicle, the timing function of the picture display timer ends and the timer is reset.
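The timer behavior described above might look as follows; the class, its method names, and the 3-second value (taken from the example) are assumptions rather than part of the disclosure.

```python
import time

class PictureDisplayTimer:
    """Sketch of the picture display timer; names are assumed."""

    def __init__(self):
        self.start_time = None

    def start(self):
        self.start_time = time.monotonic()

    def reset(self):
        # Called when no target exists, or when the target switches (step 508).
        self.start_time = None

    def elapsed(self):
        return 0.0 if self.start_time is None else time.monotonic() - self.start_time

def on_tick(timer, target_exists, first_duration=3.0):
    """Per-frame update: stop timing when the target disappears, and end
    the auxiliary picture once the timer reaches the first duration."""
    if not target_exists:
        timer.reset()
        return "hide"
    if timer.start_time is None:
        timer.start()
    return "hide" if timer.elapsed() >= first_duration else "show"
```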

In one possible implementation, in response to the presentation time of the virtual scene picture being greater than the third duration, the auxiliary picture is presented in the virtual scene picture.

That is, the terminal starts detecting and determining the target virtual vehicle in real time only after the recorded presentation duration of the virtual scene picture exceeds the third duration.

For example, taking a racing game: when the virtual vehicles enter a race, a starting countdown is performed automatically, and the racing mode formally begins when the countdown ends. Within the third duration after the racing mode starts, the auxiliary picture is not displayed, or the calculation and determination of the target virtual vehicle is not performed; once the racing mode has been running for longer than the third duration, the auxiliary picture is displayed based on the target virtual vehicle.

In another possible implementation, in response to the distance traveled by the first virtual vehicle from its starting position to its current position being greater than a specified distance, the auxiliary picture is presented in the virtual scene picture.

Since every virtual vehicle starts moving from the same starting line at the starting position, the target virtual vehicle may change frequently in the starting stage. Both of the above approaches keep the first virtual vehicle from displaying the auxiliary picture near the starting position, which avoids meaningless auxiliary picture display and saves terminal resources.
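The two gates are alternatives; the sketch below expresses each as a predicate, with both threshold values assumed purely for illustration.

```python
def time_gate(scene_presented_seconds, third_duration=3.0):
    """Gate 1: show the auxiliary picture only after the virtual scene
    picture has been presented for longer than the third duration
    (the 3-second value is an assumed placeholder)."""
    return scene_presented_seconds > third_duration

def distance_gate(distance_from_start, specified_distance=50.0):
    """Gate 2: show it only after the first virtual vehicle has traveled
    farther than a specified distance from its starting position
    (the 50-meter value is an assumed placeholder)."""
    return distance_from_start > specified_distance
```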

In one possible implementation, in response to the relative distance between the target virtual vehicle and the first virtual vehicle being less than or equal to the second distance, the lens direction of the virtual camera is maintained, and the auxiliary picture captured by the virtual camera in that lens direction is shown on the virtual scene picture.

When the first virtual vehicle and the target virtual vehicle are very close, that is, when their relative distance reaches the second distance, which is the minimum effective distance of the virtual camera lens, the lens stops following the position of the target virtual vehicle and remains still, while the lens focus remains on the target virtual vehicle. If the relative distance between the target virtual vehicle and the first virtual vehicle is greater than the second distance, the lens direction of the virtual camera continues to follow the target virtual vehicle.
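A minimal sketch of this hold-or-follow behavior, assuming 2-D positions; the function name and the 5-meter stand-in for the second distance are illustrative.

```python
def follow_or_hold(prev_direction, camera_pos, target_center,
                   rel_distance, second_distance=5.0):
    """Keep the lens on the target until it comes within the second
    distance, then hold the lens still (the 5-meter value is an assumed
    placeholder for the lens's minimum effective distance)."""
    if rel_distance <= second_distance:
        return prev_direction  # lens stops moving; focus stays on the target
    dx = target_center[0] - camera_pos[0]
    dy = target_center[1] - camera_pos[1]
    return (dx, dy)  # keep following the target's position
```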

In step 508, in response to the target virtual vehicle being switched from a first target virtual vehicle to a second target virtual vehicle during the auxiliary picture display, the picture display timer is reset.

In the embodiment of the present application, when the auxiliary picture is already displayed in the virtual scene picture and, as the virtual vehicles move, the virtual vehicle behind the first virtual vehicle with the smallest relative distance to it changes, so that the first target virtual vehicle becomes the second target virtual vehicle, the picture display timer needs to be reset and cleared.

Wherein the first target virtual vehicle and the second target virtual vehicle are any two of the at least one second virtual vehicle.

In one possible implementation, after the target virtual vehicle is switched from the first target virtual vehicle to the second target virtual vehicle, the focus of the virtual camera is switched from the first target virtual vehicle to the second target virtual vehicle.

When the target virtual vehicle is switched from the first target virtual vehicle to the second target virtual vehicle, the virtual scene displayed in the auxiliary picture is switched from the picture shot with the first target virtual vehicle as the focus to the picture shot with the second target virtual vehicle as the focus.

For example, taking a racing game, fig. 9 is a schematic diagram of the focus switching corresponding to the auxiliary picture according to an embodiment of the present application. As shown in fig. 9, a first target virtual vehicle 93 and a second target virtual vehicle 94 are behind the first virtual vehicle 91, and the auxiliary picture 92 is displayed on the current virtual scene picture. At the current time the first target virtual vehicle 93 has the smallest relative distance to the first virtual vehicle, so the auxiliary picture 92 is the picture captured with the first target virtual vehicle 93 as the focus. At a later time, the second target virtual vehicle 94 overtakes the first target virtual vehicle 93 and becomes the virtual vehicle with the smallest relative distance to the first virtual vehicle, so the focus of the virtual camera is switched to the second target virtual vehicle for image capturing.
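Step 508 can be seen as an event handler that switches the focus and clears the timer together. A sketch, reusing the PictureDisplayTimer outlined earlier; `camera.focus` and all other names here are assumptions, not the disclosed API.

```python
from types import SimpleNamespace

def on_target_changed(camera, timer, new_target):
    """Switch the camera focus to the new nearest vehicle behind and
    restart the continuous-display timing (step 508)."""
    if camera.focus is not new_target:
        camera.focus = new_target  # auxiliary picture now tracks the new target
        timer.reset()              # clear the continuous-display count...
        timer.start()              # ...and restart it for the new focus

# Usage with a placeholder camera object:
camera = SimpleNamespace(focus="vehicle_93")
# on_target_changed(camera, timer, "vehicle_94")  # timer from the earlier sketch
```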

In one possible implementation, a line pattern indicating a sprint effect is added to the auxiliary picture.

For example, as shown in fig. 9, the line pattern 95 of the sprint effect is located at the edge of the auxiliary picture 92. Adding the line pattern 95 heightens the user's sense of tension, thereby improving the user's operation experience.

In step 509, in response to the duration recorded by the picture display timer reaching the first duration, the display of the auxiliary picture ends.

In the embodiment of the present application, when the duration recorded by the picture display timer, as acquired by the terminal, reaches the first duration, the display of the auxiliary picture on the virtual scene picture ends.

That is, the picture display timer is reset whenever the target virtual vehicle is switched from the first target virtual vehicle to the second target virtual vehicle during the auxiliary picture display, and when the duration for which the focus of the virtual camera has been continuously kept on the same virtual vehicle reaches the first duration, the display of the auxiliary picture on the virtual scene picture ends.

With this scheme, the virtual vehicle serving as the focus can be adjusted in real time while the auxiliary picture is displayed, and the display remains continuous and smooth. This helps the user obtain effective position information about the virtual vehicles behind through the auxiliary picture during operation.

In step 510, in response to the duration for which the first obtuse angle is greater than the first angle reaching a second duration, the display of the auxiliary picture ends.

In the embodiment of the present application, when the duration for which the first obtuse angle between the target lens direction and the vehicle tail reference line is greater than the first angle reaches the second duration, the display of the auxiliary picture can end.

In one possible implementation, when the lens direction of the virtual camera has rotated to its maximum angle, the target virtual vehicle is only partially within the auxiliary picture or not within it at all. Therefore, to keep the auxiliary picture showing meaningful content as much as possible, the display of the auxiliary picture ends when the lens direction has continuously remained at the maximum angle for the second duration.

The second duration may be shorter than the first duration; that is, ending the auxiliary picture in this manner can terminate its display earlier than the scheme shown in step 509.

For example, fig. 10 is a schematic diagram of the auxiliary picture when the first obtuse angle between the target lens direction and the vehicle tail reference line is greater than the first angle, according to an embodiment of the present application. As shown in fig. 10, the target virtual vehicle behind the first virtual vehicle is in an overtaking state, and its corresponding first obtuse angle is greater than the first angle, so the displayed auxiliary picture 1001 does not include the target virtual vehicle, only the picture at the edge of the track. Since displaying the track edge in the auxiliary picture 1001 brings no actual gain to the user's operation, keeping the auxiliary picture 1001 until its display duration reaches the first duration would waste terminal resources.
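Steps 509 and 510 can be merged into a single end-of-display predicate. A sketch; both duration values are assumed placeholders, with the second shorter than the first, per the text above.

```python
def should_end_auxiliary(display_elapsed, obtuse_exceeded_elapsed,
                         first_duration=3.0, second_duration=1.0):
    """End-of-display predicate combining steps 509 and 510.

    display_elapsed: seconds the same focus has been continuously shown.
    obtuse_exceeded_elapsed: seconds the first obtuse angle has stayed
    above the first angle.  The shorter second duration dismisses an
    auxiliary picture that no longer contains the target early.
    """
    return (display_elapsed >= first_duration
            or obtuse_exceeded_elapsed >= second_duration)
```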

In summary, in the scheme shown in this application, the target virtual vehicle is determined by detecting the relative distance between the first virtual vehicle and the second virtual vehicles in real time; the virtual camera shoots the virtual scene with the target virtual vehicle as the focus, and the captured auxiliary picture is displayed. Because the relative distance between a second virtual vehicle and the first virtual vehicle may change frequently, this scheme can flexibly determine the target virtual vehicle at each moment and display the auxiliary picture focused on it. The auxiliary picture therefore shows effective content as much as possible, the efficiency with which the auxiliary picture conveys information useful to the user's operation is improved, the user can observe effective picture content behind the vehicle while normally observing the picture in front of the virtual vehicle, and the interaction efficiency of the user in controlling the virtual vehicle is improved.

Taking the virtual scene of a racing game as an example, please refer to fig. 11, which shows a logic flow chart of a method for displaying a picture in a virtual scene according to an exemplary embodiment of the present application. As shown in fig. 11, the logic flow may include the following steps:

The terminal detects whether other virtual vehicles exist within the trigger range of the first virtual vehicle, where the trigger range may be the range behind the first virtual vehicle within which the relative distance is smaller than the first distance. When another virtual vehicle is detected within the trigger range, the current state of the first virtual vehicle is judged (S1101). If the first virtual vehicle is currently in the state of having just rushed out of the starting point (S1102), the rearview mirror function, i.e., the function of displaying the auxiliary picture, is not triggered (S1103). If the first virtual vehicle is not currently in that state (S1104), the rearview mirror function is triggered (S1105). Then, if real-time detection determines that the target virtual vehicle remains within the trigger range of the first virtual vehicle (S1106), the virtual camera tracks and shoots the picture of the target virtual vehicle (S1107). If the target virtual vehicle leaves the trigger range during shooting (S1108), the virtual camera is controlled to stop tracking and shooting the target virtual vehicle (S1109); if the target virtual vehicle returns to the trigger range within a predetermined time, for example 3 seconds (S1110), the virtual camera continues to track and shoot the target virtual vehicle (S1111). If the display duration of the auxiliary picture has reached the specified maximum display duration, which may be, for example, 3 seconds, the rearview mirror function is turned off and the display of the auxiliary picture ends (S1112).
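The whole fig. 11 flow reduces to a small per-frame state machine. The following is a sketch under assumed names; the field names, state strings, and the 3-second values (taken from the examples above) are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class MirrorContext:
    """Per-frame inputs to the fig. 11 flow (field names are assumed)."""
    just_left_start: bool          # vehicle has only just rushed out of the start
    target_in_trigger_range: bool  # a target exists within the trigger range
    out_of_range_seconds: float    # how long the target has been outside the range
    display_seconds: float         # how long the auxiliary picture has been shown
    return_window: float = 3.0     # e.g. 3 seconds (S1110)
    max_display: float = 3.0       # e.g. 3 seconds (S1112)

def rearview_tick(ctx: MirrorContext) -> str:
    """One evaluation of the rearview mirror logic; returns the mirror state."""
    if ctx.just_left_start:                             # S1102 -> S1103
        return "off"
    if ctx.display_seconds >= ctx.max_display:          # S1112
        return "off"
    if ctx.target_in_trigger_range:                     # S1106 -> S1107 / S1111
        return "tracking"
    if ctx.out_of_range_seconds <= ctx.return_window:   # S1108 -> S1109
        return "paused"                                 # awaiting return (S1110)
    return "off"
```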


Fig. 12 is a block diagram illustrating a configuration of a picture presentation apparatus in a virtual scene according to an exemplary embodiment. The picture showing device in the virtual scene can be used in a computer device to execute all or part of the steps in the method shown in the corresponding embodiment of fig. 3 or fig. 5. The picture showing device in the virtual scene can comprise:

a main picture display module 1210, configured to display a virtual scene picture, where the virtual scene picture includes a first virtual vehicle;

a target determination module 1220 for determining a target virtual vehicle from among the at least one second virtual vehicle based on a relative distance between the first virtual vehicle and the second virtual vehicle; the second virtual vehicle is a virtual vehicle located behind the first virtual vehicle;

an auxiliary picture display module 1230, configured to display an auxiliary picture in the virtual scene picture; the auxiliary picture is a picture which takes the target virtual vehicle as a focus and is shot by a virtual camera arranged corresponding to the first virtual vehicle.

In one possible implementation, the target determination module 1220 includes:

a candidate obtaining submodule, configured to obtain candidate virtual vehicles; a candidate virtual vehicle is a second virtual vehicle whose relative distance to the first virtual vehicle is less than or equal to a first distance;

and a first target determination submodule, configured to determine the target virtual vehicle from the candidate virtual vehicles.

In one possible implementation, the first target determination submodule includes:

a target determination unit, configured to determine the candidate virtual vehicle with the smallest relative distance to the first virtual vehicle as the target virtual vehicle.

In one possible implementation, the target determination module 1220 includes:

a first obtaining sub-module configured to obtain the second virtual vehicle whose relative distance to the first virtual vehicle is smallest;

a second target determination submodule to determine the second virtual vehicle as the target virtual vehicle in response to the relative distance between the second virtual vehicle and the first virtual vehicle being less than or equal to a first distance.

In one possible implementation, the apparatus further includes:

a distance acquisition module to acquire a relative distance between the first virtual vehicle and at least one second virtual vehicle before determining a target virtual vehicle from the second virtual vehicles based on the relative distance between the first virtual vehicle and the second virtual vehicle.

In a possible implementation manner, the distance obtaining module includes:

and the distance acquisition submodule is used for determining the length of a connecting line between the tail of the first virtual vehicle and the central point of the second virtual vehicle as the relative distance.

In one possible implementation, the apparatus further includes:

a timing module, configured to start the picture display timer in response to the auxiliary picture being displayed in the virtual scene picture; the picture display timer is used to record the continuous display duration of the auxiliary picture in the virtual scene picture;

and a first picture ending module, configured to end the display of the auxiliary picture in response to the duration recorded by the picture display timer reaching a first duration.

In one possible implementation, the apparatus further includes:

the timing resetting module is used for responding to that the time length corresponding to the picture display timing reaches a first time length and resetting the picture display timer in response to that the target virtual vehicle is switched from a first target virtual vehicle to a second target virtual vehicle in the auxiliary picture display process before the auxiliary picture is displayed; the first target virtual vehicle and the second target virtual vehicle are any two of the at least one second virtual vehicle.

In one possible implementation, the virtual camera is located diagonally above the first virtual vehicle, and the virtual camera moves with the first virtual vehicle;

the device further comprises:

an obtuse angle acquisition module, configured to acquire a first obtuse angle between the target lens direction and the vehicle tail reference line before the auxiliary picture is displayed in the virtual scene picture; the target lens direction is the direction pointing from the virtual camera to the center point of the target virtual vehicle; the vehicle tail reference line is the straight line on which the tail of the first virtual vehicle lies, parallel to the horizontal plane and perpendicular to the line connecting the head and the tail of the first virtual vehicle;

a first direction determination module, configured to determine a first lens direction of the virtual camera based on a position of the target virtual vehicle at a current time in response to the first obtuse angle being less than or equal to a first angle; the first lens direction is the target lens direction.

In one possible implementation, the apparatus further includes:

a second direction determination module, configured to determine a second lens direction of the virtual camera in response to the first obtuse angle being greater than the first angle; the second lens direction points between the target lens direction and the vehicle tail pointing direction, and a second obtuse angle between the second lens direction and the vehicle tail reference line is the first angle; the vehicle tail pointing direction is the direction pointing from the head of the first virtual vehicle to its tail.

In one possible implementation, the apparatus further includes:

a second picture ending module, configured to end the display of the auxiliary picture in response to the duration for which the first obtuse angle between the target lens direction and the vehicle tail reference line is greater than the first angle reaching a second duration.

In one possible implementation, the auxiliary screen displaying module 1230 includes:

a direction determination submodule for maintaining a lens direction of the virtual camera in response to the relative distance between the target virtual vehicle and the first virtual vehicle being less than or equal to a second distance;

and the picture shooting submodule is used for displaying the auxiliary picture shot by the virtual camera in the lens direction on the virtual scene picture.

In one possible implementation, the auxiliary screen displaying module 1230 includes:

an auxiliary picture display submodule, configured to display the auxiliary picture in the virtual scene picture in response to the presentation time of the virtual scene picture being greater than a third duration.


Fig. 13 is a block diagram illustrating a configuration of a picture presentation apparatus in a virtual scene according to an exemplary embodiment. The device for displaying pictures in a virtual scene can be used in a terminal to execute all or part of the steps executed by the terminal in the method shown in the corresponding embodiment of fig. 4 or fig. 5. The picture showing device in the virtual scene can comprise:

a main picture display module 1310, configured to display a virtual scene picture, where the virtual scene picture includes a first virtual vehicle;

a first auxiliary picture display module 1320, configured to display a first auxiliary picture in the virtual scene picture; the first auxiliary picture is a picture captured, with a first target virtual vehicle as the focus, by a virtual camera set corresponding to the first virtual vehicle; the first target virtual vehicle is the virtual vehicle with the smallest relative distance to the first virtual vehicle, that relative distance being less than or equal to a first distance;

and a second auxiliary picture display module 1330, configured to display a second auxiliary picture in the virtual scene picture in response to the virtual vehicle with the smallest relative distance to the first virtual vehicle, that distance being less than or equal to the first distance, changing to a second target virtual vehicle; the second auxiliary picture is a picture captured, with the second target virtual vehicle as the focus, by the virtual camera set corresponding to the first virtual vehicle.


FIG. 14 is a block diagram illustrating the structure of a computer device 1400 according to an exemplary embodiment. The computer device 1400 may be a user terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. The computer device 1400 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.

Generally, computer device 1400 includes: a processor 1401, and a memory 1402.

The processor 1401 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1401 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 1401 may also include a main processor and a coprocessor: the main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.

The memory 1402 may include one or more computer-readable storage media, which may be non-transitory. The memory 1402 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1402 is used to store at least one instruction, which is executed by the processor 1401 to implement all or part of the steps of the methods provided by the method embodiments herein.

In some embodiments, computer device 1400 may also optionally include: a peripheral device interface 1403 and at least one peripheral device. The processor 1401, the memory 1402, and the peripheral device interface 1403 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral device interface 1403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1404, a display 1405, a camera assembly 1406, audio circuitry 1407, a positioning assembly 1408, and a power supply 1409.

The peripheral device interface 1403 can be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1401 and the memory 1402. In some embodiments, the processor 1401, memory 1402, and peripheral interface 1403 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1401, the memory 1402, and the peripheral device interface 1403 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.

The radio frequency circuit 1404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1404 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1404 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1404 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1404 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.

The display screen 1405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1405 is a touch display screen, the display screen 1405 also has the ability to capture touch signals at or above its surface. The touch signal may be input to the processor 1401 for processing as a control signal. At this point, the display screen 1405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1405, providing the front panel of the computer device 1400; in other embodiments, there may be at least two display screens 1405, respectively disposed on different surfaces of the computer device 1400 or in a folded design; in still other embodiments, the display screen 1405 may be a flexible display disposed on a curved or folded surface of the computer device 1400. Furthermore, the display screen 1405 may be arranged in a non-rectangular irregular shape, i.e., a shaped screen. The display screen 1405 can be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).

The camera assembly 1406 is used to capture images or video. Optionally, camera assembly 1406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.

The audio circuit 1407 may include a microphone and a speaker. The microphone is used for collecting sound waves from the user and the environment, converting the sound waves into electrical signals, and inputting them to the processor 1401 for processing or to the radio frequency circuit 1404 to realize voice communication. For stereo capture or noise reduction purposes, there may be multiple microphones located at different positions on the computer device 1400. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1401 or the radio frequency circuit 1404 into sound waves, and may be a conventional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1407 may also include a headphone jack.

The positioning component 1408 is used to locate the current geographic location of the computer device 1400 for navigation or LBS (Location Based Service). The positioning component 1408 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS (Global Navigation Satellite System) of Russia, or the Galileo system of Europe.

The power supply 1409 is used to power the various components of the computer device 1400. The power supply 1409 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1409 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery, charged through a wired line, or a wireless rechargeable battery, charged through a wireless coil. The rechargeable battery may also be used to support fast-charge technology.

In some embodiments, computer device 1400 also includes one or more sensors 1410. The one or more sensors 1410 include, but are not limited to: acceleration sensor 1411, gyroscope sensor 1412, pressure sensor 1413, fingerprint sensor 1414, optical sensor 1415, and proximity sensor 1416.

The acceleration sensor 1411 may detect the magnitude of acceleration on three coordinate axes of a coordinate system established with the computer apparatus 1400. For example, the acceleration sensor 1411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1401 can control the touch display 1405 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1411. The acceleration sensor 1411 may also be used for the acquisition of motion data of a game or a user.

The gyro sensor 1412 may detect a body direction and a rotation angle of the computer device 1400, and the gyro sensor 1412 may cooperate with the acceleration sensor 1411 to collect a 3D motion of the user on the computer device 1400. The processor 1401 can realize the following functions according to the data collected by the gyro sensor 1412: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.

The pressure sensors 1413 may be disposed on the side bezel of the computer device 1400 and/or underneath the touch display 1405. When the pressure sensor 1413 is disposed on the side frame of the computer device 1400, the user's holding signal to the computer device 1400 can be detected, and the processor 1401 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1413. When the pressure sensor 1413 is disposed at the lower layer of the touch display 1405, the processor 1401 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 1405. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.

The fingerprint sensor 1414 is used for collecting a fingerprint of a user, and the processor 1401 identifies the user according to the fingerprint collected by the fingerprint sensor 1414, or the fingerprint sensor 1414 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, processor 1401 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for, and changing settings, etc. The fingerprint sensor 1414 may be disposed on the front, back, or side of the computer device 1400. When a physical key or vendor Logo is provided on the computer device 1400, the fingerprint sensor 1414 may be integrated with the physical key or vendor Logo.

The optical sensor 1415 is used to collect ambient light intensity. In one embodiment, processor 1401 can control the display brightness of touch display 1405 based on the ambient light intensity collected by optical sensor 1415. Specifically, when the ambient light intensity is high, the display luminance of the touch display 1405 is increased; when the ambient light intensity is low, the display brightness of the touch display 1405 is turned down. In another embodiment, the processor 1401 can also dynamically adjust the shooting parameters of the camera assembly 1406 according to the intensity of the ambient light collected by the optical sensor 1415.

A proximity sensor 1416, also known as a distance sensor, is typically provided on the front panel of the computer device 1400. The proximity sensor 1416 is used to capture the distance between the user and the front of the computer device 1400. In one embodiment, when the proximity sensor 1416 detects that the distance between the user and the front of the computer device 1400 is gradually decreasing, the processor 1401 controls the touch display 1405 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 1416 detects that the distance between the user and the front of the computer device 1400 is gradually increasing, the processor 1401 controls the touch display 1405 to switch from the dark-screen state to the bright-screen state.

Those skilled in the art will appreciate that the architecture shown in FIG. 14 is not intended to be limiting of the computer device 1400, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.

In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as a memory including at least one instruction, at least one program, a set of codes, or a set of instructions, executable by a processor to perform all or part of the steps of the method shown in the embodiment corresponding to fig. 3, fig. 4, or fig. 5. For example, the non-transitory computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.

According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the terminal executes the method for displaying the picture in the virtual scene provided in the various optional implementation modes of the above aspects.

Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.

It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
