Method, device, equipment and medium for judging virtual surface in virtual world

Document No.: 1452584 · Publication date: 2020-02-21

Reading note: This technology, "虚拟世界中的虚拟表面判断方法、装置、设备及介质" (Method, device, equipment and medium for judging a virtual surface in a virtual world), was designed and created by 黄晓权 on 2019-11-08. Its main content: the application discloses a method, a device, equipment and a medium for judging a virtual surface in a virtual world, relating to the field of virtual worlds. The method includes: displaying a virtual world picture, the virtual world picture being a picture obtained by observing the virtual world from the view angle of a main control virtual character, the virtual world including a virtual surface for the main control virtual character to move on, the virtual surface including at least one of a plane, an inclined plane and a curved surface; selecting a target area on the virtual surface, the target area being a surface area on the virtual surface related to a target event; and when the target area is a gentle area, executing the target event, the flatness of the gentle area being higher than a target condition. This solves the problem in the related art that whether a virtual surface is suitable for placing a virtual prop cannot be judged from only a position point and the normal vector of the plane where the position point is located.

1. A method for judging a virtual surface in a virtual world is applied to a terminal, wherein an application program supporting the virtual world runs in the terminal, and the method comprises the following steps:

displaying a virtual world picture, wherein the virtual world picture is a picture obtained by observing the virtual world from the view angle of a main control virtual character, the virtual world comprises a virtual surface for the main control virtual character to move on, and the virtual surface comprises at least one of a plane, an inclined plane and a curved surface;

selecting a target area on the virtual surface, the target area being a surface area on the virtual surface that is associated with a target event;

when the target area is a gentle area, the target event is executed, wherein the flatness of the gentle area is higher than a target condition.

2. The method of claim 1, further comprising:

acquiring any two points on the target area, and determining a feature vector of the target area according to the any two points, wherein the feature vector is used for representing the flatness of the target area;

and when the included angle between the feature vector and the horizontal plane is smaller than a first threshold value, determining the target area as the gentle area.

3. The method of claim 2, wherein the obtaining any two points on the target area and determining the feature vector of the target area according to the any two points comprises:

acquiring a first position point on the target area, wherein the first position point is any point on the target area;

emitting a first grid ray to the target area from a second position point directly above the first position point, wherein the first grid ray comprises at least two parallel rays whose starting points are located around the second position point and which are perpendicular to the horizontal plane;

acquiring a first impact result of the first grid ray and the target area, wherein the first impact result comprises at least two impact points;

determining the highest impact point in the first impact result as a first impact point, and determining the lowest impact point in the first impact result as a second impact point;

and determining a feature vector of the target area according to the first impact point and the second impact point.

4. The method according to claim 2, wherein the determining the target area as the gentle area when the included angle between the feature vector and the horizontal plane is smaller than a first threshold value comprises:

and when the included angle between the feature vector and the horizontal plane is smaller than a first threshold value and the height difference of the target area is smaller than a second threshold value, determining the target area as the gentle area, wherein the height difference is used for representing the fluctuation degree of the target area.

5. The method of claim 4, further comprising:

acquiring a first position point on the target area, wherein the first position point is any point on the target area;

emitting a second grid ray to the target area from a third position point that is above the first position point along a direction perpendicular to the feature vector, wherein the connecting line between the third position point and the first position point is perpendicular to the feature vector, and the second grid ray comprises at least two parallel rays whose starting points are located around the third position point and which are emitted toward the target area;

acquiring a second impact result of the second grid rays and the target area, wherein the second impact result comprises at least two impact points;

determining the highest impact point in the second impact result as a third impact point, and determining the lowest impact point in the second impact result as a fourth impact point;

and calculating the height difference of the target area according to the third impact point and the fourth impact point.

6. The method of any one of claims 1 to 5, wherein the target event is the placement of a virtual prop on the target area;

when the target area is a gentle area, executing the target event, including:

when the target area is a gentle area, placing the virtual prop on the target area.

7. The method according to any one of claims 1 to 5, wherein the performing the target event when the target area is a gentle area includes:

the target event is performed when the target area is a gentle area and the target area is an open area, the open area being a surface area of the virtual surface that is not blocked by an obstacle above it.

8. The method of claim 7, further comprising:

acquiring a first position point on the target area, wherein the first position point is any point on the target area;

emitting a first ray upward from the first location point in a direction perpendicular to a horizontal plane;

determining that the target area is the open area when the first ray does not acquire an impact result.

9. The method of any one of claims 1 to 8, wherein the target event is the placement of a virtual prop on the target area;

when the target area is a gentle area, executing the target event, including:

when the target area is a gentle area and the target area is an open area, placing the virtual prop on the target area, wherein the open area is a surface area of the virtual surface that is not blocked by an obstacle above it.

10. The method of any one of claims 1 to 9, wherein the selecting a target area on the virtual surface comprises:

and selecting the target area on the virtual surface according to the visual angle focus of the main control virtual character.

11. The method of claim 10, wherein the selecting the target area on the virtual surface according to the perspective focus of the master virtual character comprises:

acquiring the position and the shooting direction of a camera, wherein the camera is a camera corresponding to the visual angle of the main control virtual character;

emitting a second ray from the position of the camera in the shooting direction;

acquiring a target intersection point of the second ray and the virtual surface;

determining the target intersection point as the first position point;

determining the virtual surface in the vicinity of the first position point as the target area.

12. The method of claim 3 or 5, wherein each of the at least two parallel rays is a spherical ray, and the spherical ray is the path along which a sphere is emitted in one direction from a starting point.

13. An apparatus for determining a virtual surface in a virtual world, the apparatus being applied to a terminal in which an application program supporting the virtual world runs, the apparatus comprising:

a display module, configured to display a virtual world picture, wherein the virtual world picture is a picture obtained by observing the virtual world from the view angle of a main control virtual character, the virtual world comprises a virtual surface for the main control virtual character to move on, and the virtual surface comprises at least one of a plane, an inclined plane and a curved surface;

a selection module for selecting a target area on the virtual surface, the target area being a surface area on the virtual surface related to a target event;

and an execution module, configured to execute the target event when the target area is a gentle area, wherein the flatness of the gentle area is higher than a target condition.

14. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the method of virtual surface determination in a virtual world according to any one of claims 1 to 12.

15. A computer-readable storage medium, having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method for determining a virtual surface in a virtual world according to any one of claims 1 to 12.

Technical Field

The embodiment of the application relates to the field of virtual worlds, in particular to a method, a device, equipment and a medium for judging virtual surfaces in a virtual world.

Background

In the application program of the virtual world, the virtual world includes a virtual model. Virtual models are used to simulate the real world, e.g., automobile models, mountain models, house models, etc. There is a virtual surface on the outer surface of the virtual model, for example, the outer surface of the stone model has a plane, an inclined plane, a curved surface, etc.

Disclosure of Invention

The embodiments of the application provide a method, a device, equipment and a medium for judging a virtual surface in a virtual world, which can solve the problem in the related art that whether a virtual surface is suitable for placing a virtual prop cannot be judged from only a position point and the normal vector of the plane where the position point is located. The technical solution is as follows:

in one aspect, a method for determining a virtual surface in a virtual world is provided, where the method is applied in a terminal, and an application program supporting the virtual world runs in the terminal, and the method includes:

displaying a virtual world picture, wherein the virtual world picture is a picture obtained by observing the virtual world from the view angle of a main control virtual character, the virtual world comprises a virtual surface for the main control virtual character to move on, and the virtual surface comprises at least one of a plane, an inclined plane and a curved surface;

selecting a target area on the virtual surface, the target area being a surface area on the virtual surface that is associated with a target event;

when the target area is a gentle area, the target event is executed, wherein the flatness of the gentle area is higher than a target condition.

In another aspect, there is provided an apparatus for determining a virtual surface in a virtual world, the apparatus being applied to a terminal in which an application program supporting the virtual world runs, the apparatus including:

a display module, configured to display a virtual world picture, wherein the virtual world picture is a picture obtained by observing the virtual world from the view angle of a main control virtual character, the virtual world comprises a virtual surface for the main control virtual character to move on, and the virtual surface comprises at least one of a plane, an inclined plane and a curved surface;

a selection module for selecting a target area on the virtual surface, the target area being a surface area on the virtual surface related to a target event;

and an execution module, configured to execute the target event when the target area is a gentle area, wherein the flatness of the gentle area is higher than a target condition.

In another aspect, there is provided a computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement the method for determining a virtual surface in a virtual world as described above.

In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the method for determining a virtual surface in a virtual world as described above.

The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:

Whether the target area is suitable for executing the target event is determined by obtaining a target area on the virtual surface and judging whether the target area is a gentle area. Whether the target area meets the condition of the target event is determined by judging the whole surface area related to the target event as a whole. This solves the problem in the related art that, when only the normal vector of the target area is obtained from a single point on the target area and the virtual prop is placed directly according to that point and normal vector, it cannot be judged whether the virtual surface is suitable for placing the virtual prop.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.

Fig. 1 is a schematic structural diagram of a terminal provided in an exemplary embodiment of the present application;

FIG. 2 is a block diagram of a computer system provided in an exemplary embodiment of the present application;

FIG. 3 is a flowchart of a method for determining a virtual surface in a virtual world according to an exemplary embodiment of the present application;

FIG. 4 is a flowchart of a method for determining a virtual surface in a virtual world according to another exemplary embodiment of the present application;

fig. 5 is a schematic view of a virtual world screen of a method for determining a virtual surface in a virtual world according to another exemplary embodiment of the present application;

FIG. 6 is a schematic view of a camera model corresponding to a perspective of a master virtual object provided by another exemplary embodiment of the present application;

FIG. 7 is a flowchart of a method for determining a virtual surface in a virtual world according to another exemplary embodiment of the present application;

FIG. 8 is a flowchart of a method for determining a virtual surface in a virtual world, according to another exemplary embodiment of the present application;

FIG. 9 is a schematic diagram of acquiring a first location point in a method for determining a virtual surface in a virtual world according to another exemplary embodiment of the present application;

FIG. 10 is a schematic view of a spherical ray provided by another exemplary embodiment of the present application;

FIG. 11 is a flowchart of a method for determining a virtual surface in a virtual world according to another exemplary embodiment of the present application;

FIG. 12 is a block ray diagram provided by another exemplary embodiment of the present application;

FIG. 13 is a block ray diagram provided by another exemplary embodiment of the present application;

FIG. 14 is a first grid ray diagram illustrating a method for determining a virtual surface in a virtual world according to another exemplary embodiment of the present application;

FIG. 15 is a schematic normal vector diagram of a method for determining a virtual surface in a virtual world according to another exemplary embodiment of the present application;

FIG. 16 is a flowchart of a method for determining a virtual surface in a virtual world, according to another exemplary embodiment of the present application;

FIG. 17 is a flowchart of a method for determining a virtual surface in a virtual world, according to another exemplary embodiment of the present application;

FIG. 18 is a second grid ray diagram illustrating a method for determining a virtual surface in a virtual world according to another exemplary embodiment of the present application;

fig. 19 is a schematic view of a virtual world screen of a method for determining a virtual surface in a virtual world according to another exemplary embodiment of the present application;

FIG. 20 is a flowchart of a method for determining a virtual surface in a virtual world, according to another exemplary embodiment of the present application;

FIG. 21 is a flowchart of a method for determining a virtual surface in a virtual world, according to another exemplary embodiment of the present application;

fig. 22 is a schematic view of a virtual world screen of a method for determining a virtual surface in a virtual world according to another exemplary embodiment of the present application;

FIG. 23 is a flowchart of a method for determining a virtual surface in a virtual world according to another exemplary embodiment of the present application;

fig. 24 is a schematic view of a virtual world screen of a method for determining a virtual surface in a virtual world according to another exemplary embodiment of the present application;

FIG. 25 is a flowchart of a method for determining a virtual surface in a virtual world, according to another exemplary embodiment of the present application;

fig. 26 is a block diagram of a virtual surface determination apparatus in a virtual world according to another exemplary embodiment of the present application;

fig. 27 is a block diagram of a terminal provided in an exemplary embodiment of the present application.

Detailed Description

To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.

First, terms referred to in the embodiments of the present application are briefly described:

virtual world: is a virtual world that is displayed (or provided) when an application program runs on a terminal. The virtual world may be a simulated world of a real world, a semi-simulated semi-fictional world, or a purely fictional world. The virtual world may be any one of a two-dimensional virtual world, a 2.5-dimensional virtual world, and a three-dimensional virtual world, which is not limited in this application. The following embodiments are exemplified in the case where the virtual world is a three-dimensional virtual world.

Virtual model: is a model in a virtual world that mimics the real world. Illustratively, the virtual model occupies a certain volume in the virtual world. Illustratively, the virtual model includes: a terrain model, a building model, an animal and plant model, a virtual prop model, a virtual vehicle model, and a virtual character model. For example, the terrain model includes: ground, mountains, water flows, stones, steps, and the like; the building model includes: houses, enclosing walls, containers, and fixed facilities inside buildings such as tables, chairs, cabinets, and beds; the animal and plant model includes: trees, flowers, plants, birds, and the like; the virtual prop model includes: firearms, medicine boxes, airdrops, and the like; the virtual vehicle model includes: automobiles, boats, helicopters, and the like; the virtual character model includes: humans, animals, cartoon characters, and the like.

Virtual surface: is the outer surface of the virtual model. The virtual surface is the outer surface of all virtual models in the virtual world. Illustratively, the virtual surface includes at least one of a plane, a slope, and a curved surface.

Virtual character: refers to a movable object in the virtual world. The movable object may be a virtual person, a virtual animal, an anime character, and the like, such as characters, animals, plants, oil drums, walls, and stones displayed in the three-dimensional virtual world. Optionally, the virtual character is a three-dimensional volumetric model created based on animation skeleton techniques. Each virtual character has its own shape and volume in the three-dimensional virtual world and occupies a part of the space in the three-dimensional virtual world.

User Interface (UI) controls: any visual control or element that can be seen on the user interface of the application, such as pictures, input boxes, text boxes, buttons, and labels. Some of the UI controls respond to user operations; for example, a movement control is used to control the virtual character to move in the virtual world: the user triggers the movement control to control the virtual character to move forward, backward, left and right, climb, swim, jump, and the like. The UI controls referred to in the embodiments of the present application include, but are not limited to: a virtual prop placement control.

The method provided by the application can be applied to an application program supporting a virtual world. Illustratively, an application that supports the virtual world is one in which a user can control the movement of a virtual character within the virtual world. By way of example, the methods provided herein may be applied to any one of: a virtual reality application program, an Augmented Reality (AR) program, a three-dimensional map program, a military simulation program, a virtual reality game, an augmented reality game, a First-Person Shooter game (FPS), a Third-Person Shooter game (TPS), and a Multiplayer Online Battle Arena game (MOBA).

Illustratively, a game in the virtual world is composed of one or more maps of the game world. The virtual world in the game simulates scenes of the real world, and the user can control a virtual character in the game to walk, run, jump, shoot, fight, drive, and attack other virtual characters with virtual weapons in the virtual world. The interactivity is strong, and multiple users can form teams online for competitive play.

In some embodiments, the application may be a shooting game, a racing game, a role-playing game, an adventure game, a sandbox game, a tactical competition game, a military simulation program, or the like. The client can support at least one of the Windows, macOS, Android, iOS, and Linux operating systems, and clients on different operating systems can interconnect and intercommunicate. In some embodiments, the client is a program adapted to a mobile terminal having a touch screen.

In some embodiments, the client is an application developed based on a three-dimensional engine, such as the three-dimensional engine being a Unity engine.

The terminal in the present application may be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and so on. The terminal is installed with and runs an application program supporting a virtual world, such as an application program supporting a three-dimensional virtual world. The application program may be any one of a Battle Royale (BR) game, a virtual reality application program, an augmented reality program, a three-dimensional map program, a military simulation program, a third-person shooter game, a first-person shooter game, and a multiplayer online battle arena game. Alternatively, the application may be a stand-alone application, such as a stand-alone 3D game program, or a network online application.

Fig. 1 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application. As shown in fig. 1, the terminal includes a processor 21, a touch screen 22, and a memory 23.

The processor 21 may be at least one of a single-core processor, a multi-core processor, an embedded chip, and a processor having instruction execution capability.

The touch screen 22 includes a general touch screen or a pressure sensitive touch screen. The general touch screen can measure a pressing operation or a sliding operation applied to the touch screen 22; the pressure sensitive touch screen can measure the degree of pressure applied to the touch screen 22.

The memory 23 stores executable programs for the processor 21. Illustratively, the memory 23 stores a virtual world program A, an application program B, an application program C, a touch and pressure sensing module 18, and a kernel layer 19 of the operating system. The virtual world program A is an application program developed based on the three-dimensional virtual engine 17. Optionally, the virtual world program A includes, but is not limited to, at least one of a game program, a virtual reality program, a three-dimensional map program, and a three-dimensional presentation program developed by the three-dimensional virtual engine (also referred to as a virtual world engine) 17. For example, when the operating system of the terminal is the Android operating system, the virtual world program A is developed using the Java programming language and the C# language; for another example, when the operating system of the terminal is the iOS operating system, the virtual world program A is developed using the Objective-C programming language and the C# language.

The three-dimensional Virtual engine 17 is a three-dimensional interactive engine supporting multiple operating system platforms, and illustratively, the three-dimensional Virtual engine may be used for program development in multiple fields, such as a game development field, a Virtual Reality (VR) field, and a three-dimensional map field, and the specific type of the three-dimensional Virtual engine 17 is not limited in the embodiment of the present application, and the following embodiment exemplifies that the three-dimensional Virtual engine 17 is a Unity engine.

The touch and pressure sensing module 18 is a module for receiving touch events (and pressure touch events) reported by the touch screen driver 191; optionally, the touch sensing module may lack the pressure sensing function and not receive pressure touch events. A touch event includes the type of the touch event and coordinate values; the types include, but are not limited to: a touch start event, a touch move event, and a touch end event. A pressure touch event includes a pressure value and coordinate values. The coordinate values indicate the touch position of the press operation on the display screen. Optionally, an abscissa axis is established along the horizontal direction of the display screen and an ordinate axis along the vertically upward direction of the display screen, yielding a two-dimensional coordinate system.
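As an illustrative sketch only (the patent describes the event fields but gives no code), the touch data described above might be modeled as follows in C#; all type and field names are hypothetical:

```csharp
// Hypothetical model of the touch data described above; not from the patent.
public enum TouchEventType { Start, Move, End }

public struct TouchEvent
{
    public TouchEventType Type; // type of the touch event
    public float X;             // abscissa on the display screen
    public float Y;             // ordinate (vertically upward) on the display screen
}

public struct PressureTouchEvent
{
    public float Pressure;      // pressure value measured by a pressure-sensitive screen
    public float X;             // touch position, abscissa
    public float Y;             // touch position, ordinate
}
```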

Illustratively, the kernel layer 19 includes a touch screen driver 191 and other drivers 192. The touch screen driver 191 is a module for detecting pressure touch events; when the touch screen driver 191 detects a pressure touch event, it transmits the event to the touch and pressure sensing module 18.

The other drivers 192 may be drivers associated with the processor 21, drivers associated with the memory 23, drivers associated with network components, drivers associated with sound components, and the like.

Those skilled in the art will appreciate that the foregoing is merely a general illustration of the structure of the terminal. A terminal may have more or fewer components in different embodiments. For example, the terminal may further include a gravitational acceleration sensor, a gyro sensor, a power supply, and the like.

Fig. 2 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 500 includes: a first terminal 510, a server cluster 520, a second terminal 530.

The first terminal 510 is installed with and runs a client 511 supporting the virtual world, and the client 511 may be a multiplayer online battle program. When the first terminal runs the client 511, a user interface of the client 511 is displayed on the screen of the first terminal 510. The client may be any one of a virtual reality application program, an AR program, a three-dimensional map program, a military simulation program, a virtual reality game, an augmented reality game, a first-person shooter game, a third-person shooter game, an MOBA game, a tactical competition game, and a strategy game (SLG). In this embodiment, the client being a third-person shooter game is taken as an example. The first terminal 510 is a terminal used by the first user 512. The first user 512 uses the first terminal 510 to control a first virtual character located in the virtual world to perform activities, and the first virtual character may be referred to as the master virtual character of the first user 512. The activities of the first virtual character include, but are not limited to: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual character is a virtual person, such as a simulated human character or an anime character.

The second terminal 530 is installed with and runs a client 531 supporting the virtual world, and the client 531 may be a multiplayer online battle program. When the second terminal 530 runs the client 531, a user interface of the client 531 is displayed on the screen of the second terminal 530. The client may be any one of a virtual reality application program, an AR program, a three-dimensional map program, a military simulation program, a virtual reality game, an augmented reality game, a first-person shooter game, a third-person shooter game, an MOBA game, a tactical competition game, and a strategy game; in this embodiment, the client being a third-person shooter game is taken as an example. The second terminal 530 is a terminal used by the second user 532. The second user 532 uses the second terminal 530 to control a second virtual character located in the virtual world to perform activities, and the second virtual character may be referred to as the master virtual character of the second user 532. Illustratively, the second virtual character is a virtual person, such as a simulated human character or an anime character.

Optionally, the first virtual character and the second virtual character are in the same virtual world. Optionally, the first virtual role and the second virtual role may belong to the same camp, the same team, the same organization, a friend relationship, or a temporary communication right. Alternatively, the first virtual character and the second virtual character may belong to different camps, different teams, different organizations, or have a hostile relationship.

Optionally, the clients installed on the first terminal 510 and the second terminal 530 are the same, or the clients installed on the two terminals are the same type of client on different operating system platforms (Android or iOS). The first terminal 510 may generally refer to one of a plurality of terminals, and the second terminal 530 may generally refer to another of the plurality of terminals; this embodiment is illustrated with only the first terminal 510 and the second terminal 530. The device types of the first terminal 510 and the second terminal 530 are the same or different and include: at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.

Only two terminals are shown in fig. 2, but a plurality of other terminals 540 may access the server cluster 520 in different embodiments. In some embodiments, there is also at least one terminal 540 corresponding to a developer. A development and editing platform for the client of the virtual world is installed on the terminal 540; the developer can edit and update the client on the terminal 540 and transmit the updated client installation package to the server cluster 520 through a wired or wireless network, and the first terminal 510 and the second terminal 530 can download the client installation package from the server cluster 520 to update the client.

The first terminal 510, the second terminal 530, and the terminal 540 are connected to the server cluster 520 through a wireless network or a wired network.

The server cluster 520 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The server cluster 520 is used for providing background services for the clients supporting the three-dimensional virtual world. Optionally, the server cluster 520 undertakes primary computing work and the terminals undertake secondary computing work; or, the server cluster 520 undertakes the secondary computing work, and the terminal undertakes the primary computing work; alternatively, a distributed computing architecture is adopted between the server cluster 520 and the terminals (the first terminal 510 and the second terminal 530) for performing the cooperative computing.

Optionally, the terminal and the server are both computer devices.

In one illustrative example, the server cluster 520 includes a server 521 and a server 526, where the server 521 includes a processor 522, a user account database 523, a battle service module 524, and a user-oriented Input/Output Interface (I/O Interface) 525. The processor 522 is configured to load instructions stored in the server 521 and process data in the user account database 523 and the battle service module 524; the user account database 523 is used for storing data of the user accounts used by the first terminal 510, the second terminal 530, and the other terminals 540, such as the avatar of the user account, the nickname of the user account, the combat capability index of the user account, and the service area where the user account is located; the battle service module 524 is used for providing a plurality of battle rooms for users to fight in; the user-oriented I/O interface 525 is used to establish communication with the first terminal 510 and/or the second terminal 530 through a wireless or wired network to exchange data.

With reference to the description of the virtual world and the description of the implementation environment, the method for determining a virtual surface in the virtual world according to the embodiment of the present application is described, and an execution subject of the method is exemplified by the terminal shown in fig. 1. The terminal runs with an application program, which is a program supporting a virtual world.

For ease of understanding, first, a general description will be made of a virtual surface determination method in a virtual world provided in the present application.

Fig. 3 is a flowchart of a method for determining a virtual surface in a virtual world according to an exemplary embodiment of the present application. The execution of the method is illustrated by the terminal shown in fig. 1. Taking the application of the method to the placement scene of the virtual prop as an example, the method at least comprises the following steps.

Step 401, the terminal selects a target area which needs to execute a target event.

The terminal selects a target area before executing the target event.

For example, after the user triggers the virtual prop placement control, a virtual prop is displayed on the terminal directly in front of the main control virtual character, and the user can control the placement position of the virtual prop by controlling the main control virtual character to move in the virtual world. The placement position is the target area, selected by the terminal, where the target event needs to be executed.

In step 402, the terminal determines whether the target area is gentle.

The terminal judges whether the target area is gentle, that is, whether the slope of the target area is so steep that a placed virtual prop would slide off, or whether the target area has a large protrusion that prevents the virtual prop from being placed stably.

Illustratively, the virtual surface determination method of the present application determines whether or not the target area is gentle from two aspects.

On the one hand, it judges whether the slope of the target area is too steep; for example, a wall surface perpendicular to the ground cannot be used for placing the virtual prop.

On the other hand, it judges whether the height difference of the target area is too large; for example, if large stones lie on the ground, the virtual prop cannot be placed stably there.

In step 403, the terminal determines whether the tilt angle is too large.

The terminal determines whether the tilt angle of the target area is too large. If the tilt angle is small enough, continue with step 404; if the tilt angle is too large, go to step 408.

In step 404, the terminal determines whether the height difference is too large.

The terminal judges whether the height difference of the target area is too large. If the height difference is small enough, continue with step 405; if the height difference is too large, go to step 408.

In step 405, the terminal determines that the target area is flat.

If the inclination angle and the height difference of the target area are small, the target area is a gentle area and is suitable for placing virtual props.

In step 406, the terminal determines whether the space above the target area is open.

For example, some virtual props can only be placed in an open area: after being placed, the prop launches a missile or a signal flare into the sky, or, after the user selects the target area, the prop is dropped onto it from the sky. This requires that there be no occlusion above the target area where the virtual prop is placed, so it is also necessary to determine whether the space above the target area is open.

If the target area is an open area, step 407 is performed, and if the target area is not an open area, step 408 is performed.
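A minimal Unity C# sketch of this open-area test, matching the upward-ray check of claim 8 (emit a ray upward from the first position point; the area is open if the ray acquires no impact result). The maximum check height is an assumed tuning parameter, not from the patent:

```csharp
using UnityEngine;

public static class OpenAreaCheck
{
    // Returns true if nothing blocks the space above the given point,
    // i.e., an upward ray from the point yields no impact result.
    public static bool IsOpen(Vector3 firstPositionPoint, float maxCheckHeight = 50f)
    {
        // Emit a ray upward, perpendicular to the horizontal plane.
        return !Physics.Raycast(firstPositionPoint, Vector3.up, maxCheckHeight);
    }
}
```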

Step 407, the terminal executes the target event.

When the target area is a gentle and open area, the target area meets the placement conditions of the virtual prop, and the terminal can display the virtual prop in a placeable state. For example, if the virtual prop is at a non-placeable position, the virtual prop is displayed in red; if the virtual prop is at a placeable position, the virtual prop is displayed in blue, as sketched below.

After the virtual prop is displayed in a placeable state, the user can trigger the virtual prop placing control again to place the virtual prop at the current position.
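A hedged sketch of the red/blue placement feedback described above, assuming a Unity renderer whose material color can be tinted; the class and field names are illustrative, not from the patent:

```csharp
using UnityEngine;

public class PropPlacementPreview : MonoBehaviour
{
    public Renderer propRenderer; // renderer of the preview prop model

    // Tint the preview blue when the prop can be placed, red otherwise.
    public void UpdatePreview(bool placeable)
    {
        propRenderer.material.color = placeable ? Color.blue : Color.red;
    }
}
```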

In step 408, the terminal determines that the target event cannot be executed.

If the virtual prop cannot be placed in the target area, the terminal displays the virtual prop in a non-placeable state, for example, in red. At this time, even if the user triggers the virtual prop placement control again, the virtual prop cannot be put down.

To sum up, the virtual surface judgment method provided by the present application at least includes the following methods:

1. and judging whether the target area is gentle or not.

(1) The tilt angle of the target area is determined.

(2) And judging the height difference of the target area.

2. And judging whether the target area is wide.

The above methods may be used in combination to judge the target area (as illustrated in the sketch below), or each may be used alone. For example, whether the target area is open may be judged alone to determine whether the target area meets the occurrence condition of the target event; or whether the height difference of the target area is small enough may be judged alone; or whether the tilt angle of the target area is small enough may be judged alone.
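A combined check might look like the following Unity C# sketch, based on the grid-ray scheme of claims 2 to 5: a grid of parallel downward rays is cast around the selected point, the feature vector is taken from the highest to the lowest impact point, and its angle with the horizontal plane plus the height difference are tested against thresholds. This is a simplified sketch under stated assumptions: the height difference here reuses the first grid's impact points, whereas claim 5 casts a second grid perpendicular to the feature vector; all thresholds and grid parameters are assumed values:

```csharp
using UnityEngine;

public static class GentleAreaCheck
{
    // Judge the target area around firstPoint; thresholds, grid size and
    // spacing are assumptions, not values from the patent.
    public static bool IsGentle(Vector3 firstPoint,
                                float angleThresholdDeg = 30f,  // first threshold
                                float heightThreshold = 0.5f,   // second threshold
                                float gridSpacing = 0.25f,
                                float castHeight = 5f)
    {
        Vector3 highest = Vector3.zero, lowest = Vector3.zero;
        bool anyHit = false;

        // Emit a grid of parallel rays, perpendicular to the horizontal plane,
        // from points around the second position point directly above firstPoint.
        for (int ix = -1; ix <= 1; ix++)
        {
            for (int iz = -1; iz <= 1; iz++)
            {
                Vector3 origin = firstPoint + new Vector3(ix * gridSpacing, castHeight, iz * gridSpacing);
                if (Physics.Raycast(origin, Vector3.down, out RaycastHit hit, 2f * castHeight))
                {
                    if (!anyHit) { highest = lowest = hit.point; anyHit = true; }
                    else
                    {
                        if (hit.point.y > highest.y) highest = hit.point; // first impact point
                        if (hit.point.y < lowest.y) lowest = hit.point;   // second impact point
                    }
                }
            }
        }
        if (!anyHit) return false; // no impact result: nothing to place on

        // Height difference of the target area (degree of relief).
        if (highest.y - lowest.y > heightThreshold) return false;

        // Feature vector from the highest to the lowest impact point; the area is
        // gentle when its included angle with the horizontal plane is small.
        Vector3 feature = lowest - highest;
        Vector3 horizontal = new Vector3(feature.x, 0f, feature.z);
        if (horizontal.sqrMagnitude < 1e-6f) return true; // essentially level
        return Vector3.Angle(feature, horizontal) < angleThresholdDeg;
    }
}
```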

The following describes a method for determining a virtual surface in a virtual world provided by the present application in detail with several exemplary embodiments.

Fig. 4 is a flowchart of a method for determining a virtual surface in a virtual world according to an exemplary embodiment of the present application. The method is exemplified by the terminal shown in fig. 1, in which an application program supporting a virtual world runs, and at least includes the following steps.

Step 101, displaying a virtual world picture, wherein the virtual world picture is a picture obtained by observing a virtual world from a view angle of a main control virtual character, the virtual world comprises a virtual surface for the main control virtual character to move on, and the virtual surface comprises at least one of a plane, an inclined plane and a curved surface.

The terminal displays a virtual world picture, and the virtual world picture is a picture obtained by observing the virtual world from the view angle of the main control virtual role.

Illustratively, fig. 5 is a schematic view of a virtual world screen provided in an exemplary embodiment of the present application. The virtual world screen 600 is displayed by the application supporting the virtual world, and a virtual surface for the master virtual character 601 to move on is included in the virtual world screen 600.

The virtual surface is an outer surface of a virtual model in the virtual world. The outer surface of the virtual model is any visible surface of the virtual model. For example, the exterior surface of a virtual house includes: the outer wall surfaces and the roof surface that can be seen when the master virtual character is outside the virtual house; and the visible inner wall surfaces, floor surface, roof beams, and the surfaces of other virtual objects in the virtual house when the master virtual character is inside it.

Illustratively, a virtual surface is a surface with an area on which the master virtual character can be active. By way of example, the activities include, but are not limited to: selecting the virtual surface, placing an item on the virtual surface, applying paint, walking, climbing, installing virtual props, building a virtual building, and the like.

Illustratively, the virtual surface includes at least one of all planes, inclined planes, curved surfaces and spherical surfaces in the virtual world, or a combined surface including at least two of the planes, the inclined planes, the curved surfaces and the spherical surfaces. Such as the ground, slopes, exterior surfaces of stone models, exterior surfaces of automobile models, roof surfaces of house models, step surfaces of stairs, etc.

The master virtual character is the virtual character controlled by the terminal. Illustratively, the master virtual character may be active within the virtual world under the control of the terminal. Illustratively, when the virtual world picture is a picture obtained by observing the virtual world from a third-person perspective, the master virtual character is the virtual character located in the middle of the virtual world picture.

The view angle refers to an observation angle when the main control virtual character is observed in the virtual world from the first person view angle or the third person view angle. Optionally, in an embodiment of the present application, the viewing angle is an angle when the master virtual character is observed in the virtual world through the camera model.

Optionally, the camera model automatically follows the master virtual character in the virtual world, that is, when the position of the master virtual character in the virtual world changes, the camera model changes simultaneously with the position of the master virtual character in the virtual world, and the camera model is always within the preset distance range of the master virtual character in the virtual world. Optionally, in the automatic following process, the relative positions of the camera model and the master virtual character do not change.

The camera model is a three-dimensional model positioned around the main control virtual character in the virtual world. When a first-person perspective is adopted, the camera model is located near or at the head of the main control virtual character. When a third-person perspective is adopted, the camera model may be located behind the main control virtual character and bound to it, or at any position a preset distance away from the main control virtual character; through the camera model, the main control virtual character in the virtual world can be observed from different angles. Optionally, when the third-person perspective is an over-the-shoulder perspective, the camera model is located behind the main control virtual character (for example, behind its head and shoulders). Optionally, besides the first-person and third-person perspectives, the viewing angle includes other perspectives, such as a top-down perspective; when a top-down perspective is adopted, the camera model may be located above the head of the master virtual character, observing the virtual world from the air looking down. Optionally, the camera model is not actually displayed in the virtual world, i.e., it does not appear in the virtual world displayed on the user interface.

For example, the camera model is located at any position a preset distance away from the main control virtual character. Optionally, one main control virtual character corresponds to one camera model, and the camera model may rotate with the main control virtual character as the rotation center, for example, with any point of the main control virtual character as the rotation center. During rotation, the camera model changes in both angle and displacement, while the distance between the camera model and the rotation center remains unchanged; that is, the camera model rotates on the surface of a sphere centered on the rotation center. The rotation center may be any point of the main control virtual character, such as the head, the torso, or any point around the character, which is not limited in the embodiments of the present application. Optionally, when the camera model observes the master virtual character, the center of the camera model's view points in the direction from the point on the sphere where the camera model is located toward the sphere center.

Optionally, the camera model may further observe the master virtual character at a preset angle in different directions of the master virtual character.

Referring to fig. 6, schematically, a point in the master virtual character 11 is determined as the rotation center 12, and the camera model rotates around the rotation center 12. Optionally, the camera model is configured with an initial position, which is a position above and behind the master virtual character (for example, behind the head). Illustratively, as shown in fig. 6, the initial position is position 13; when the camera model rotates to position 14 or position 15, the direction of the camera model's view changes as the camera model rotates.
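A minimal Unity C# sketch of such a camera model orbiting a rotation center at a fixed distance; the distance and angle defaults are assumptions, not values from the patent:

```csharp
using UnityEngine;

public class OrbitCamera : MonoBehaviour
{
    public Transform rotationCenter; // e.g., a point on the master virtual character
    public float distance = 4f;      // kept constant while rotating
    public float yawDeg;             // rotation around the vertical axis
    public float pitchDeg = 30f;     // looking down from above and behind

    void LateUpdate()
    {
        // Position the camera on a sphere centered at the rotation center.
        Quaternion rot = Quaternion.Euler(pitchDeg, yawDeg, 0f);
        transform.position = rotationCenter.position - rot * Vector3.forward * distance;
        // The view direction points from the sphere surface toward the center.
        transform.rotation = rot;
    }
}
```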

Optionally, the virtual world displayed by the virtual world screen includes: at least one element selected from the group consisting of mountains, flat ground, rivers, lakes, oceans, deserts, sky, plants, buildings, and vehicles.

Step 102, selecting a target area on the virtual surface, wherein the target area is a surface area related to the target event on the virtual surface.

The terminal selects a target area on the virtual surface, the target area being a surface area on the virtual surface associated with the target event.

The target area is a portion of the surface area of the virtual surface. For example, a closed figure is drawn on the virtual surface with continuous straight lines or curves, and the part of the virtual surface enclosed by the figure is the target area. Illustratively, the target area is a continuous surface area on the virtual surface. Illustratively, the target area includes at least one of a flat surface, a sloped surface, and a curved surface.

Illustratively, the target area is an approximate area centered on the position point where the target event occurs. The terminal obtains the approximate target area by selecting the position point where the target event occurs, and judges whether the target event can occur at that position point by judging whether the virtual surface within a certain range near the position point meets the occurrence condition of the target event. For example, when a different method is used to judge whether the vicinity of the position point meets the occurrence condition, the selection range of the target area may change accordingly.

Illustratively, as shown in fig. 5, there is a target area 602 in the virtual world screen 600, and the target area 602 is composed of a part of the ground and a part of the outer wall of the virtual house.

Illustratively, the target area is the projection, onto any virtual surface in the virtual world, of an arbitrary plane figure located at a certain height above a point, projected along an arbitrary projection direction with that point as the center point. "With that point as the center point" means that the line connecting the center of the arbitrary plane figure and the point is parallel to the projection direction.

The target event is an activity that may occur in the virtual world. Illustratively, the target event is an event related to, or controlled by, the master virtual character. Illustratively, the target event is an event associated with a position in the virtual world or an area on the virtual surface. For example, the target event is controlling the master virtual character to move to a certain position in the virtual world, or the master virtual character placing a virtual prop at a certain position in the virtual world, or the master virtual character being about to move to a certain position in the virtual world, or an airdrop descending from the sky within the visual range of the master virtual character.

That the target event is related to the target area includes: the target event occurs when the target area meets the occurrence condition; or the target area is at least one of the locations where the target event occurs. The occurrence condition is a restriction on the target area, for example: the target area is made of cement, the target area is circular in shape, the target area is a continuous virtual surface, or the target area is a virtual surface parallel to the horizontal plane.

Step 111, executing the target event when the target area is a gentle area, wherein the flatness of the gentle area is higher than a target condition.

When the target area is a gentle area, the terminal executes the target event; the flatness of the gentle area is higher than the target condition.

Illustratively, the terminal determines whether the target area is a gentle area.

A gentle area is a surface region whose flatness is higher than the target condition. Illustratively, flatness is used to describe the extent to which a surface region approaches a horizontal surface. Illustratively, flatness is used to describe the degree of relief of a surface region.

The target condition is the criterion for judging whether the target area is a gentle area. For example, the target condition may be a flatness threshold that the target area must exceed.

Illustratively, the occurrence condition of the target event is that the target area is a gentle area.

In summary, the method provided in this embodiment determines whether the target area is suitable for executing the target event by obtaining a target area on the virtual surface and judging whether the target area is a gentle area. Whether the target area meets the condition of the target event is determined by judging the whole surface area related to the target event as a whole. This solves the problem in the related art that, when only the normal vector of the target area is obtained from a single point on the target area and the virtual prop is placed directly according to that point and normal vector, it cannot be judged whether the virtual surface is suitable for placing the virtual prop.

Exemplarily, the present application further provides an exemplary embodiment of a method for selecting a target area by a terminal. The application also provides an exemplary embodiment of judging the flatness degree of the target area.

Fig. 7 is a flowchart of a method for determining a virtual surface in a virtual world according to an exemplary embodiment of the present application. The execution of the method is illustrated by the terminal shown in fig. 1, and the method at least comprises the following steps.

Step 101, displaying a virtual world picture, wherein the virtual world picture is a picture obtained by observing a virtual world from a view angle of a main control virtual character, the virtual world comprises a virtual surface for the main control virtual character to move on, and the virtual surface comprises at least one of a plane, an inclined plane and a curved surface.

Step 1021, selecting a target area on the virtual surface according to the perspective focus of the master virtual character.

The terminal selects a target area on the virtual surface according to the perspective focus of the main control virtual character.

The perspective focus is the focus of the camera shooting the virtual world. Illustratively, the perspective focus is the center point of the virtual world picture. Illustratively, when the virtual world is observed in the first person, the perspective focus is the visual center of the master virtual character. Illustratively, the camera observes the virtual world according to the perspective focus, and the master virtual character is positioned on the central axis of the virtual world picture.

Illustratively, the terminal circles a portion of the area on the virtual surface according to the perspective focus, and determines that area as the target area.

For example, fig. 8 is a flowchart of a method for determining a virtual surface in a virtual world according to an exemplary embodiment of the present application, which shows a method for selecting a target area on a virtual surface according to a viewing angle focus, where step 1021 may be replaced with the following steps 201 to 205:

step 201, acquiring the position and the shooting direction of a camera, wherein the camera is a camera corresponding to the view angle of the main control virtual character.

The terminal acquires the position and the shooting direction of the camera, and the camera is a camera corresponding to the visual angle of the main control virtual character.

Illustratively, the terminal acquires the position coordinates and the shooting direction of the camera. Illustratively, the position of the camera changes as the position of the master virtual character changes. Illustratively, the position of the camera is its position at the moment the master virtual character is at the view-angle focus of the camera. Illustratively, the shooting direction is the direction vector in which the camera shoots the master virtual character, i.e., a vector pointing from the camera position to the master virtual character position. Illustratively, when the master virtual character is located at the view-angle focus of the camera, the shooting direction is the direction in which the camera lens points obliquely downward at 45° toward the master virtual character.

In step 202, a second ray is emitted from the position of the camera in the shooting direction.

The terminal emits a second ray in the shooting direction from the position of the camera.

Illustratively, the terminal emits the second ray in the shooting direction with the position of the camera as the starting point.

Illustratively, the view angle focus in step 1021 may be any point on the second ray.

Illustratively, as shown in fig. 9, a camera 603 is provided above the main control virtual character 601; the terminal emits a second ray 604 from the position of the camera 603 in the shooting direction, and the second ray 604 intersects the ground 605 at a target intersection 606.

Illustratively, the second ray may be a spherical ray.

A spherical ray is the path of a sphere emitted in one direction from a starting point. A spherical ray has three parameters: a starting point, a sphere radius, and a ray direction; it is the path swept by a sphere, centered on the starting point with the given radius, as the sphere is shot from the starting point in the ray direction. The outer surface of the sphere detects impacts, i.e., the impact result of a spherical ray is the set of intersection points between the outer surface of the sphere and other virtual models during the sphere's flight.

Illustratively, as shown in fig. 10, the center of sphere 607 is at origin 608 and radius 609 is a; after the sphere is emitted from origin 608 along ray direction 610, its center reaches second position 611. The spherical ray is the moving path of the sphere shown by the dashed straight line in fig. 10. When the sphere 607 collides with another virtual model during the movement, an impact occurs and the impact result is returned.

For example, the second ray is a spherical ray emitted in the imaging direction with the position of the camera as a starting point and with any length as a radius. The target intersection is the intersection of the spherical ray with the virtual model in the virtual world.
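As a minimal illustration of such a sphere cast (a sketch under simplifying assumptions, not the application's actual engine code), the following Python sweeps a sphere from a camera position along a shooting direction against a world reduced to a single ground plane y = 0; the function name and the plane-only world are assumptions made for this example.

```python
import math

def sphere_cast_to_ground(origin, direction, radius, max_length):
    """Sweep a sphere of `radius` from `origin` along unit `direction`.

    The world is simplified to a single ground plane y = 0; the sweep
    reports the first position at which the sphere surface touches it.
    Returns the contact point on the ground, or None if the sphere
    travels `max_length` without touching.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy >= 0:
        return None  # moving parallel to or away from the ground
    # The sphere touches y = 0 when its center height equals the radius.
    t = (radius - oy) / dy  # distance traveled by the center
    if t < 0 or t > max_length:
        return None
    cx, cz = ox + t * dx, oz + t * dz
    return (cx, 0.0, cz)  # contact point directly below the center

# A camera 10 units up, shooting 45 degrees downward along +x.
hit = sphere_cast_to_ground((0.0, 10.0, 0.0),
                            (math.sqrt(0.5), -math.sqrt(0.5), 0.0),
                            radius=0.5, max_length=100.0)
print(hit)  # target intersection used as the first position point
```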

Step 203, a target intersection of the second ray and the virtual surface is obtained.

And the terminal acquires a target intersection point of the second ray and the virtual surface.

Illustratively, the target intersection is the first impact point that the second ray produces with a virtual model in the virtual world. Illustratively, the target intersection is the first impact point that the second ray produces with any virtual model in the virtual world other than the virtual character, i.e., the second ray does not impinge on the virtual character model. Illustratively, the target intersection is the impact point of the second ray with the first virtual model in the virtual world that can be impacted. Virtual models are of two types: those that can be impacted and those that cannot. Illustratively, some virtual models in the virtual world cannot receive ray-detection impacts, e.g., grass on the ground and leaves on a tree model; the second ray does not impinge on these models. Illustratively, the target intersection may be a point on any virtual surface in the virtual world.

Illustratively, after the second ray hits the virtual model, the impact result is returned to the terminal. The impact result comprises at least one of the following pieces of impact information: the coordinates of the impact point, the normal vector of the plane where the impact point lies, and the material of that plane.

Illustratively, as shown in FIG. 9, the terminal obtains the coordinates of the target intersection point 606.

Step 204, the target intersection point is determined as the first location point.

The terminal determines the target intersection point as a first location point.

Illustratively, the first location point is any point in the target area.

In step 205, the virtual surface near the first location point is determined as the target area.

The terminal determines a virtual surface in the vicinity of the first position point as a target area.

For example, the terminal determines the first position point as the occurrence position of the target event, and the terminal determines whether the virtual surface area near the first position point meets the occurrence condition of the target event.

For example, the target area may be obtained as follows: a sphere is constructed with the first position point as the center and a certain length as the radius; the outer surface of the sphere and the virtual surface in the virtual world form a closed intersection curve, and the virtual surface within that curve is the target area.

Illustratively, the terminal may not acquire a clear boundary of the target area.
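Since an explicit boundary need not be computed, a membership test against the selection sphere suffices to work with such a target area. A hedged Python sketch (the radius value is an arbitrary assumption):

```python
def in_target_area(point, first_position_point, radius=2.0):
    """Return True if `point` lies within the selection sphere.

    Surface points inside the sphere centered on the first position
    point are treated as part of the target area; the closed
    intersection curve of the sphere with the surface is its boundary.
    """
    dx = point[0] - first_position_point[0]
    dy = point[1] - first_position_point[1]
    dz = point[2] - first_position_point[2]
    return dx * dx + dy * dy + dz * dz <= radius * radius
```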

Step 103, acquiring any two points on the target area, and determining a feature vector of the target area according to the any two points, wherein the feature vector is used for representing the flatness of the target area.

The terminal acquires any two points on the target area and determines the feature vector of the target area according to the two points.

For example, whether the target area is gentle can be determined by judging whether the included angle between the target area and the horizontal plane is small enough. If the included angle between the target area and the horizontal plane is small enough, the target area is a gentle area; if the included angle is too large, the target area is not a gentle area.

Illustratively, the degree of tilt of the target region is characterized by a feature vector. The terminal acquires any two points in the target area, and the two points form a feature vector of the target area. If the included angle between the characteristic vector and the horizontal plane is too large, the inclination degree of the target area is too large; if the included angle between the feature vector and the horizontal plane is small, the inclination degree of the target area is small.

Illustratively, when the target area is a plane or a slope, flatness describes the angle of the target area relative to the horizontal plane. When the target area is a surface region combining at least one of a plane, an inclined plane, and a curved surface, flatness describes the angle between the target area as a whole and the horizontal plane. For example, since the target area is a combination of several planes, it is replaced by a target plane, chosen to minimize the sum of the distances from each point on the target area to it. For example, if there are three points on the target area at distances a, b, and c from the target plane, the target plane is the plane that minimizes a + b + c. If the included angle between the target plane and the horizontal plane is small, the flatness of the target area is good; if the included angle is large, the target area is not flat. The target plane is the most representative of the target area's tilt trend. Because computing the target plane is too expensive, its degree of inclination may be approximated by the feature vector.
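For reference, such a target plane can be approximated numerically. The sketch below is an illustration under assumptions, not the application's code: it fits a plane with the common least-squares variant, which minimizes the sum of squared distances rather than the sum of distances, and reports the plane's tilt relative to the horizontal plane.

```python
import numpy as np

def fit_target_plane(points):
    """Fit a plane to surface sample points.

    The text describes the target plane as minimizing the summed
    distances to the sampled points; this sketch uses the standard
    least-squares variant (minimizing squared distances), solved with
    an SVD. Returns (centroid, unit normal).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Rows of vh are principal directions; the last is the direction of
    # least variance, i.e. the normal of the best-fit plane.
    _, _, vh = np.linalg.svd(pts - centroid)
    normal = vh[-1]
    if normal[1] < 0:          # orient the normal upward (y is up)
        normal = -normal
    return centroid, normal

# Tilt angle of the fitted plane relative to the horizontal plane:
_, n = fit_target_plane([(0, 0, 0), (1, 0.2, 0), (0, 0.1, 1), (1, 0.3, 1)])
tilt = np.degrees(np.arccos(np.clip(n[1], -1.0, 1.0)))
print(tilt)  # small angle -> gentle area
```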

Illustratively, the feature vector is a vector located on the target plane.

Illustratively, any two arbitrary points on the target area are subject to random error, and the feature vector determined from them may not represent the flatness of the target area accurately enough, so the application also provides a method for determining the feature vector.

For example, fig. 11 is a flowchart of a method for determining a virtual surface in a virtual world according to an exemplary embodiment of the present application, which shows a method for determining the feature vector; step 103 may be replaced with the following steps 1031 to 1035:

step 1031, acquiring a first position point on the target area, wherein the first position point is any point on the target area.

The terminal acquires a first position point on the target area, wherein the first position point is any point on the target area.

Illustratively, the first location point may be obtained through steps 201 to 205 in the exemplary embodiment shown in fig. 8.

Step 1032, a first grid ray is emitted to the target area from a second position point directly above the first position point, wherein the first grid ray comprises at least two parallel rays whose starting points are located around the second position point and which are perpendicular to the horizontal plane.

The terminal emits a first grid ray from a second location point directly above the first location point toward the target area.

A grid ray is a way of performing ray detection and is used to acquire the impact results returned at impact points. A grid ray comprises at least two parallel rays and can acquire a plurality of impact points in sequence. Illustratively, the rays of a grid ray share the same ray direction. Illustratively, different rays of a grid ray have different starting points. Illustratively, the starting points of a grid ray lie in the same plane. Illustratively, the starting points of a grid ray are evenly distributed on a regular plane. Illustratively, the ray direction of a grid ray is perpendicular to the regular plane in which its starting points lie.

For example, as shown in fig. 12, a grid ray consists of nine parallel rays 612. The starting points 613 of the nine parallel rays 612 are evenly distributed over a square 614. The nine parallel rays 612 are emitted from the nine starting points 613, respectively, in a direction perpendicular to the square 614, hit the ground 605, produce nine intersections, and return nine impact results.

Illustratively, the at least two parallel rays of a grid ray are emitted one after another. For example, when the previous ray has acquired its impact result, or its length has reached the maximum length, the next ray is shot from its starting point. Illustratively, the next ray is shot from its starting point a fixed interval after the previous ray.

For example, the at least two parallel rays of a grid ray may also be shot simultaneously, i.e., all rays are shot from their starting points at the same time. For example, the rays of a grid ray may be shot in groups. The present application does not limit the emission manner of grid rays.

Illustratively, the parallel rays of a grid ray may be spherical rays. That is, each of the at least two parallel rays is a spherical ray: the grid ray has several starting points and several spheres, and the spheres are emitted from the starting points in the same ray direction. Illustratively, the radii of the spheres may be the same or different.

Illustratively, as shown in fig. 13, a grid ray is nine spherical rays whose starting points are evenly distributed on square 614. The nine spherical rays are emitted in sequence along the direction perpendicular to the plane of square 614; when the previous spherical ray obtains its impact result, the next spherical ray is emitted. As shown in fig. 13, after the first spherical ray 615 is emitted and obtains its impact result, the second spherical ray 616 is emitted from its starting point, then the third and fourth in turn, and so on, until the ninth spherical ray obtains its impact result, completing one ray detection of the grid ray.
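A grid-ray cast of this kind might be sketched as follows in Python; the 3 × 3 layout, the heightfield standing in for the virtual surface, and the function names are all assumptions for illustration:

```python
def terrain_height(x, z):
    """Hypothetical virtual surface: a gentle slope (stands in for
    whatever geometry the engine's ray detection would hit)."""
    return 0.1 * x + 0.05 * z

def cast_grid_rays(center, half_size=1.0, n=3):
    """Emit an n x n grid of vertical rays around `center` (a point in
    the plane of the ray origins) and return their impact points.

    Each starting point lies on a horizontal square of side
    2 * half_size; every ray travels straight down and hits the
    terrain below it, mirroring the first grid ray of step 1032.
    """
    cx, _, cz = center
    hits = []
    for i in range(n):
        for j in range(n):
            # Starting points evenly distributed over the square.
            x = cx - half_size + 2.0 * half_size * i / (n - 1)
            z = cz - half_size + 2.0 * half_size * j / (n - 1)
            hits.append((x, terrain_height(x, z), z))  # impact point
    return hits

hits = cast_grid_rays((0.0, 5.0, 0.0))
print(len(hits), "impact points")  # 9 impact points, one per ray
```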

The terminal emits a first grid ray from a second location point directly above the first location point toward the target area. Directly above is the direction perpendicular to the horizontal plane, i.e. pointing towards the sky in the virtual world.

Illustratively, a line connecting the first location point and the second location point is perpendicular to the horizontal plane. For example, if the coordinate axis in the virtual world is a y-axis in the vertical direction, after the terminal acquires the first position point, the terminal only needs to add a preset numerical value to the y-axis coordinate of the first position point, and then the coordinate of the second position point located right above the first position point can be acquired. Illustratively, the distance between the first location point and the second location point may be arbitrary. For example, the distance between the first location point and the second location point may be the distance from the highest point in the virtual world to the horizontal plane.

For example, depending on how the first grid ray is shot from the second position point, the terminal may use the second position point as the starting point of any one of the parallel rays. Alternatively, the terminal may use the second position point as the center point of the plane in which the starting points of the first grid ray lie, and emit the first grid ray accordingly.

The first grid ray is a grid ray with the plane of origin parallel to the horizontal plane. Illustratively, the first grid ray is located directly above the first location point.

Illustratively, as shown in fig. 14, there is a target area 602 on the outer surface of a ramp model, and there is a first position point 618 on the target area 602. The terminal acquires a second position point 619 directly above the first position point and emits a first grid ray 620 downward, taking the second position point as the center point of the plane in which the starting points of the first grid ray lie.

In step 1033, a first impact result of the first grid ray with the target area is obtained, where the first impact result includes at least two impact points.

The terminal obtains a first impact result of the first grid ray and the target area, wherein the first impact result comprises at least two impact points.

An impact result is the information, such as the intersection point and virtual model information, acquired by a ray in ray detection after it intersects a virtual model or virtual surface in the virtual world. The virtual model information includes: the normal vector of the plane where the intersection point lies, the material of the virtual model, and the like.

The point of impact is the intersection of the ray with the virtual model or virtual surface.

Rays used in ray detection include the spherical rays, grid rays, parallel rays, first rays, second rays, and the like mentioned in the present application.

Illustratively, the terminal obtains a first result of an impact of a first grid ray with the target area. The first impact result is at least two points of impact of at least two parallel rays of the first grid of rays with the virtual surface. Illustratively, the impact point of the first impact result is not necessarily located within the target area, but is necessarily located near the first location point.

Illustratively, as shown in fig. 14, the first grid of rays 620 includes four parallel rays, and the terminal acquires four impact points of the four parallel rays with the outer surface of the ramp model.

Illustratively, the impact result obtained by the terminal further includes the distance from the starting point of the ray to the impact point, i.e., the emitted length of the ray.

At step 1034, the highest impact point in the first impact result is determined as the first impact point, and the lowest impact point in the first impact result is determined as the second impact point.

The terminal determines the highest impact point in the first impact result as a first impact point and determines the lowest impact point in the first impact result as a second impact point.

Illustratively, the highest and lowest in step 1034 are the heights of the impact points in the vertical direction. The vertical direction is the direction perpendicular to the horizontal plane.

Illustratively, after acquiring at least two impact points, the terminal determines the height difference of the impact points in the vertical direction. Illustratively, the terminal may use the coordinates of the impact points in the vertical direction to obtain the highest impact point and the lowest impact point of the at least two impact points.

For example, the terminal can also judge the height of an impact point in the vertical direction from the emitted length of the ray corresponding to that impact point. For example, the terminal determines the impact point with the shortest emitted length as the highest impact point and the impact point with the longest emitted length as the lowest impact point.

Illustratively, when the highest impact point or the lowest impact point is more than one, the terminal takes any one of the plurality of highest impact points as the first impact point and takes any one of the plurality of lowest impact points as the second impact point. Illustratively, when the heights of all impact points are the same, the terminal selects two impact points from the first impact result as the first impact point and the second impact point.

In step 1035, a feature vector of the target area is determined from the first impact point and the second impact point.

And the terminal determines the characteristic vector of the target area according to the first impact point and the second impact point.

Illustratively, the terminal determines a direction vector pointing from the first impact point to the second impact point, or a direction vector pointing from the second impact point to the first impact point, as the feature vector of the target region.

Illustratively, as shown in fig. 14, from the impact results of the first grid ray 620, the terminal determines the impact point 621 as the first impact point and the impact point 622 as the second impact point, and constructs a feature vector 623 pointing from the first impact point to the second impact point.
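Steps 1034 and 1035, together with the angle test of step 1001 below, might look like the following sketch (the sample impact points and the threshold value are assumptions):

```python
import math

def feature_vector_and_angle(impact_points):
    """Take the highest and lowest impact points (height measured
    along the vertical y axis), form the vector from one to the other,
    and measure its angle with the horizontal plane. The impact points
    are assumed to come from a grid-ray cast.
    """
    first = max(impact_points, key=lambda p: p[1])   # highest impact
    second = min(impact_points, key=lambda p: p[1])  # lowest impact
    fx, fy, fz = (second[0] - first[0],
                  second[1] - first[1],
                  second[2] - first[2])
    horizontal = math.hypot(fx, fz)
    # Angle between the feature vector and the horizontal plane.
    angle = math.degrees(math.atan2(abs(fy), horizontal))
    return (fx, fy, fz), angle

FIRST_THRESHOLD = 30.0  # an assumed value; the text allows any angle

points = [(0, 0.0, 0), (1, 0.1, 0), (0, 0.2, 1), (1, 0.3, 1)]
vec, angle = feature_vector_and_angle(points)
print(angle < FIRST_THRESHOLD)  # True -> gentle area
```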

Step 1001, determining whether an included angle between the feature vector and the horizontal plane is smaller than a first threshold.

And the terminal judges whether the included angle between the characteristic vector and the horizontal plane is smaller than a first threshold value. If the included angle is smaller than the first threshold, go to step 108, otherwise go to step 1082.

Step 108, when the included angle between the feature vector and the horizontal plane is smaller than a first threshold, determining that the target area is a gentle area.

And when the included angle between the characteristic vector and the horizontal plane is smaller than a first threshold value, the terminal determines that the target area is a gentle area.

Illustratively, the terminal calculates the angle of the feature vector to the horizontal plane. When the included angle is smaller than the first threshold value, the target area is a gentle area.

Illustratively, the first threshold is an angle value. Illustratively, the first threshold may be arbitrary; for example, it may be any one of 90°, 60°, and 59°.

For example, a normal vector of the target region may be obtained according to the feature vector, and the inclination degree of the target region may be determined by using an included angle between the normal vector and the vertical direction. A normal vector is a vector perpendicular to the feature vector. Illustratively, the normal vector approximates a vector perpendicular to the target plane.

Illustratively, the terminal cross-multiplies the feature vector and the unit vector in the vertical direction to obtain a first vector, i.e., the first vector is a vector product of the feature vector and the unit vector in the vertical direction. The first vector is perpendicular to the plane in which the feature vector and the unit vector in the vertical direction lie. The terminal then cross-multiplies the feature vector with the first vector to obtain a second vector, namely the second vector is the vector product of the first vector and the feature vector. The second vector is perpendicular to the plane in which the feature vector and the first vector lie. The terminal determines the second vector as a normal vector of the target area.

Illustratively, the direction of the second vector is upward, i.e., the angle between the second vector and the vertical direction is less than or equal to 90 °.

For example, as shown in fig. 15, step 1035 acquires a feature vector 623 of the target region 602, and the terminal cross-multiplies the feature vector 623 and a unit vector 624 in the vertical direction to obtain a first vector 625, where the first vector 625 is perpendicular to the plane in which the feature vector and the vertical unit vector lie. The terminal then cross-multiplies the first vector 625 with the feature vector 623 to obtain a second vector 626, which points upward perpendicular to the target area 602. The second vector 626 is a normal vector of the target area 602. The terminal determines the degree of tilt of the target area from the angle between the second vector 626 and the vertical unit vector 624.

For example, the smaller the angle between the normal vector of the target area and the vertical direction is, the gentler the target area is, and the larger the angle between the normal vector of the target area and the vertical direction is, the steeper the target area is.
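The double cross product described above can be written down directly; the following sketch assumes a y-up coordinate system and unnormalized inputs:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def target_area_normal(feature_vector, up=(0.0, 1.0, 0.0)):
    """Approximate the target area's normal as described under step
    108: cross the feature vector with the vertical unit vector to get
    a first vector, then cross the first vector with the feature
    vector. The result points upward (at most 90 degrees from
    vertical)."""
    first = cross(feature_vector, up)
    second = cross(first, feature_vector)
    return second

def tilt_angle_degrees(normal):
    """Angle between the (unnormalized) normal and the vertical."""
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    if length == 0.0:
        return 90.0  # degenerate: feature vector was vertical or zero
    return math.degrees(math.acos(max(-1.0, min(1.0, ny / length))))

n = target_area_normal((1.0, 0.2, 0.0))
print(tilt_angle_degrees(n))  # ~11.3 degrees: a gentle target area
```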

Step 111, when the target area is a gentle area, executing the target event, wherein the flatness of the gentle area is higher than the target condition.

At step 1082, the target event cannot be executed.

And the terminal judges that the target area does not accord with the occurrence condition of the target event and cannot execute the target event.

In summary, in the method provided in this embodiment, the target area is selected on the virtual surface according to the view-angle focus of the main control virtual character. A spherical ray is emitted from the position of the camera along its shooting direction to acquire an impact point with the virtual surface; the impact point is determined as the first position point, and the virtual surface region in its vicinity, centered on the first position point, is determined as the target area. This provides a method for acquiring the target area that ties it to the main control virtual character, so the terminal can conveniently acquire the target area according to the position of the main control virtual character.

Using a spherical ray for ray detection means detection is not limited to the single point at the head of the ray, which expands the spatial range of detection. This avoids the problem that a linear ray entering a small-diameter hole in the virtual surface fails to obtain a valid impact point on the virtual surface, and reduces the error rate of ray detection.

The flatness of the target area is judged by acquiring its feature vector, which in turn determines whether the target area is a gentle area. The terminal can accurately obtain the degree of inclination of the target area from the feature vector, which facilitates quantitative calculation and description of the target area's flatness and determines whether the target area is a surface region meeting the occurrence condition of the target event.

For example, whether the target area is flat or not may be determined according to a height difference of the target area.

Fig. 16 is a flowchart of a method for determining a virtual surface in a virtual world according to an exemplary embodiment of the present application. The execution of the method is illustrated by the terminal shown in fig. 1, and the method at least comprises the following steps.

Step 101, displaying a virtual world picture, wherein the virtual world picture is a picture obtained by observing a virtual world from a view angle of a main control virtual character, the virtual world comprises a virtual surface for the main control virtual character to move on, and the virtual surface comprises at least one of a plane, an inclined plane and a curved surface.

Step 102, selecting a target area on the virtual surface, wherein the target area is a surface area related to the target event on the virtual surface.

Step 103, acquiring any two points on the target area, and determining a feature vector of the target area according to the any two points, wherein the feature vector is used for representing the flatness of the target area.

Step 1001, determining whether an included angle between the feature vector and the horizontal plane is smaller than a first threshold.

The terminal judges whether the included angle between the feature vector and the horizontal plane is smaller than the first threshold.

step 1081, when an included angle between the feature vector and the horizontal plane is smaller than a first threshold, and a height difference of the target area is smaller than a second threshold, determining that the target area is a gentle area, and the height difference is used for representing a fluctuation degree of the target area.

And when the included angle between the characteristic vector and the horizontal plane is smaller than a first threshold value and the height difference of the target area is smaller than a second threshold value, the terminal determines that the target area is a gentle area.

The height difference is used for describing the fluctuation degree of the target area in the normal vector direction of the target area. Illustratively, the normal vector of the target region can be obtained according to the example under step 108 in the embodiment shown in fig. 7.

Illustratively, the normal vector of the target area is obtained by the terminal through twice cross multiplication according to the feature vector and the unit vector in the vertical direction.

Illustratively, the second threshold is a numerical value describing the altitude. The second threshold may be any value. For example, the second threshold is one unit height, or 2 centimeters.

Illustratively, the height difference of the target area may be calculated by the method in the exemplary embodiment shown in fig. 17. Illustratively, the following steps 1031 to 107 and 1002 are also included before the step 1081.

Step 1031, acquiring a first position point on the target area, wherein the first position point is any point on the target area.

Step 104, emitting a second grid ray to the target area from a third position point above the first position point along a direction perpendicular to the feature vector; the line connecting the third position point and the first position point is perpendicular to the feature vector, and the second grid ray comprises at least two parallel rays whose starting points are located around the third position point and which are emitted toward the target area.

The terminal emits a second grid ray from a third position point above the first position point toward the target area, in a direction perpendicular to the feature vector.

Illustratively, the third position point may be any point in the space above the first position point. Illustratively, the third position point is a point offset from the first position point along the normal vector direction of the target area.

Illustratively, the direction perpendicular to the feature vector is the normal vector direction of the target area, or the direction perpendicular to the feature vector is the opposite direction of the normal vector direction of the target area.

For example, as shown in fig. 18, the feature vector 623 and the normal vector 624 of the target area 602 are determined on the target area 602; the terminal acquires a third position point 627 above the first position point 618 according to the first position point 618 and the normal vector 624, and emits a second grid ray 628 from the third position point 627 toward the target area 602 along the direction opposite to the normal vector 624.

Illustratively, the at least two parallel rays of the second grid ray are spherical rays. For example, the second grid ray may be emitted in the same manner as the first grid ray.

Step 105, acquiring a second impact result of the second grid ray and the target area, wherein the second impact result comprises at least two impact points.

And the terminal acquires a second impact result of the second grid rays and the target area, wherein the second impact result comprises at least two impact points.

Step 106, determining the highest impact point in the second impact result as a third impact point, and determining the lowest impact point in the second impact result as a fourth impact point.

The terminal determines the highest impact point in the second impact result as a third impact point and determines the lowest impact point in the second impact result as a fourth impact point.

Illustratively, the highest and lowest in step 106 refer to the height of the impact points in the normal vector direction of the target area. Illustratively, the highest and lowest in step 106 refer to the largest or smallest distance of the impact point from the target plane.

Illustratively, the second impact result obtained by the terminal further includes the emitted length of each ray in the second grid ray, where the emitted length is the distance from the starting point of the ray to its impact point. Illustratively, the terminal determines the highest and lowest impact points according to the emitted length corresponding to each impact point in the second impact result. The impact point with the shortest emitted length is the highest impact point, i.e., the third impact point; the impact point with the longest emitted length is the lowest impact point, i.e., the fourth impact point.

Step 107, calculating the height difference of the target area according to the third impact point and the fourth impact point.

The terminal calculates a height difference of the target area according to the third impact point and the fourth impact point.

The height difference is a height difference of the third impact point and the fourth impact point in the normal vector direction of the target area. Illustratively, the height difference is a sum of distances of the third impact point and the fourth impact point, respectively, to the target plane.

For example, the terminal may determine the height difference based on the emitted lengths corresponding to the third impact point and the fourth impact point. For example, the height difference is equal to the emitted length corresponding to the fourth impact point minus the emitted length corresponding to the third impact point.

For example, the impact result of the second grid ray includes point A and point B. The ray corresponding to point A travels a distance a from its starting point before impacting the ground, yielding impact point A; the ray corresponding to point B travels a distance b from its starting point before impacting the ground, yielding impact point B. Since a is less than b, point A is the highest impact point, i.e., the third impact point, and point B is the lowest impact point, i.e., the fourth impact point. The height difference of the target area is equal to b − a.
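This computation reduces to a one-liner over the emitted lengths; a sketch, with the threshold and sample lengths chosen arbitrarily:

```python
def height_difference(cast_lengths):
    """Step 107 as a sketch: the second grid ray records, for each
    parallel ray, the distance from its starting point to its impact
    point. The shortest cast hits the highest point and the longest
    hits the lowest, so their difference is the undulation of the
    target area measured along the normal direction.
    """
    return max(cast_lengths) - min(cast_lengths)

SECOND_THRESHOLD = 0.5  # an assumed unit height; the text allows any value

lengths = [3.0, 3.1, 3.4, 3.2]  # a = 3.0 (highest), b = 3.4 (lowest)
print(height_difference(lengths) < SECOND_THRESHOLD)  # True -> gentle
```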

Step 1002, determine whether the height difference is less than a second threshold.

And the terminal judges whether the height difference is smaller than a second threshold value, if so, the step 1081 is carried out, and otherwise, the step 1082 is carried out.

Step 111, when the target area is a gentle area, executing the target event, wherein the flatness of the gentle area is higher than the target condition.

Illustratively, the target event may be the placement of a virtual prop. Step 111 may be replaced with step 1112 in the exemplary embodiment shown in fig. 17.

Step 1112, placing the virtual prop on the target area when the target area is a gentle area.

When the target area is a gentle area, the terminal places the virtual prop on the target area.

For example, as shown in fig. 19, the terminal selects a part of the virtual surface on the hill 629, and determines the part of the virtual surface as the target area. After the terminal determines that the target area is a gentle area, the virtual prop 701 is placed on the target area.

At step 1082, the target event cannot be executed.

In summary, in the method provided in this embodiment, the normal vector of the target area is obtained from the feature vector and the vertically upward unit vector, a second grid ray is emitted according to the normal vector and the first position point, and the degree of undulation of the target area is obtained from the impact result of the second grid ray, so as to determine whether the target area is gentle. This provides a method for judging whether a complex virtual surface is gentle: the flatness and the height difference of the virtual surface can be accurately acquired, and whether the virtual surface is gentle can be judged in multiple dimensions. This solves the problem in the related art that, when only the normal vector at a single point on the target area is obtained and the virtual prop is placed directly according to that point and normal vector, it cannot be judged whether the virtual surface is suitable for placing the virtual prop.

Exemplary embodiments of determining whether a virtual surface is open are also presented.

Fig. 20 is a flowchart of a method for determining a virtual surface in a virtual world according to an exemplary embodiment of the present application. The execution of the method is illustrated by the terminal shown in fig. 1; the difference from the exemplary embodiment shown in fig. 4 is that step 1111 replaces step 111, and steps 1003 and 1004 are added before it.

In step 1003, it is determined whether the target area is a gentle area.

The terminal determines whether the target area is a gentle area. If yes, go to step 1004, otherwise go to step 1082.

In step 1004, it is determined whether the target area is an open area.

The terminal determines whether the target area is an open area. If yes, go to step 1111, otherwise go to step 1082.

And 1111, when the target area is a smooth area and the target area is an open area, executing the target event, wherein the open area is a surface area which is not shielded by the obstacles and is above the virtual surface.

When the target area is a gentle area and the target area is an open area, the terminal performs the target event, and the open area is a surface area above the virtual surface that is not blocked by the obstacle.

For example, the occurrence condition of the target event may be: there is no occlusion above the target area.

Illustratively, an open area is a surface region with no obstruction above it. Illustratively, an open area is a surface region of the virtual surface with no obstacle above it. Illustratively, an open area is a surface region above which, within a certain height, there are no other virtual models or virtual surfaces.

An obstacle is any virtual model in the virtual world. Illustratively, the obstacles include: at least one of a building model, an animal and plant model, a virtual prop model and a virtual article model.

For example, the present application also provides a method for the terminal to detect whether the target area is an open area; as shown in fig. 21, whether the target area is an open area may be determined by using steps 1031 to 110.

Step 1031, acquiring a first position point on the target area, wherein the first position point is any point on the target area.

Step 109, a first ray is shot upward from the first position point in a direction perpendicular to the horizontal plane.

The terminal emits a first ray upward from a first location point in a direction perpendicular to a horizontal plane.

The first ray is a ray for detecting whether an obstacle exists above the target area.

Illustratively, the first ray has a fixed length, and when the length of the first ray reaches the fixed length and the collision result has not been acquired, it is determined that there is no obstacle above the first position point, that is, there is no obstacle above the target area. For example, the fixed length of the first ray is three unit distances, and when the first ray reaches a point three unit distances from the first position point directly above the first position point, it is determined that there is no obstacle above the target area.

Illustratively, the first ray may be a spherical ray. For example, a spherical ray with a larger sphere radius may be used to enlarge a spatial range detected by the first ray to determine whether an obstacle blocks above the target area.

Step 1005, determining whether the first ray obtains the impact result.

And the terminal judges whether the first ray obtains the impact result. If the impact result is obtained, go to step 110, otherwise go to step 1082.

Step 110, when the first ray does not acquire an impact result, the target area is determined to be an open area.

When the first ray does not acquire the impact result, the terminal determines that the target area is an open area.

For example, when the first ray does not obtain the impact result above the first location point, the first ray is shot again in the vertical downward direction with the end point of the first ray as the starting point, and when the first ray can return to the first location point, it is indicated that there is no obstacle between the first location point and the end point of the first ray, that is, the target area is an open area.
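A hedged sketch of this open-area test, with obstacles simplified to the heights of surfaces directly above the first position point (an assumption; a real engine would use its own ray detection):

```python
def is_open_area(first_point_y, obstacle_heights, probe_length=3.0):
    """Sketch of steps 109-110: an upward ray of fixed length finds
    the area open if it reaches `probe_length` above the first
    position point without striking anything. Obstacles are reduced
    to the heights (y coordinates) of surfaces directly above.
    """
    top = first_point_y + probe_length
    for h in obstacle_heights:
        if first_point_y < h <= top:
            return False  # the ray would impact this obstacle
    return True

print(is_open_area(0.0, obstacle_heights=[]))     # True: nothing above
print(is_open_area(0.0, obstacle_heights=[2.0]))  # False: e.g. a bridge at 2.0
```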

For example, when the target event is to place a prop, step 1111 may be replaced by step 1113 in the exemplary embodiment shown in fig. 21.

Step 1113, when the target area is a gentle area and the target area is an open area, placing a virtual prop on the target area, wherein the open area is a surface area above the virtual surface that is not shielded by an obstacle.

When the target area is a gentle area and the target area is an open area, the terminal places the virtual props on the target area, and the open area is a surface area without barriers above the virtual surface.

The virtual prop is a virtual prop which needs a gentle and open area for placement. For example, the virtual prop may be a prop which is placed to launch a virtual object upwards, or a prop which is placed to call for landing in a target area.

For example, as shown in fig. 22, the target area selected by the terminal is located below the bridge 702; after the terminal emits the first ray upward from the first position point, the first ray impacts the lower surface of the bridge 702. The terminal determines that the space above the target area is not open and displays the virtual prop 701 in a non-placeable state.

At step 1082, the target event cannot be executed.

In summary, in the method provided in this embodiment, the first ray is emitted from the first position point in the vertically upward direction to detect whether an obstacle blocks the space above the target area. When the target event sets an occurrence condition on the space above the target area, whether the target area meets the occurrence condition of the target event is determined by judging whether the first ray obtains an impact result. By making the first ray a spherical ray, the detection range of the first ray in space is expanded and the detection error is reduced.

For example, when the target event is to place a virtual prop, the application also provides an exemplary embodiment of placing the virtual prop.

Fig. 23 is a flowchart of a method for determining a virtual surface in a virtual world according to an exemplary embodiment of the present application. The execution of the method is illustrated by the terminal shown in fig. 1, and the method includes the following steps.

Step 101, displaying a virtual world picture, wherein the virtual world picture is a picture obtained by observing a virtual world from a view angle of a main control virtual character, the virtual world comprises a virtual surface for the main control virtual character to move on, and the virtual surface comprises at least one of a plane, an inclined plane and a curved surface.

Step 102, selecting a target area on the virtual surface, wherein the target area is a surface area related to the target event on the virtual surface.

In step 1003, it is determined whether the target area is a gentle area.

The terminal determines whether the target area is a gentle area. If the area is a gentle area, the step 301 is performed, otherwise, the step 1082 is performed.

Step 301, when the target area is a gentle area, determining the placement direction of the virtual prop according to the feature vector of the target area.

The terminal determines whether the target area is a gentle area. And when the target area is a gentle area, the terminal determines the placement direction of the virtual prop according to the characteristic vector of the target area.

For example, the method by which the terminal determines whether the target area is a gentle area may refer to the exemplary embodiments provided in fig. 7 or fig. 16.

Illustratively, the terminal finds a normal vector of the target area according to the feature vector, and determines the placement direction of the virtual prop according to the normal vector of the target area.

The placing direction of the virtual prop comprises a vertical direction and a horizontal direction, and exemplarily, the vertical direction of the virtual prop is determined by the terminal according to a normal vector of the target area. For example, the horizontal direction in which the virtual prop is placed may be arbitrary.

Exemplarily, the terminal aligns the bottom surface of the virtual prop with the feature vector and determines the vertical direction of the virtual prop according to the normal vector of the target area. Illustratively, the feature vector lies in the bottom surface of the virtual prop, and the normal vector of the target area is perpendicular to that bottom surface.

For example, as shown in fig. 24, the target area acquired by the terminal is an outer wall surface of the house 703. The terminal acquires the feature vector from the impact result of the first grid ray, obtains the normal vector of the target area from the feature vector and the vertical unit vector, and determines the vertical direction of the virtual prop from the normal vector. That is, as shown in fig. 24, the normal vector of the target area points outward, perpendicular to the outer wall surface of the house 703, so the vertical direction of the virtual prop 701 is perpendicular to the outer wall surface, pointing outward.
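One way to realize such a placement direction is to build an orthonormal basis from the feature vector and the normal vector; the sketch below assumes the two are perpendicular, as the double-cross-product construction under step 108 guarantees:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    length = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / length, v[1] / length, v[2] / length)

def prop_orientation(feature_vector, normal):
    """Build a basis for placing the prop, per step 301: the prop's
    vertical axis follows the target area's normal vector, and its
    bottom surface contains the feature vector. Returns the
    (right, up, forward) axes of the prop as unit vectors.
    """
    up = normalize(normal)                 # prop's vertical direction
    forward = normalize(feature_vector)    # lies in the bottom surface
    right = normalize(cross(up, forward))  # perpendicular to both
    return right, up, forward

# A feature vector along a slope and its derived (perpendicular) normal.
print(prop_orientation((1.0, 0.2, 0.0), (-0.2, 1.0, 0.0)))
```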

Illustratively, once the terminal acquires the feature vector of the target area and obtains the normal vector from it, the virtual prop is displayed in the virtual world picture according to the direction of the normal vector, regardless of whether the target area is a gentle area. For example, as shown in fig. 24, the target area is an outer wall surface of the house 703 and is not a gentle area, but the virtual prop is still displayed on the target area according to the normal vector direction of the target area. Illustratively, the virtual prop is displayed in a non-placeable state.

In step 1004, it is determined whether the target area is an open area.

The terminal determines whether the target area is an open area. If it is an open area, go to step 302, otherwise go to step 1082.

Step 302, when the target area is an open area, displaying the virtual prop in a placeable state.

When the target area is an open area, the terminal displays the virtual prop in a placeable state.

The placeable state and the non-placeable state are prompt information displayed on the virtual world picture. Which state is displayed depends on whether the target area meets the occurrence condition of the target event. Illustratively, when the target area is gentle and open, the virtual prop is displayed in the placeable state; otherwise, the virtual prop is displayed in the non-placeable state.

Illustratively, such prompt information may be displayed on the virtual world picture in various ways. For example: the virtual prop in the placeable state is displayed in blue and the virtual prop in the non-placeable state in red; the outline of the virtual prop in the placeable state is highlighted while the color of the virtual prop in the non-placeable state is grayed out; prompt text "prop placeable" is displayed above the virtual prop in the placeable state and "prop not placeable" above the virtual prop in the non-placeable state; or the virtual prop in the placeable state is displayed on the virtual world picture while the virtual prop in the non-placeable state is not displayed; and so on.

For example, as shown in fig. 22 or 24, virtual prop 701 is displayed in a non-placeable state. Virtual props are displayed in placeable states as in fig. 5 or 19.

Step 303, when receiving the placing operation, placing the virtual prop on the target area according to the placing direction.

And when receiving the placing operation, the terminal places the virtual prop on the target area according to the placing direction.

For example, after determining that the target area meets the placement condition of the virtual item, the terminal displays the virtual item in a placeable state on the virtual world screen, and if the user wants to place the virtual item at the current position, the user can perform placement operation.

For example, the placing operation may be a triggering operation of the virtual item placing control by the user, where the triggering operation includes: at least one of click, double click, long press, and slide. Illustratively, the virtual item placement control is a UI control displayed on top of the virtual world screen.

At step 1082, the target event cannot be executed.

The terminal cannot place the virtual prop on the target area.

In summary, the method provided in this embodiment determines whether the virtual prop can be placed on the target area by judging whether the target area is a gentle and open area. This solves the problem in the related art that, when only the normal vector at a single point on the target area is obtained and the virtual prop is placed on the virtual surface directly according to that point and normal vector, it cannot be judged whether the virtual surface is suitable for placing the virtual prop.

By way of example, the present application also provides a complete exemplary embodiment of the virtual surface judgment method in the application virtual world.

Fig. 25 is a flowchart of a method for determining a virtual surface in a virtual world according to an exemplary embodiment of the present application. The execution of the method is illustrated by the terminal shown in fig. 1, and the method includes the following steps.

Step 801, calculate a position point in front with spherical rays.

The terminal calculates a position point in front with a spherical ray.

Illustratively, the terminal computes a first location point in front of the master avatar using a spherical ray.

Step 802, from this position point, calculate vertically upward and downward, respectively, whether there is an impact point.

From this point, the terminal calculates, vertically upwards and downwards, whether there is a point of impact.

Illustratively, the terminal emits the first ray vertically upward from the first position point; when the first ray reaches the preset emission length, a ray is emitted vertically downward from the end point back toward the first position point. If either ray has an impact point, the target area is not an open area; otherwise, the target area is an open area.

Step 803, calculate the direction in which the prop is attached to the ground through the grid ray.

The terminal calculates the direction in which the prop is attached to the ground through the grid ray.

Illustratively, the terminal obtains the feature vector from the impact result of the first grid ray, obtains the normal vector of the target area from the feature vector and the vertical unit vector, and determines the direction in which the virtual prop is attached to the ground from the normal vector and the feature vector: the bottom surface of the virtual prop is parallel to the feature vector, and the vertical direction of the virtual prop is parallel to the normal vector direction.

Step 804, calculate the angle between this direction and the vertical direction to exclude large slopes.

The terminal calculates the angle between this direction and the vertical direction to exclude large slopes.

Illustratively, the terminal determines the inclination degree of the target area by judging the included angle between the normal vector of the target area and the vertical direction. If the included angle is too large, the inclination degree of the target area is too large, and the target area is not suitable for placing the virtual prop.

In step 805, the height difference is calculated by the grid rays to determine whether the terrain is flat.

The terminal calculates the height difference through the grid rays to judge whether the terrain is flat.

Illustratively, the terminal emits a second grid ray along the normal vector direction of the target area, calculates the height difference of the target area in the normal vector direction according to the impact result of the second grid ray, and judges the degree of undulation of the target area according to the height difference. If the height difference is too large, the target area fluctuates too much and is not a gentle area; otherwise, the target area is determined to be a gentle area.

For example, it can be seen from this embodiment that the order of determining whether the target area is a gentle area or an open area may be arbitrary.
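Putting steps 801 to 805 together, the overall decision might be sketched as follows; the function arguments stand in for the ray-detection results described above, and the threshold values are assumptions:

```python
def can_place_prop(is_open, tilt_degrees, height_diff,
                   max_tilt=30.0, max_height_diff=0.5):
    """Combine the open-area, slope, and undulation checks. As the
    embodiment notes, the order of the checks is arbitrary; here the
    open-area test (step 802) runs first, then the angle from step
    804, then the height difference from step 805.
    """
    if not is_open:
        return False  # something blocks the space above the area
    if tilt_degrees >= max_tilt:
        return False  # the large slope is excluded
    if height_diff >= max_height_diff:
        return False  # the terrain fluctuates too much
    return True

print(can_place_prop(is_open=True, tilt_degrees=12.0,
                     height_diff=0.2))  # True: prop may be placed
```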

In summary, in the method provided in this embodiment, a spherical ray is first used to obtain the first position point; whether an obstacle blocks the space above the first position point is then judged to determine whether the target area is an open area; the placing direction of the virtual prop and the degree of inclination of the target area are then obtained from the impact result of the first grid ray; and the height difference of the target area is obtained from the impact result of the second grid ray, so as to determine whether the target area is suitable for placing the virtual prop. This solves the problem in the related art that, when only the normal vector at a single point on the target area is obtained and the virtual prop is placed directly according to that point and normal vector, it cannot be judged whether the virtual surface is suitable for placing the virtual prop.

The following are apparatus embodiments of the present application; for details not described in the apparatus embodiments, refer to the method embodiments described above.

Fig. 26 is a block diagram of a virtual surface determination apparatus in a virtual world according to an exemplary embodiment of the present application. The device is applied to a terminal, wherein an application program supporting the virtual world runs in the terminal, and the device comprises:

a display module 902, configured to display a virtual world picture, where the virtual world picture is a picture obtained by observing the virtual world from a perspective of a master virtual character, the virtual world includes a virtual surface on which the master virtual character moves, and the virtual surface includes at least one of a plane, an inclined plane, and a curved surface;

a selecting module 901, configured to select a target region on the virtual surface, where the target region is a surface region on the virtual surface related to a target event;

an executing module 903, configured to execute the target event when the target area is a gentle area, where the flatness of the gentle area is higher than a target condition.

In an optional embodiment, the apparatus further comprises: an acquisition module 905 and a determination module 907;

the obtaining module 905 is configured to obtain any two points on the target area;

the determining module 907 is configured to determine a feature vector of the target area according to the two arbitrary points, where the feature vector is used to represent a flatness degree of the target area;

the determining module 907 is further configured to determine that the target area is the gentle area when an included angle between the feature vector and a horizontal plane is smaller than a first threshold.

In an optional embodiment, the apparatus further comprises: a transmitting module 906;

the obtaining module 905 is further configured to obtain a first position point on the target area, where the first position point is any point on the target area;

the emitting module 906, configured to emit a first grid ray to the target region from a second location point directly above the first location point, where the first grid ray includes at least two parallel rays with starting points located around the second location point and perpendicular to a horizontal plane;

the obtaining module 905 is further configured to obtain a first impact result of the first lattice ray with the target region, where the first impact result includes at least two impact points;

the determining module 907 is further configured to determine a highest impact point in the first impact result as a first impact point, and determine a lowest impact point in the first impact result as a second impact point;

the determining module 907 is further configured to determine a feature vector of the target area according to the first impact point and the second impact point.

In an optional embodiment, the determining module 907 is further configured to determine that the target area is the gentle area when an included angle between the feature vector and a horizontal plane is smaller than a first threshold and a height difference of the target area is smaller than a second threshold, where the height difference is used to represent a degree of undulation of the target area.

In an optional embodiment, the apparatus further comprises: an emitting module 906 and a calculating module 908;

the obtaining module 905 is further configured to obtain a first position point on the target area, where the first position point is any point on the target area;

the emitting module 906 is configured to emit a second grid ray to the target area from a third position point above the first position point along a direction perpendicular to the feature vector, where the connecting line of the third position point and the first position point is perpendicular to the feature vector, and the second grid ray includes at least two parallel rays with starting points located around the third position point and emitted toward the target area;

the obtaining module 905 is further configured to obtain a second impact result of the second grid ray and the target area, where the second impact result includes at least two impact points;

the determining module 907 is further configured to determine a highest impact point in the second impact result as a third impact point, and determine a lowest impact point in the second impact result as a fourth impact point;

the calculating module 908 is configured to calculate a height difference of the target area according to the third impact point and the fourth impact point.
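A sketch of the second grid ray under the same assumptions; `raycast(origin, direction)` is again a hypothetical engine query, and reading the height difference as the vertical gap between the third (highest) and fourth (lowest) impact points is one plausible interpretation:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / m for c in v)

def slope_normal(f):
    # A unit vector perpendicular to the feature vector f, lying in the
    # vertical plane through f and pointing upward (its dot product with f
    # is zero by construction). Degenerates if f is exactly vertical; a
    # real implementation would guard that case.
    fx, fy, fz = f
    return normalize((-fy * fx, fx * fx + fz * fz, -fy * fz))

def height_difference(first_point, feature_vec, raycast,
                      offsets=(-0.5, 0.0, 0.5), lift=2.0):
    """Cast rays from around the third position point back toward the
    target area, along the direction perpendicular to the feature vector."""
    n = slope_normal(normalize(feature_vec))
    third_position = tuple(p + lift * c for p, c in zip(first_point, n))
    down = tuple(-c for c in n)  # back toward the target area
    hits = []
    for dx in offsets:
        for dz in offsets:
            origin = (third_position[0] + dx, third_position[1],
                      third_position[2] + dz)
            hit = raycast(origin, down)
            if hit is not None:
                hits.append(hit)
    if len(hits) < 2:
        return None
    # Vertical gap between the third (highest) and fourth (lowest) impact points.
    return max(h[1] for h in hits) - min(h[1] for h in hits)
```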

In an alternative embodiment, the target event is the placement of a virtual prop on the target area;

the executing module 903 is further configured to place the virtual prop on the target area when the target area is a gentle area.

In an optional embodiment, the executing module 903 is further configured to execute the target event when the target area is a gentle area and the target area is an open area, where the open area is a surface area on the virtual surface with no obstacle above it.

In an optional embodiment, the apparatus further comprises: an obtaining module 905, a determining module 907, and an emitting module 906;

the obtaining module 905 is configured to obtain a first position point on the target area, where the first position point is any point on the target area;

the emitting module 906 is configured to emit a first ray upwards from the first position point along a direction perpendicular to the horizontal plane;

the determining module 907 is configured to determine that the target area is the open area when the first ray does not yield an impact result.
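The open-area test reduces to a single upward ray; a minimal sketch with the same hypothetical `raycast(origin, direction)` query:

```python
def is_open(first_point, raycast):
    """Emit the first ray straight up from the first position point; if it
    hits nothing, no obstacle covers the target area."""
    return raycast(first_point, (0.0, 1.0, 0.0)) is None
```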

In an alternative embodiment, the target event is the placement of a virtual prop on the target area;

the executing module 903 is further configured to place the virtual prop on the target area when the target area is a gentle area and the target area is an open area, where the open area is a surface area on the virtual surface with no obstacle above it.

In an optional embodiment, the flatness of the target area is represented by a feature vector, and the feature vector is obtained according to any two points on the target area; the apparatus further comprises: a determining module 907 and an interaction module 904;

the determining module 907 is configured to determine, when the target area is the gentle area, a placement direction of the virtual prop according to a feature vector of the target area;

the display module 902 is further configured to display the virtual item in a placeable state when the target area is the open area;

the interaction module 904 is configured to receive a placing operation;

the executing module 903 is further configured to place the virtual prop on the target area according to the placement direction when the placement operation is received.
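How the placement direction is derived from the feature vector is not spelled out in this apparatus description; one plausible sketch, assuming a Y-up world and a yaw/pitch orientation convention (both assumptions):

```python
import math

def placement_direction(feature_vec):
    """Yaw/pitch pair (in degrees) that aligns a prop's forward axis with
    the feature vector, so the prop lies flat along the slope."""
    fx, fy, fz = feature_vec
    yaw = math.degrees(math.atan2(fx, fz))                    # about the up axis
    pitch = math.degrees(math.atan2(fy, math.hypot(fx, fz)))  # tilt along the slope
    return yaw, pitch
```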

In an optional embodiment, the selecting module 901 is further configured to select the target area on the virtual surface according to a view focus of the master virtual character.

In an optional embodiment, the apparatus further comprises: an obtaining module 905, a determining module 907, and an emitting module 906;

the obtaining module 905 is configured to obtain a position and a shooting direction of a camera, where the camera is a camera corresponding to a viewing angle of the master virtual character;

the emitting module 906 is configured to emit a second ray from the position of the camera along the shooting direction;

the obtaining module 905 is further configured to obtain a target intersection point of the second ray and the virtual surface;

the determining module 907 is configured to determine the target intersection point as the first position point;

the determining module 907 is further configured to determine the virtual surface near the first position point as the target area.
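A sketch of this view-focus selection; `raycast` is the same hypothetical query, and the fixed `radius` around the first position point is an assumption, since the application does not bound the target area numerically:

```python
def select_target_area(camera_position, shooting_direction, raycast, radius=1.0):
    """Cast the second ray from the camera along its shooting direction; the
    intersection with the virtual surface is the first position point, and
    the surface within `radius` of it is treated as the target area."""
    target_intersection = raycast(camera_position, shooting_direction)
    if target_intersection is None:
        return None
    return {"first_position_point": target_intersection, "radius": radius}
```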

In an alternative embodiment, each of the at least two parallel rays is a spherical ray, where a spherical ray is the path swept by a sphere emitted from a point along one direction.
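Engines typically expose such a swept-sphere query directly; purely to illustrate the idea, a spherical ray can be approximated by bundling parallel rays offset within the sphere's radius (all names and parameters below are hypothetical):

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / m for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sphere_cast(origin, direction, radius, raycast, samples=8):
    """Approximate a spherical ray: cast the central ray plus `samples`
    parallel rays offset within the radius, and keep the nearest impact."""
    d = normalize(direction)
    # Two unit axes perpendicular to the sweep direction.
    helper = (0.0, 1.0, 0.0) if abs(d[1]) < 0.9 else (1.0, 0.0, 0.0)
    u = normalize(cross(d, helper))
    v = cross(d, u)
    origins = [origin]
    for i in range(samples):
        a = 2.0 * math.pi * i / samples
        off = tuple(radius * (math.cos(a) * ui + math.sin(a) * vi)
                    for ui, vi in zip(u, v))
        origins.append(tuple(o + c for o, c in zip(origin, off)))
    hits = [h for h in (raycast(o, d) for o in origins) if h is not None]
    if not hits:
        return None
    return min(hits, key=lambda h: sum((hi - oi) ** 2
                                       for hi, oi in zip(h, origin)))
```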

It should be noted that the virtual surface judgment apparatus in the virtual world provided in the above embodiment is described only with the division of the above functional modules as an example; in practical applications, these functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the virtual surface judgment apparatus in the virtual world provided in the above embodiment belongs to the same concept as the virtual surface judgment method in the virtual world; for its specific implementation, refer to the method embodiments, which are not described here again.

Fig. 27 shows a block diagram of a terminal 3900 provided in an exemplary embodiment of the present application. The terminal 3900 can be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 3900 may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.

Generally, the terminal 3900 includes: a processor 3901 and a memory 3902.

Processor 3901 may include one or more processing cores, for example a 4-core or an 8-core processor. The processor 3901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). Processor 3901 may also include a main processor and a coprocessor: the main processor processes data in the awake state and is also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 3901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, processor 3901 may also include an AI (Artificial Intelligence) processor to process computational operations related to machine learning.

The memory 3902 may include one or more computer-readable storage media, which may be non-transitory. The memory 3902 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 3902 is used to store at least one instruction, which is executed by the processor 3901 to implement the virtual surface judgment method in a virtual world provided by the method embodiments of the present application.

In some embodiments, the terminal 3900 can also optionally include: a peripheral interface 3903 and at least one peripheral. Processor 3901, memory 3902, and peripheral interface 3903 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 3903 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 3904, touch display screen 3905, camera 3906, audio circuitry 3907, positioning component 3908, and power source 3909.

Peripheral interface 3903 can be used to connect at least one I/O (Input/Output) related peripheral to processor 3901 and memory 3902. In some embodiments, the processor 3901, the memory 3902, and the peripheral interface 3903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 3901, the memory 3902, and the peripheral interface 3903 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.

The Radio Frequency circuit 3904 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 3904 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 3904 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 3904 can communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 3904 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.

The display screen 3905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 3905 is a touch display screen, it also has the ability to acquire touch signals on or over its surface. The touch signal may be input to the processor 3901 as a control signal for processing. At this point, the display screen 3905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 3905, disposed on the front panel of the terminal 3900; in other embodiments, there may be at least two display screens 3905, each disposed on a different surface of the terminal 3900 or in a folded design; in still other embodiments, the display screen 3905 may be a flexible display disposed on a curved or folded surface of the terminal 3900. The display screen 3905 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display screen 3905 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).

The camera assembly 3906 is used to capture images or video. Optionally, the camera assembly 3906 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 3906 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.

The audio circuit 3907 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 3901 for processing or to the radio frequency circuit 3904 for voice communication. For stereo capture or noise reduction, multiple microphones may be provided, each at a different location of the terminal 3900. The microphone may also be an array microphone or an omnidirectional pick-up microphone. The speaker converts electrical signals from the processor 3901 or the radio frequency circuit 3904 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 3907 may also include a headphone jack.

The positioning component 3908 is used to locate the current geographic location of the terminal 3900 to implement navigation or LBS (Location Based Service). The positioning component 3908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.

Power supply 3909 is used to supply power to the various components in terminal 3900. The power supply 3909 may be an alternating-current supply, a direct-current supply, a disposable battery, or a rechargeable battery. When power supply 3909 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery charged through a wired line or a wireless rechargeable battery charged through a wireless coil. The rechargeable battery may also support fast-charge technology.

In some embodiments, the terminal 3900 also includes one or more sensors 3910. The one or more sensors 3910 include, but are not limited to: an acceleration sensor 3911, a gyro sensor 3912, a pressure sensor 3913, a fingerprint sensor 3914, an optical sensor 3915, and a proximity sensor 3916.

The acceleration sensor 3911 may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the terminal 3900. For example, the acceleration sensor 3911 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 3901 may control the touch display screen 3905 to display the user interface in a landscape or portrait view based on the gravitational acceleration signal collected by the acceleration sensor 3911. The acceleration sensor 3911 may also be used to collect game or user motion data.

The gyroscope sensor 3912 may detect the body direction and rotation angle of the terminal 3900, and may cooperate with the acceleration sensor 3911 to capture the user's 3D actions on the terminal 3900. Based on the data collected by the gyroscope sensor 3912, the processor 3901 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization during shooting, game control, and inertial navigation.

Pressure sensors 3913 may be disposed on the side frame of the terminal 3900 and/or under the touch display screen 3905. When the pressure sensor 3913 is disposed on the side frame, it can detect the user's grip signal on the terminal 3900, and the processor 3901 performs left/right-hand recognition or shortcut operations according to the grip signal. When the pressure sensor 3913 is disposed under the touch display screen 3905, the processor 3901 controls the operability controls on the UI according to the pressure the user applies to the touch display screen 3905. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.

The fingerprint sensor 3914 is used to collect a fingerprint of the user, and the processor 3901 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 3914, or the fingerprint sensor 3914 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, processor 3901 authorizes the user to perform relevant sensitive operations including unlocking a screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 3914 may be disposed on the front, back, or side of the terminal 3900. When a physical key or vendor Logo is provided on the terminal 3900, the fingerprint sensor 3914 may be integrated with the physical key or vendor Logo.

The optical sensor 3915 is used to collect the ambient light intensity. In one embodiment, the processor 3901 may control the display brightness of the touch display screen 3905 based on the ambient light intensity collected by the optical sensor 3915: when the ambient light intensity is high, the display brightness is increased; when the ambient light intensity is low, the display brightness is reduced. In another embodiment, the processor 3901 may also dynamically adjust the shooting parameters of the camera assembly 3906 based on the ambient light intensity collected by the optical sensor 3915.

The proximity sensor 3916, also known as a distance sensor, is typically disposed on the front panel of the terminal 3900 and is used to capture the distance between the user and the front face of the terminal 3900. In one embodiment, when the proximity sensor 3916 detects that the distance between the user and the front face of the terminal 3900 gradually decreases, the processor 3901 controls the touch display screen 3905 to switch from the bright-screen state to the off-screen state; when the proximity sensor 3916 detects that the distance gradually increases, the processor 3901 controls the touch display screen 3905 to switch from the off-screen state to the bright-screen state.

Those skilled in the art will appreciate that the architecture shown in fig. 27 does not constitute a limitation of terminal 3900 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.

The present application further provides a computer device, which includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for judging a virtual surface in a virtual world provided in any of the above exemplary embodiments.

The present application further provides a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or an instruction set is stored, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method for judging a virtual surface in a virtual world provided in any of the above exemplary embodiments.

It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.

The above description is only an exemplary embodiment of the present application and is not intended to limit it. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.
