Interaction detection method and device of virtual model, electronic equipment and storage medium

Document No.: 159307 · Published: 2021-10-29

Note: This technique, "Interaction detection method and device of virtual model, electronic equipment and storage medium", was created by Wu Haishan and Zhou Shitao on 2021-07-30. Abstract: The embodiment of the application discloses an interaction detection method and apparatus for a virtual model, an electronic device, and a storage medium. When a target decal is rendered in a region to be decaled in a virtual scene, a plurality of colliders corresponding to the region to be decaled are generated; if a virtual model corresponding to a virtual character is detected to collide with a collider, the virtual character is determined to be located in the game effect region corresponding to the target decal. By generating a plurality of colliders in the region to be decaled, a detection region that conforms to the target decal is formed, so that a player can accurately inflict damage or other skill effects on virtual characters inside the game effect region, improving detection accuracy.

1. A method for detecting interaction of a virtual model, the method comprising:

in response to a game effect trigger instruction, determining a region to be decaled on a virtual model in a virtual scene according to decal information of a target decal corresponding to the game effect trigger instruction, wherein the region to be decaled is configured as a region in which the target decal is generated on the virtual model;

generating a plurality of collision points corresponding to the region to be decaled based on a virtual model to be decaled and the decal information, and generating a collider group from the plurality of collision points, wherein the collider group is composed of colliders generated at the collision points, and the virtual model to be decaled is the virtual model corresponding to the region to be decaled;

and when a collision between a virtual character and the collider group is detected, determining that the virtual character is located in a game effect region corresponding to the target decal.

2. The interaction detection method of a virtual model according to claim 1, wherein the decal information includes a decal size, a decal anchor point, and a decal height;

wherein generating a plurality of collision points corresponding to the region to be decaled based on the virtual model to be decaled and the decal information, and generating a collider group from the plurality of collision points, comprises:

determining a current position of the decal anchor point of the target decal rendered in the region to be decaled;

determining a plurality of target endpoints based on the current position of the decal anchor point of the target decal, the decal size, and a specified height;

generating a plurality of target rays according to the decal height and the plurality of target endpoints;

and generating a collider group based on the plurality of target rays and the virtual model to be decaled.

3. The interaction detection method of a virtual model according to claim 2, further comprising, before determining the plurality of target endpoints based on the current position of the decal anchor point of the target decal, the decal size, and the specified height:

generating a projection plane parallel to the target decal along the positive direction of a first coordinate axis of a virtual scene coordinate system according to the current position of the decal anchor point of the target decal, the decal size, and the specified height, wherein the first coordinate axis is a coordinate axis perpendicular to the target decal.

4. The interaction detection method of a virtual model according to claim 3, wherein the target endpoints include a reference point, first endpoints, and second endpoints;

wherein determining a plurality of target endpoints based on the current position of the decal anchor point of the target decal, the decal size, and the specified height, comprises:

determining a projection of the decal anchor point of the target decal on the projection plane based on the current position of the decal anchor point;

generating a reference point on the projection plane according to the position of the projection on the projection plane;

generating, with the reference point as a starting point, a plurality of first endpoints arranged at equal intervals of a first preset distance along the positive and negative directions of a first coordinate axis of a projection plane coordinate system, wherein the first endpoints all lie in the projection plane;

and generating, with the reference point and each first endpoint respectively as starting points, a plurality of second endpoints arranged at equal intervals of a second preset distance along the positive and negative directions of a second coordinate axis of the projection plane coordinate system, wherein the second endpoints all lie in the projection plane, and the first coordinate axis is perpendicular to the second coordinate axis.

5. The interaction detection method of a virtual model according to claim 4, wherein the target rays include a reference point ray, first rays, and second rays;

wherein generating a plurality of target rays according to the decal height and the plurality of target endpoints comprises:

emitting rays from the reference point, the plurality of first endpoints, and the plurality of second endpoints along the negative direction of the first coordinate axis of the virtual scene coordinate system, so as to generate a reference point ray corresponding to the reference point, first rays corresponding to the first endpoints, and second rays corresponding to the second endpoints, wherein the lengths of the reference point ray, the first rays, and the second rays are all equal to the decal height;

and generating a plurality of colliders based on the reference point ray, the first rays, the second rays, and the virtual model to be decaled.

6. The interaction detection method of a virtual model according to claim 5, wherein generating a plurality of colliders based on the reference point ray, the first rays, the second rays, and the virtual model to be decaled comprises:

determining the intersection points of the reference point ray, the first rays, and the second rays with the virtual model to be decaled, respectively, so as to obtain a plurality of collision points;

and generating a plurality of colliders based on the plurality of collision points and preset collider attribute information, and generating a collider group from the plurality of colliders.

7. The interaction detection method of a virtual model according to claim 1, wherein the colliders are shaped as spheres or cylinders, and two adjacent colliders may overlap each other.

8. The interaction detection method of a virtual model according to claim 1, wherein after determining that the virtual character is located in the game effect region corresponding to the target decal, the method further comprises:

adjusting attribute information of the virtual character based on a game effect corresponding to the game effect region.

9. The interaction detection method of a virtual model according to claim 8, wherein the attribute information of the virtual character includes a life value, each collider is associated with a corresponding adjustment parameter, the adjustment parameter is used to adjust the life value, and the game effect is: deducting the adjustment parameter from the life value;

wherein adjusting the attribute information of the virtual character based on the game effect corresponding to the game effect region comprises:

when a collision between a virtual model corresponding to the virtual character and the collider group is detected, acquiring an intersection point of the virtual model corresponding to the virtual character and a collider in the collider group;

determining a target collider where the intersection point is located, and acquiring a target adjustment parameter corresponding to the target collider;

and deducting the target adjustment parameter from the life value to obtain an adjusted life value.

10. The interaction detection method of a virtual model according to claim 9, wherein after deducting the target adjustment parameter from the life value to obtain the adjusted life value, the method further comprises:

determining whether the adjusted life value is lower than a preset life value;

and if not, adjusting the adjusted life value based on the target adjustment parameter.

11. The interaction detection method of a virtual model according to claim 8, wherein the attribute information of the virtual character includes a life value, and the virtual model corresponding to the virtual character is composed of a plurality of limb models;

wherein adjusting the attribute information of the virtual character based on the game effect corresponding to the game effect region comprises:

when a collision between a virtual model corresponding to the virtual character and the collider group is detected, acquiring an intersection point of the virtual model corresponding to the virtual character and a collider in the collider group;

determining a target limb model where the intersection point is located;

and adjusting the life value of the virtual character based on a life deduction value corresponding to the target limb model in the game effect of the target decal.

12. The interaction detection method of a virtual model according to claim 8, wherein the attribute information of the virtual character includes a motion state;

wherein adjusting the attribute information of the virtual character based on the game effect corresponding to the game effect region comprises:

when a collision between a virtual model corresponding to the virtual character and the collider group is detected, acquiring an intersection point of the virtual model corresponding to the virtual character and a collider in the collider group;

determining a target collider where the intersection point is located, and acquiring a state indication parameter corresponding to the target collider;

and adjusting the motion state of the virtual character based on the state indication parameter corresponding to the target collider.

13. An interaction detection apparatus for a virtual model, comprising:

a first determining unit, configured to respond to a game effect trigger instruction and determine a region to be decaled on a virtual model in a virtual scene according to decal information of a target decal corresponding to the game effect trigger instruction, wherein the region to be decaled is configured as a region in which the target decal is generated on the virtual model;

a first generating unit, configured to generate a plurality of collision points corresponding to the region to be decaled based on a virtual model to be decaled and the decal information, and generate a collider group from the plurality of collision points, wherein the collider group is composed of colliders generated at the collision points, and the virtual model to be decaled is the virtual model corresponding to the region to be decaled;

and a second determining unit, configured to determine that the virtual character is located in a game effect region corresponding to the target decal when a collision between the virtual character and the collider group is detected.

14. An electronic device, comprising a processor and a memory, the memory storing a plurality of instructions; wherein the processor loads the instructions from the memory to perform the steps of the interaction detection method of a virtual model according to any one of claims 1 to 12.

15. A computer-readable storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to perform the steps of the interaction detection method of a virtual model according to any one of claims 1 to 12.

Technical Field

The present application relates to the technical field of game processing, and in particular to an interaction detection method and apparatus for a virtual model, an electronic device, and a storage medium.

Background

With the continuous development of computer communication technology, entertainment games that can run on terminals, such as multiplayer online action competitive games developed on client or server architectures, have emerged to meet people's pursuit of a richer mental life. In an action competitive game, a player operates a virtual character on the screen and can perform attacks and other operations in the game scene from the third-person perspective of that character, experiencing the visual impact of the game in an immersive way, which greatly enhances the player's initiative and the realism of the game.

Currently, in action competitive games, a player can control a virtual character to attack virtual characters controlled by other players in order to win. For example, a player may direct a virtual character to release a skill or throw a virtual prop, creating a skill release area in the virtual scene that inflicts damage or other skill effects on virtual characters located within it. Current games generally use Decal technology to form the skill release area in the virtual scene, and generate a single collision box from the decal as the detection area to determine whether a player stands inside the skill release area. In actual operation, however, a single collision box may not conform to the shape of the decal, so there are unreasonable cases in which a player inaccurately inflicts damage or other skill effects on virtual characters controlled by other players.

Disclosure of Invention

Embodiments of the present application provide an interaction detection method and apparatus for a virtual model, an electronic device, and a storage medium. A plurality of colliders are generated in the region to be decaled with a target decal, so as to form a detection region that conforms to the target decal, allowing a player to accurately inflict damage or other skill effects on virtual characters within the game effect region and improving detection accuracy.

An embodiment of the present application provides an interaction detection method of a virtual model, comprising the following steps:

in response to a game effect trigger instruction, determining a region to be decaled on a virtual model in a virtual scene according to decal information of a target decal corresponding to the game effect trigger instruction, wherein the region to be decaled is configured as a region in which the target decal is generated on the virtual model;

generating a plurality of collision points corresponding to the region to be decaled based on a virtual model to be decaled and the decal information, and generating a collider group from the plurality of collision points, wherein the collider group is composed of colliders generated at the collision points, and the virtual model to be decaled is the virtual model corresponding to the region to be decaled;

and when a collision between a virtual character and the collider group is detected, determining that the virtual character is located in a game effect region corresponding to the target decal.
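As a minimal illustration (not the claimed implementation), the final detection step reduces to a membership test of the character's position against the collider group. The names `SphereCollider` and `collides` below are hypothetical, and sphere colliders are assumed per the optional sphere/cylinder embodiment:

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SphereCollider:
    center: Vec3
    radius: float

def collides(point: Vec3, group: List[SphereCollider]) -> bool:
    """True if the point lies inside any collider of the group, i.e. the
    character stands inside the game effect region of the decal."""
    for c in group:
        dist_sq = sum((point[i] - c.center[i]) ** 2 for i in range(3))
        if dist_sq <= c.radius ** 2:
            return True
    return False
```

Because the group is built from many small colliders that follow the decal, this test tracks an irregular decal outline far more closely than a single bounding box would.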

Optionally, the decal information includes a decal size, a decal anchor point, and a decal height;

wherein generating a plurality of collision points corresponding to the region to be decaled based on the virtual model to be decaled and the decal information, and generating a collider group from the plurality of collision points, includes:

determining a current position of the decal anchor point of the target decal rendered in the region to be decaled;

determining a plurality of target endpoints based on the current position of the decal anchor point of the target decal, the decal size, and a specified height;

generating a plurality of target rays according to the decal height and the plurality of target endpoints;

and generating a collider group based on the plurality of target rays and the virtual model to be decaled.

Optionally, before determining the plurality of target endpoints based on the current position of the decal anchor point of the target decal, the decal size, and the specified height, the method further includes:

generating a projection plane parallel to the target decal along the positive direction of a first coordinate axis of a virtual scene coordinate system according to the current position of the decal anchor point of the target decal, the decal size, and the specified height, wherein the first coordinate axis is a coordinate axis perpendicular to the target decal.

Optionally, the target endpoints include a reference point, first endpoints, and second endpoints;

wherein determining a plurality of target endpoints based on the current position of the decal anchor point of the target decal, the decal size, and the specified height, includes:

determining a projection of the decal anchor point of the target decal on the projection plane based on the current position of the decal anchor point;

generating a reference point on the projection plane according to the position of the projection on the projection plane;

generating, with the reference point as a starting point, a plurality of first endpoints arranged at equal intervals of a first preset distance along the positive and negative directions of a first coordinate axis of a projection plane coordinate system, wherein the first endpoints all lie in the projection plane;

and generating, with the reference point and each first endpoint respectively as starting points, a plurality of second endpoints arranged at equal intervals of a second preset distance along the positive and negative directions of a second coordinate axis of the projection plane coordinate system, wherein the second endpoints all lie in the projection plane, and the first coordinate axis is perpendicular to the second coordinate axis.
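The two endpoint-generation steps above amount to walking a grid outward from the reference point on the projection plane. The sketch below is a hedged illustration under assumed names: 2-D plane coordinates, and `half_width`/`half_height` standing in for the extents derived from the decal size:

```python
from typing import List, Tuple

def generate_endpoints(reference: Tuple[float, float],
                       half_width: float, half_height: float,
                       first_step: float, second_step: float) -> List[Tuple[float, float]]:
    """Walk outward from the reference point along the first axis at the first
    preset distance, then from the reference point and every first endpoint
    along the second axis at the second preset distance."""
    # First endpoints: the reference column plus columns at +/- multiples of
    # first_step within the plane's half-width.
    xs = [reference[0]]
    k = 1
    while k * first_step <= half_width:
        xs += [reference[0] + k * first_step, reference[0] - k * first_step]
        k += 1
    # Second endpoints: from each column, walk +/- multiples of second_step
    # within the plane's half-height.
    endpoints = []
    for x in xs:
        endpoints.append((x, reference[1]))
        k = 1
        while k * second_step <= half_height:
            endpoints += [(x, reference[1] + k * second_step),
                          (x, reference[1] - k * second_step)]
            k += 1
    return endpoints
```

With a 2 x 2 extent and unit spacing this produces a 3 x 3 grid of endpoints centred on the reference point, one ray origin per grid cell.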

Optionally, the target rays include a reference point ray, first rays, and second rays;

wherein generating a plurality of target rays according to the decal height and the plurality of target endpoints includes:

emitting rays from the reference point, the plurality of first endpoints, and the plurality of second endpoints along the negative direction of the first coordinate axis of the virtual scene coordinate system, so as to generate a reference point ray corresponding to the reference point, first rays corresponding to the first endpoints, and second rays corresponding to the second endpoints, wherein the lengths of the reference point ray, the first rays, and the second rays are all equal to the decal height;

and generating a plurality of colliders based on the reference point ray, the first rays, the second rays, and the virtual model to be decaled.

Optionally, generating a plurality of colliders based on the reference point ray, the first rays, the second rays, and the virtual model to be decaled includes:

determining the intersection points of the reference point ray, the first rays, and the second rays with the virtual model to be decaled, respectively, so as to obtain a plurality of collision points;

and generating a plurality of colliders based on the plurality of collision points and preset collider attribute information, and generating a collider group from the plurality of colliders.
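A possible sketch of this ray-intersection step, under stated assumptions: the model surface is represented as a height function (`surface_height` is a hypothetical name, as is the per-collider `radius` drawn from the preset collider attribute information), rays are cast downward from each endpoint raised to the specified height, and a sphere collider is placed at each collision point:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class SphereCollider:
    center: Tuple[float, float, float]
    radius: float

def build_collider_group(endpoints: List[Tuple[float, float]],
                         specified_height: float,
                         decal_height: float,
                         surface_height: Callable[[float, float], float],
                         radius: float) -> List[SphereCollider]:
    """Cast a ray of length decal_height along the negative first coordinate
    axis from each endpoint raised to the specified height; where the ray
    reaches the model surface, place a collider at the collision point."""
    group = []
    for x, y in endpoints:
        z = surface_height(x, y)                  # intersection with the model
        if specified_height - z <= decal_height:  # ray is long enough to hit
            group.append(SphereCollider((x, y, z), radius))
    return group
```

On uneven terrain each collider lands on the surface directly under its endpoint, so the resulting group hugs the rendered decal instead of floating above or below it.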

Optionally, the colliders are shaped as spheres or cylinders, and two adjacent colliders may overlap each other.

Optionally, after determining that the virtual character is located in the game effect region corresponding to the target decal, the method further includes:

adjusting attribute information of the virtual character based on the game effect corresponding to the game effect region.

Optionally, the attribute information of the virtual character includes a life value, each collider is associated with a corresponding adjustment parameter, the adjustment parameter is used to adjust the life value, and the game effect is: deducting the adjustment parameter from the life value;

wherein adjusting the attribute information of the virtual character based on the game effect corresponding to the game effect region includes:

when a collision between a virtual model corresponding to the virtual character and the collider group is detected, acquiring an intersection point of the virtual model corresponding to the virtual character and a collider in the collider group;

determining a target collider where the intersection point is located, and acquiring a target adjustment parameter corresponding to the target collider;

and deducting the target adjustment parameter from the life value to obtain an adjusted life value.
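The deduction logic above can be sketched as follows. This is an illustrative reading, not the claimed implementation: sphere colliders are assumed, and the `adjustment` field and helper names are hypothetical:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class DamageCollider:
    center: Vec3
    radius: float
    adjustment: float  # adjustment parameter associated with this collider

def find_target_collider(point: Vec3,
                         group: List[DamageCollider]) -> Optional[DamageCollider]:
    """Return the collider containing the intersection point, if any."""
    for c in group:
        if sum((point[i] - c.center[i]) ** 2 for i in range(3)) <= c.radius ** 2:
            return c
    return None

def apply_game_effect(life_value: float, point: Vec3,
                      group: List[DamageCollider]) -> float:
    """Deduct the target collider's adjustment parameter from the life value."""
    target = find_target_collider(point, group)
    return life_value - target.adjustment if target else life_value
```

Because each collider carries its own adjustment parameter, different parts of the same decal can deal different damage, e.g. stronger at the centre than at the rim.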

Optionally, after deducting the target adjustment parameter from the life value to obtain the adjusted life value, the method further includes:

determining whether the adjusted life value is lower than a preset life value;

and if not, adjusting the adjusted life value based on the target adjustment parameter.

Optionally, the attribute information of the virtual character includes a life value, and the virtual model corresponding to the virtual character is composed of a plurality of limb models;

wherein adjusting the attribute information of the virtual character based on the game effect corresponding to the game effect region includes:

when a collision between a virtual model corresponding to the virtual character and the collider group is detected, acquiring an intersection point of the virtual model corresponding to the virtual character and a collider in the collider group;

determining a target limb model where the intersection point is located;

and adjusting the life value of the virtual character based on a life deduction value corresponding to the target limb model in the game effect of the target decal.
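The limb-based variant can be sketched as a lookup of the hit limb in a per-decal deduction table. The limb names and deduction values below are hypothetical examples, not values from the application:

```python
from typing import Dict

def limb_adjusted_life(life_value: float, target_limb: str,
                       deduction_table: Dict[str, float]) -> float:
    """Deduct the life deduction value configured for the hit limb model in
    the decal's game effect; limbs absent from the table deduct nothing."""
    return life_value - deduction_table.get(target_limb, 0.0)

# Hypothetical per-limb deduction values for one decal's game effect.
DEDUCTIONS = {"head": 50.0, "torso": 30.0, "leg": 15.0}
```

This lets the same decal punish a headshot-style hit harder than a glancing hit on a leg, using only the limb model that contains the intersection point.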

Optionally, the attribute information of the virtual character includes a motion state;

wherein adjusting the attribute information of the virtual character based on the game effect corresponding to the game effect region includes:

when a collision between a virtual model corresponding to the virtual character and the collider group is detected, acquiring an intersection point of the virtual model corresponding to the virtual character and a collider in the collider group;

determining a target collider where the intersection point is located, and acquiring a state indication parameter corresponding to the target collider;

and adjusting the motion state of the virtual character based on the state indication parameter corresponding to the target collider.
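The motion-state variant admits a similar sketch: each collider carries a state indication parameter, and hitting it maps the character to a new motion state. The parameter values and state names below are hypothetical illustrations:

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class VirtualCharacter:
    motion_state: str = "running"

# Hypothetical mapping from a collider's state indication parameter to the
# motion state imposed on a character hit by that collider.
STATE_EFFECTS: Dict[str, Optional[str]] = {
    "slow": "walking",
    "freeze": "frozen",
    "none": None,  # no change to the motion state
}

def adjust_motion_state(character: VirtualCharacter, state_indication: str) -> None:
    """Apply the state indication parameter of the target collider."""
    new_state = STATE_EFFECTS.get(state_indication)
    if new_state is not None:
        character.motion_state = new_state
```

Since the parameter is stored per collider, one decal can mix effects, e.g. a freezing core surrounded by a merely slowing rim.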

Correspondingly, an embodiment of the present application further provides an interaction detection apparatus for a virtual model, the apparatus comprising:

a first determining unit, configured to respond to a game effect trigger instruction and determine a region to be decaled on a virtual model in a virtual scene according to decal information of a target decal corresponding to the game effect trigger instruction, wherein the region to be decaled is configured as a region in which the target decal is generated on the virtual model;

a first generating unit, configured to generate a plurality of collision points corresponding to the region to be decaled based on a virtual model to be decaled and the decal information, and generate a collider group from the plurality of collision points, wherein the collider group is composed of colliders generated at the collision points, and the virtual model to be decaled is the virtual model corresponding to the region to be decaled;

and a second determining unit, configured to determine that the virtual character is located in a game effect region corresponding to the target decal when a collision between the virtual character and the collider group is detected.

In some embodiments, the apparatus further comprises:

a third determining unit, configured to determine a current position of the decal anchor point of the target decal rendered in the region to be decaled,

and further configured to determine a plurality of target endpoints based on the current position of the decal anchor point of the target decal, the decal size, and a specified height;

a second generating unit, configured to generate a plurality of target rays according to the decal height and the plurality of target endpoints,

and to generate a collider group based on the plurality of target rays and the virtual model to be decaled.

In some embodiments, the apparatus further comprises:

a third generating unit, configured to generate a projection plane parallel to the target decal along the positive direction of a first coordinate axis of a virtual scene coordinate system according to the current position of the decal anchor point of the target decal, the decal size, and the specified height, wherein the first coordinate axis is a coordinate axis perpendicular to the target decal.

In some embodiments, the apparatus further comprises:

a fourth determining unit, configured to determine a projection of the decal anchor point of the target decal on the projection plane based on the current position of the decal anchor point of the target decal;

a fourth generating unit, configured to:

generate a reference point on the projection plane according to the position of the projection on the projection plane;

generate, with the reference point as a starting point, a plurality of first endpoints arranged at equal intervals of a first preset distance along the positive and negative directions of a first coordinate axis of a projection plane coordinate system, wherein the first endpoints all lie in the projection plane;

and generate, with the reference point and each first endpoint respectively as starting points, a plurality of second endpoints arranged at equal intervals of a second preset distance along the positive and negative directions of a second coordinate axis of the projection plane coordinate system, wherein the second endpoints all lie in the projection plane, and the first coordinate axis is perpendicular to the second coordinate axis.

In some embodiments, the apparatus further comprises:

a fifth generating unit, configured to:

emit rays from the reference point, the plurality of first endpoints, and the plurality of second endpoints along the negative direction of the first coordinate axis of the virtual scene coordinate system, so as to generate a reference point ray corresponding to the reference point, first rays corresponding to the first endpoints, and second rays corresponding to the second endpoints, wherein the lengths of the reference point ray, the first rays, and the second rays are all equal to the decal height;

and generate a plurality of colliders based on the reference point ray, the first rays, the second rays, and the virtual model to be decaled.

In some embodiments, the apparatus further comprises:

a fifth determining unit, configured to determine the intersection points of the reference point ray, the first rays, and the second rays with the virtual model to be decaled, respectively, so as to obtain a plurality of collision points;

a sixth generating unit, configured to generate a plurality of colliders based on the plurality of collision points and preset collider attribute information, and generate a collider group from the plurality of colliders.

In some embodiments, the apparatus further comprises:

a first adjusting unit, configured to adjust attribute information of the virtual character based on the game effect corresponding to the game effect region.

In some embodiments, the apparatus further comprises:

a first obtaining unit, configured to, when a collision between a virtual model corresponding to the virtual character and the collider group is detected, acquire an intersection point of the virtual model corresponding to the virtual character and a collider in the collider group;

a sixth determining unit, configured to determine a target collider where the intersection point is located, and acquire a target adjustment parameter corresponding to the target collider;

a first processing unit, configured to deduct the target adjustment parameter from the life value to obtain an adjusted life value.

In some embodiments, the apparatus further comprises:

a seventh determining unit, configured to determine whether the adjusted life value is lower than a preset life value, and if not, to adjust the adjusted life value based on the target adjustment parameter.

In some embodiments, the apparatus further comprises:

a second obtaining unit, configured to, when a collision between a virtual model corresponding to the virtual character and the collider group is detected, acquire an intersection point of the virtual model corresponding to the virtual character and a collider in the collider group;

a seventh determining unit, configured to determine a target limb model where the intersection point is located;

and a second adjusting unit, configured to adjust the life value of the virtual character based on the life deduction value corresponding to the target limb model in the game effect of the target decal.

In some embodiments, the apparatus further comprises:

a third obtaining unit, configured to, when a collision between a virtual model corresponding to the virtual character and the collider group is detected, acquire an intersection point of the virtual model corresponding to the virtual character and a collider in the collider group;

an eighth determining unit, configured to determine a target collider where the intersection point is located, and acquire a state indication parameter corresponding to the target collider;

and a third adjusting unit, configured to adjust the motion state of the virtual character based on the state indication parameter corresponding to the target collider.

Accordingly, an electronic device is further provided in an embodiment of the present application, and includes a processor, a memory, and a computer program stored on the memory and capable of running on the processor, where the computer program, when executed by the processor, implements the steps of any one of the interaction detection methods for a virtual model as described above.

Furthermore, an embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the interaction detection methods of the virtual model described above.

The embodiment of the application provides an interaction detection method and apparatus for a virtual model, an electronic device, and a storage medium. When a target decal is rendered in a to-be-decaled area of a virtual scene, a plurality of collision bodies corresponding to the to-be-decaled area are generated, and if the virtual model corresponding to a virtual character collides with any of the collision bodies, the virtual character is determined to be located in the game effect area corresponding to the target decal. By generating the plurality of collision bodies in the to-be-decaled area, a detection region that fits the target decal is formed, so that a player can accurately inflict damage or other skill effects on virtual characters within the game effect area, which improves detection accuracy.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.

Fig. 1 is a scene schematic diagram of an interaction detection system of a virtual model according to an embodiment of the present application.

Fig. 2 is a schematic flowchart of an interaction detection method for a virtual model according to an embodiment of the present disclosure.

Fig. 3 is an application scenario diagram of an interaction detection method for a virtual model according to an embodiment of the present application.

Fig. 4 is a schematic structural diagram of a projection plane provided in an embodiment of the present application.

Fig. 5 is another structural schematic diagram of a projection plane provided in an embodiment of the present application.

Fig. 6 is a schematic view of another application scenario of the interaction detection method for a virtual model according to the embodiment of the present application.

Fig. 7 is a schematic view of another application scenario of the interaction detection method for a virtual model according to the embodiment of the present application.

Fig. 8 is a schematic view of another application scenario of the interaction detection method for a virtual model according to the embodiment of the present application.

Fig. 9 is a schematic structural diagram of an interaction detection apparatus of a virtual model according to an embodiment of the present application.

Fig. 10 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

The embodiment of the application provides an interaction detection method and device of a virtual model, electronic equipment and a storage medium. Specifically, the interaction detection method of the virtual model in the embodiment of the present application may be executed by an electronic device, where the electronic device may be a terminal or a server. The terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game machine, a Personal Computer (PC), or a Personal Digital Assistant (PDA), and may further include a client, which may be a game application client, a browser client carrying a game program, an instant messaging client, or the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and big data and artificial intelligence platforms.

For example, when the interaction detection method of the virtual model is operated on the terminal, the terminal device stores a game application program and is used for presenting a virtual scene in a game screen. The terminal device is used for interacting with a user through a graphical user interface, for example, downloading and installing a game application program through the terminal device and running the game application program. The manner in which the terminal device provides the graphical user interface to the user may include a variety of ways, for example, the graphical user interface may be rendered for display on a display screen of the terminal device or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting a graphical user interface including a game screen and receiving operation instructions generated by a user acting on the graphical user interface, and a processor for executing the game, generating the graphical user interface, responding to the operation instructions, and controlling display of the graphical user interface on the touch display screen.

For example, when the interaction detection method of the virtual model runs on a server, the game may be a cloud game. Cloud gaming refers to a game mode based on cloud computing. In the running mode of a cloud game, the running body of the game application program is separated from the body that presents the game screen, and the storage and execution of the interaction detection method are completed on the cloud game server. The game screen is presented by a cloud game client, which is mainly used for receiving and sending game data and presenting the game screen; for example, the cloud game client may be a display device with a data transmission function near the user side, such as a mobile terminal, a television, a computer, a palm computer, or a personal digital assistant, but the terminal device that performs game data processing is the cloud game server in the cloud. When a game is played, the user operates the cloud game client to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as game screens, and returns the data to the cloud game client through the network; finally, the cloud game client decodes the data and outputs the game screen.

Referring to fig. 1, fig. 1 is a schematic view of a scene of an interaction detection system of a virtual model according to an embodiment of the present disclosure. The system may include at least one terminal, at least one server, at least one database, and a network. The terminal held by the user can be connected to servers of different games through a network. A terminal is any device having computing hardware capable of supporting and executing a software product corresponding to a game. In addition, the terminal has one or more multi-touch sensitive screens for sensing and obtaining input of a user through a touch or slide operation performed at a plurality of points of one or more touch display screens. In addition, when the system includes a plurality of terminals, a plurality of servers, and a plurality of networks, different terminals may be connected to each other through different networks and through different servers. The network may be a wireless network or a wired network, such as a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a cellular network, a 2G network, a 3G network, a 4G network, a 5G network, etc. In addition, different terminals may be connected to other terminals or to a server using their own bluetooth network or hotspot network. For example, multiple users may be online through different terminals to connect and synchronize with each other over a suitable network to support multiplayer gaming. Additionally, the system may include a plurality of databases coupled to different servers and in which information relating to the gaming environment may be stored continuously as different users play the multiplayer game online.

The embodiment of the application provides an interaction detection method of a virtual model, which can be executed by a terminal or a server. The embodiment of the present application is described by taking an interaction detection method of a virtual model as an example, where the interaction detection method is executed by a terminal. The terminal comprises a touch display screen and a processor, wherein the touch display screen is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface. When a user operates the graphical user interface through the touch display screen, the graphical user interface can control the local content of the terminal through responding to the received operation instruction, and can also control the content of the opposite-end server through responding to the received operation instruction. For example, the operation instruction generated by the user acting on the graphical user interface comprises an instruction for starting a game application, and the processor is configured to start the game application after receiving the instruction provided by the user for starting the game application. Further, the processor is configured to render and draw a graphical user interface associated with the game on the touch display screen. A touch display screen is a multi-touch sensitive screen capable of sensing a touch or slide operation performed at a plurality of points on the screen at the same time. The user uses a finger to perform touch operation on the graphical user interface, and when the graphical user interface detects the touch operation, different virtual objects in the graphical user interface of the game are controlled to perform actions corresponding to the touch operation. For example, the game may be any one of a leisure game, an action game, a role-playing game, a strategy game, a sports game, a game of chance, and the like. 
Wherein the game may include a virtual scene of the game drawn on a graphical user interface. Further, one or more virtual objects, such as virtual characters, controlled by the user (or player) may be included in the virtual scene of the game. Additionally, one or more obstacles, such as railings, ravines, walls, etc., may also be included in the virtual scene of the game to limit movement of the virtual objects, e.g., to limit movement of one or more objects to a particular area within the virtual scene. Optionally, the virtual scene of the game also includes one or more elements, such as skills, points, character health, energy, etc., to provide assistance to the player, provide virtual services, increase points related to player performance, etc. In addition, the graphical user interface may also present one or more indicators to provide instructional information to the player. For example, a game may include a player-controlled virtual object and one or more other virtual objects (such as an enemy character). In one embodiment, one or more other virtual objects are controlled by other players of the game. For example, one or more other virtual objects may be computer controlled, such as a robot using Artificial Intelligence (AI) algorithms, to implement a human-machine fight mode. For example, the virtual objects possess various skills or capabilities that the game player uses to achieve the goal. For example, the virtual object possesses one or more weapons, props, tools, etc. that may be used to eliminate other objects from the game. Such skills or capabilities may be activated by a player of the game using one of a plurality of preset touch operations with a touch display screen of the terminal. The processor may be configured to present a corresponding game screen in response to an operation instruction generated by a touch operation of a user.

It should be noted that the scene schematic diagram of the interaction detection system of the virtual model shown in fig. 1 is only an example, and the interaction detection system of the virtual model and the scene described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application, and it is obvious to a person skilled in the art that the technical solution provided in the embodiment of the present application is also applicable to similar technical problems.

In view of the foregoing problems, embodiments of the present application provide a method and an apparatus for detecting interaction of a virtual model, an electronic device, and a storage medium, which are described in detail below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.

The embodiment of the present application provides an interaction detection method for a virtual model, which may be executed by a terminal or a server.

Referring to fig. 2, fig. 2 is a schematic flow chart of an interaction detection method for a virtual model according to an embodiment of the present application, and a specific flow may include the following steps 101 to 103:

and 101, responding to a game effect triggering instruction, and determining a to-be-applied applique area of a virtual model in a virtual scene according to the applique information of a target applique corresponding to the game effect triggering instruction, wherein the to-be-applied applique area is configured as an area for generating the target applique on the virtual model.

In the embodiment of the application, a virtual scene is displayed on the game interface of the terminal. The virtual scene is a virtual environment displayed (or provided) when an application program runs on the terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual environment is used for a battle between at least two virtual characters, and virtual resources available to the at least two virtual characters are arranged in the virtual environment. One or more virtual characters are displayed in the virtual scene; a virtual character may be the master virtual character, a teammate virtual character that belongs to the same camp as the master virtual character, or an enemy virtual character that belongs to a camp hostile to the master virtual character. In the virtual scene, the master virtual character, teammate virtual characters, and enemy virtual characters may exist at the same time, which is described here by way of example and not by way of limitation.

A virtual character (or hero) refers to a movable object in the virtual environment, that is, a virtual object in the game controlled by a user or player through a terminal. In the embodiment of the present application, the master virtual character refers to the virtual object controlled by the current user through the terminal, that is, the virtual character controlled by the local user. A teammate virtual character in the same camp as the master virtual character, or an enemy virtual character in a camp hostile to it, refers to a virtual object controlled by another user through another terminal, that is, a virtual character controlled by a peer user.

When the master virtual character is controlled by the player, the game effect triggering instruction can be generated by touching a function control displayed in the virtual scene. For example, a throwing control may be displayed in the virtual scene and used to trigger a virtual prop. When the player performs a touch operation on the throwing control, a game effect triggering instruction is generated based on the touch operation, so that the virtual prop is thrown to a target position in the virtual scene and the decal corresponding to the virtual prop is rendered at the target position. For another example, a skill control may be displayed in the virtual scene and used to trigger a skill effect; when the player performs a touch operation on the skill control, a game effect triggering instruction is generated based on the touch operation, and the decal corresponding to the skill effect is rendered at the target position in the virtual scene.

For example, referring to fig. 3, a throwing control may be displayed in the virtual scene and used to trigger a virtual prop, which may be a "burning bomb". When the player performs a touch operation on the throwing control, a game effect triggering instruction is generated based on the touch operation and the burning bomb is thrown to a target position in the virtual scene. The to-be-decaled area is then determined based on the target position and the decal size corresponding to the burning bomb, so that the decal corresponding to the burning bomb clings to the to-be-decaled virtual model corresponding to this area, and the decal is rendered at the target position in the virtual scene.

It should be noted that the touch operation in the embodiment of the present application may be a touch operation performed by a player on a game interface through a touch display screen, for example, a touch operation generated by the player clicking or touching on the game interface with a finger. The player may also click on the game interface to generate a touch operation by controlling a mouse button, for example, the player clicks on the game interface to generate a touch operation by pressing a right mouse button.

Decals (decals) are relatively small geometric objects overlaid on normal objects in a virtual scene to dynamically change the appearance of an object's surface. For example, bullet holes, footprints, scratches, and cracks rendered in a virtual scene are all decals. A designer typically defines a decal as a rectangular area and projects it into the virtual scene along a certain direction, thereby forming a cuboid in three-dimensional space. The decal patch is where this cuboid first intersects the virtual model surface along the projection direction. After the triangles of the intersecting surfaces are extracted, they are clipped by the four bounding planes of the projected cuboid. The clipped triangles are then given the required decal texture by generating appropriate vertex texture coordinates. These mapped triangles are rendered on top of the virtual scene, sometimes using a parallax map to create an illusion of depth, and with a small depth offset (z-bias), usually a slight shift toward the near plane, to avoid depth conflicts (z-fighting) with the original surface. In this way, surface modifications such as bullet holes or scratch marks can be created.
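As an illustrative sketch only (the function name and vector layout are assumptions, not an engine API), the projected cuboid described above can be constructed from the decal rectangle and the projection direction:

```python
def decal_cuboid_corners(anchor, right, up, forward, size):
    """Return the 8 corners of the cuboid formed by projecting a
    rectangular decal along `forward`.

    anchor: center of the decal rectangle (x, y, z).
    right, up: unit vectors spanning the rectangle.
    forward: unit projection direction.
    size: (width, height, depth); depth is the projection distance.
    All names here are illustrative.
    """
    w, h, d = size

    def offset(p, v, s):
        # p + s * v, componentwise
        return tuple(pi + s * vi for pi, vi in zip(p, v))

    corners = []
    for sx in (-0.5, 0.5):          # left / right edge of the rectangle
        for sy in (-0.5, 0.5):      # bottom / top edge
            for sz in (0.0, 1.0):   # rectangle plane / far end of projection
                c = offset(offset(offset(anchor, right, sx * w), up, sy * h),
                           forward, sz * d)
                corners.append(c)
    return corners
```

The surface patch of the decal is then found where this cuboid first meets the model surface along `forward`.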

A virtual model is a model that simulates real things running in software, and is typically designed by a designer using a game engine, and may include virtual scene models, architectural models, game character models, and so forth. The virtual model may include model data such as skin data, physical collision volumes, and skeletal data.

The skin data may include a plurality of vertex data, and each vertex datum carries attribute information such as a weight, vertex position information, a normal, a triangle sequence, texture coordinates, and a vertex color. According to the triangle sequence, triangular faces can be formed from the sequence and the corresponding vertex data, and a plurality of triangular faces form a graphics mesh, that is, a triangle mesh.
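For illustration only, grouping a flat triangle index sequence together with its vertex data into triangular faces might look like the following sketch (the data layout is an assumption, not taken from a specific engine):

```python
def build_triangle_mesh(vertices, triangle_indices):
    """Group a flat index sequence into triangles of vertex positions.

    vertices: list of (x, y, z) vertex positions from the skin data.
    triangle_indices: flat list whose length is a multiple of 3; each
    consecutive triple selects the three corners of one triangular face.
    """
    assert len(triangle_indices) % 3 == 0
    return [
        (vertices[triangle_indices[i]],
         vertices[triangle_indices[i + 1]],
         vertices[triangle_indices[i + 2]])
        for i in range(0, len(triangle_indices), 3)
    ]
```

The resulting list of faces is the triangle mesh on which later ray intersections are performed.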

The combined action of physical collision volumes and rigid bodies gives virtual models physical behavior: rigid bodies allow a virtual model to be controlled and influenced by physics, and collision volumes allow virtual models to collide with one another. A physical collision volume may be a spherical collider, a capsule collider, a mesh collider, or the like. The embodiment of the present application uses a capsule collider, which is formed by capping both ends of a cylinder with a hemisphere; it can be combined with colliders of other irregular shapes and is well suited to game character models.
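A capsule collider test can be reduced to a distance check against the cylinder's axis segment. The following is a minimal point-containment sketch under that assumption, not engine code:

```python
def point_in_capsule(point, a, b, radius):
    """True if `point` lies inside a capsule collider: a cylinder from
    `a` to `b`, capped with hemispheres of the same radius at both ends."""
    ab = tuple(bi - ai for ai, bi in zip(a, b))
    ap = tuple(pi - ai for ai, pi in zip(a, point))
    ab_len2 = sum(c * c for c in ab)
    # Parameter of the closest point on segment a-b, clamped to [0, 1].
    t = 0.0 if ab_len2 == 0 else max(
        0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / ab_len2))
    closest = tuple(ai + t * ci for ai, ci in zip(a, ab))
    dist2 = sum((pi - ci) ** 2 for pi, ci in zip(point, closest))
    return dist2 <= radius * radius
```

Clamping `t` to [0, 1] is what produces the hemispherical caps at the two ends of the cylinder.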

And 102, generating a plurality of collision points corresponding to the to-be-applied area based on the to-be-applied virtual model and the application information, and generating a collision body group according to the plurality of collision points, wherein the collision body group is composed of collision bodies generated by the plurality of collision points, and the to-be-applied virtual model is a virtual model corresponding to the to-be-applied area.

In particular, the decal information includes the decal size, the decal anchor point, and the decal height. The terminal can render a target applique in the area to be appliqued based on the virtual model to be appliqued and the applique information; then, a plurality of collision points corresponding to the to-be-applied region are generated based on the to-be-applied virtual model and the application information, and a collision body group is generated according to the plurality of collision points, and the method can include:

determining the current position of an applique anchor point of a target applique rendered in an area to be appliqued;

determining a plurality of target endpoints based on a current location of a decal anchor for a target decal, a decal size, and a specified height;

generating a plurality of target rays according to the decal height and the plurality of target end points;

and generating a collision volume group based on the plurality of target rays and the virtual model to be pasted.

Optionally, before the step of "determining a plurality of target endpoints based on the current location of the decal anchor point for the target decal, the decal size, and the specified height", the method may comprise:

and generating a projection plane parallel to the target applique along the positive direction of a first coordinate axis of a virtual scene coordinate system according to the current position of the applique anchor point of the target applique, the applique size, and the designated height, wherein the first coordinate axis is a coordinate axis perpendicular to the target applique.

The first coordinate axis is the Z-axis of the world coordinate system of the game virtual engine, and the game engine may be an Unreal Engine (UE). The world coordinate system, also called the measurement coordinate system, is a three-dimensional rectangular coordinate system (xw, yw, zw); the spatial positions of the camera and the object to be measured can be described in it, and its position can be determined according to the actual situation.

Alternatively, the specified height may be determined based on the decal height. In general, if the decal anchor point is centered on the decal, the specified height may be one half of the decal height. If there is a special requirement, for example to increase the fault tolerance of hitting a player, the specified height can be reduced by a certain distance so that the detection boxes are generated below the decal height; the adjustment can start from half the decal height and continue until the actual requirement is met.

In one embodiment, the target endpoint includes a reference point, a first endpoint, and a second endpoint. The step "determine a plurality of target endpoints based on the current location of the decal anchor point for the target decal, the decal size, and the specified height," the method may comprise:

determining the projection of the decal anchor point of the target decal on the projection plane based on the current position of the decal anchor point of the target decal;

generating a reference point on the projection plane according to the position projected on the projection plane;

generating a plurality of first end points which are arranged at equal intervals by taking the reference point as a starting point based on a first preset distance along the positive direction of a first coordinate axis of a projection plane coordinate system and the negative direction of the first coordinate axis of the projection plane coordinate system, wherein the first end points are all positioned in a projection plane;

and respectively taking the reference point and the first end point as starting points, and generating a plurality of second end points which are arranged at equal intervals on the basis of a second preset distance along the positive direction of a second coordinate axis of the projection plane coordinate system and the negative direction of the second coordinate axis of the projection plane coordinate system, wherein the second end points are all positioned in the projection plane, and the first coordinate axis and the second coordinate axis are in a vertical relation.

The projection plane coordinate system is a map coordinate system generated based on the map; the first coordinate axis may be the Z-axis of the map coordinate system, the second coordinate axis may be the Y-axis of the map coordinate system, and the two axes are perpendicular to each other.

For example, referring to fig. 4 and fig. 5 together, looking at the projection plane along the Z-axis direction of the world coordinate system, the projection of the decal anchor point of the target decal on the projection plane is determined based on the current position of the decal anchor point, and a reference point is generated on the projection plane according to the projected position. Then, taking the reference point as a starting point, a first end point is generated at intervals of the first preset distance along the positive and negative directions of the first coordinate axis of the projection plane coordinate system, and the first end points do not exceed the projection plane. Taking the reference point and each first end point as starting points, a second end point is generated at intervals of the second preset distance along the positive and negative directions of the second coordinate axis of the projection plane coordinate system, and the second end points do not exceed the projection plane.
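The endpoint layout described above (reference point plus first and second endpoints, clamped to the projection plane) can be sketched in plane coordinates as follows; the function name and the (z, y) ordering are assumptions for illustration:

```python
def grid_endpoints(half_z, half_y, step_z, step_y):
    """Endpoints on the projection plane, in plane (z, y) coordinates.

    The reference point is the origin (projection of the decal anchor).
    First endpoints march from the reference point along +Z and -Z at
    equal intervals step_z; second endpoints march from the reference
    point and every first endpoint along +Y and -Y at intervals step_y.
    half_z / half_y bound the plane so no endpoint leaves it.
    """
    zs = [0.0]
    z = step_z
    while z <= half_z:
        zs.extend([z, -z])          # first endpoints, both directions
        z += step_z
    points = []
    for z in zs:
        points.append((z, 0.0))     # reference point / first endpoint itself
        y = step_y
        while y <= half_y:
            points.extend([(z, y), (z, -y)])   # second endpoints
            y += step_y
    return points
```

The result is a regular grid of endpoints covering the projection plane, one per future detection ray.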

Specifically, the object rays include a reference point ray, a first ray, and a second ray. To generate the plurality of target rays, the step "generate the plurality of target rays based on the decal height and the plurality of target endpoints", the method may comprise:

based on the reference point, the first end points and the second end points, emitting rays along the negative direction of the first coordinate axis of the virtual scene coordinate system to generate a reference point ray corresponding to the reference point, first rays corresponding to the first end points, and second rays corresponding to the second end points, wherein the lengths of the reference point ray, the first rays, and the second rays are all equal to the decal height;

and generating a plurality of collision volumes based on the reference point ray, the plurality of first rays, the plurality of second rays, and the virtual model to be decaled.

It should be noted that a Decal is essentially a detection box whose size is controlled by a three-dimensional Scale value; in the embodiment of the present application, the extent of the detection box along the X-axis of the map coordinate system can be taken as the decal height.
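Under the same illustrative assumptions as the endpoint grid, emitting one ray per endpoint along the negative world Z-axis with length equal to the decal height might be sketched as:

```python
def endpoint_rays(plane_points, plane_origin_world, plane_axes, decal_height):
    """Build one downward ray per plane endpoint.

    plane_points: (z, y) endpoints in projection-plane coordinates.
    plane_origin_world: world position of the plane's reference point.
    plane_axes: (z_axis, y_axis) world-space unit vectors of the plane.
    Each ray starts at the endpoint's world position and points along
    the negative Z-axis of the world, with length equal to decal_height.
    All names here are illustrative, not engine API.
    """
    z_axis, y_axis = plane_axes
    rays = []
    for pz, py in plane_points:
        origin = tuple(o + pz * za + py * ya
                       for o, za, ya in zip(plane_origin_world, z_axis, y_axis))
        rays.append((origin, (0.0, 0.0, -1.0), decal_height))
    return rays
```

Each ray is represented as (origin, direction, length), ready to be intersected with the virtual model to be decaled.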

In a specific embodiment, the step of "generating a plurality of colliders based on the reference point ray, the plurality of first rays, the plurality of second rays, and the virtual model to be decaled" may include:

determining the intersection points of the reference point rays, the first rays and the second rays and the virtual model to be subjected to applique respectively so as to obtain a plurality of collision points;

and generating a plurality of colliders based on the plurality of collision points and preset collider attribute information, and generating a collider group according to the plurality of colliders.

Alternatively, the collision bodies are shaped as spheres or cylinders, and two adjacent collision bodies may overlap with each other.

For example, referring to fig. 6, after determining the intersection points of the reference point ray, the first rays and the second rays with the virtual model to be applied, the terminal obtains a plurality of collision points. Then generating a plurality of collision bodies based on the plurality of collision points and preset collision body attribute information, and generating a collision body group according to the plurality of collision bodies; and a plurality of collision bodies are distributed in the area to be applique so as to realize the detection of other virtual models.

It should be noted that the shape and size of a collider (also referred to as a collision detection box) can be adjusted according to the shape of the to-be-decaled area to be detected, so as to accommodate monitoring regions of different shapes. For example, a spherical collider that has already been generated may be adjusted into a cylindrical collider.
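A minimal sketch of turning the ray hits into a collider group follows, with the real ray-versus-mesh query replaced by a stand-in surface-height function (an assumption made purely for illustration):

```python
def build_collider_group(rays, surface_height, radius):
    """Intersect each downward ray with the to-be-decaled surface and
    place a sphere collider at every hit point.

    rays: (origin, direction, length) triples pointing down world -Z.
    surface_height: callable (x, y) -> z giving the model surface under
    the decal (a stand-in for a real ray-vs-mesh query).
    Adjacent spheres may overlap so the group covers the decal without gaps.
    """
    colliders = []
    for (ox, oy, oz), _direction, length in rays:
        z = surface_height(ox, oy)
        if oz - length <= z <= oz:       # the ray actually reaches the surface
            colliders.append({"center": (ox, oy, z), "radius": radius})
    return colliders
```

Rays that do not reach the surface within the decal height produce no collider, so the group naturally hugs the decaled geometry.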

In step 103, when it is detected that the virtual character collides with the collision body group, it is determined that the virtual character is located in the game effect area corresponding to the target decal.

In order to enrich game play and make the game more interesting, after the step of "determining that the virtual character is located in the game effect area corresponding to the target decal", the method may include the following step:

and adjusting the attribute information of the virtual character based on the game effect corresponding to the game effect area.

In one embodiment, the attribute information of the virtual character includes a life value, each collision body is associated with a corresponding adjustment parameter, and the adjustment parameter is used to adjust the life value; the game effect is to deduct the adjustment parameter from the life value. The step of "adjusting the attribute information of the virtual character based on the game effect corresponding to the game effect area" may include:

when collision between a virtual model corresponding to a virtual character and the collision body group is detected, acquiring an intersection point of the virtual model corresponding to the virtual character and a collision body in the collision body group;

determining a target collision body where the intersection point is located, and acquiring target adjustment parameters corresponding to the target collision body;

and deducting the target adjustment parameter from the life value to obtain an adjusted life value.
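As a minimal illustration of this deduction step, consider the sketch below. The collider identifiers and parameter values are hypothetical; in the method above, the target adjustment parameter is simply whatever is associated with the target collision body.

```python
# Hypothetical mapping from each collider to its associated adjustment
# parameter; a real game would read these from the decal's configuration.
ADJUSTMENT_PARAMS = {"collider_core": 20, "collider_edge": 10}

def apply_decal_damage(life_value, target_collider_id):
    """Deduct the target adjustment parameter (the parameter associated with
    the collider containing the intersection point) from the life value."""
    target_param = ADJUSTMENT_PARAMS[target_collider_id]
    return life_value - target_param
```

For example, a character with a life value of 100 whose model intersects `collider_core` is left with a life value of 80.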

In order to enhance the diversity of game play, after the step of "deducting the target adjustment parameter from the life value to obtain an adjusted life value", the method may include:

determining whether the adjusted life value is lower than a preset life value;

and if not, adjusting the adjusted life value based on the target adjustment parameter.

Specifically, after the target adjustment parameter is deducted from the life value to obtain an adjusted life value, it can be determined whether the adjusted life value is lower than a preset life value. If so, the current life value corresponding to the virtual character is no longer adjusted; if not, the adjusted life value is adjusted based on the target adjustment parameter until the adjusted life value is lower than the preset life value. Through this embodiment, the game effect corresponding to the decal continuously acts on the virtual character in the game effect area, which can enhance the game experience of the user.
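The continuous effect described above can be sketched as a simple loop. The tick granularity and the concrete numbers are assumptions; a real game would typically apply one deduction per frame or per effect interval.

```python
def tick_decal_effect(life_value, target_param, preset_life_value):
    """Repeatedly deduct the target adjustment parameter while the adjusted
    life value is not lower than the preset life value; returns the life
    value after each tick, starting with the initial value."""
    history = [life_value]
    while life_value >= preset_life_value:
        life_value -= target_param
        history.append(life_value)
    return history
```

Starting from a life value of 100 with an adjustment parameter of 20 and a preset life value of 50, the effect applies three times and stops once the value drops to 40.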

In another specific embodiment, the attribute information of the virtual character includes a life value, and the virtual model corresponding to the virtual character is composed of a plurality of limb models. The step of "adjusting the attribute information of the virtual character based on the game effect corresponding to the game effect area" may include:

when collision between a virtual model corresponding to a virtual character and the collision body group is detected, acquiring an intersection point of the virtual model corresponding to the virtual character and a collision body in the collision body group;

determining a target limb model where the intersection point is located;

and adjusting the life value of the virtual character based on the life deduction value corresponding to the target limb model in the game effect of the target decal.
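A sketch of the limb-specific deduction follows. The limb names and deduction values are hypothetical; in practice they would come from the game effect configured for the target decal.

```python
# Hypothetical life deduction values per limb model in the target decal's
# game effect.
LIMB_DEDUCTION = {"head": 50, "torso": 30, "arm": 15, "leg": 15}

def apply_limb_damage(life_value, target_limb_model):
    """Adjust the life value based on the deduction value of the limb model
    where the intersection point is located."""
    return life_value - LIMB_DEDUCTION[target_limb_model]
```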

Optionally, in order to realize diversity of game play, the attribute information of the virtual character includes a motion state. The step of "adjusting the attribute information of the virtual character based on the game effect corresponding to the game effect area" may include:

when collision between a virtual model corresponding to a virtual character and the collision body group is detected, acquiring an intersection point of the virtual model corresponding to the virtual character and a collision body in the collision body group;

determining a target collision body where the intersection point is located, and acquiring a state indication parameter corresponding to the target collision body;

and adjusting the motion state of the virtual character based on the state indication parameter corresponding to the target collision body.
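For example, a state indication parameter might scale a character's movement speed, as a "slowing" decal would. The mapping below is purely illustrative and not part of the original method.

```python
# Hypothetical state indication parameters: each target collider maps to a
# movement-speed multiplier applied to the character's motion state.
STATE_PARAMS = {"collider_slow": 0.5, "collider_freeze": 0.0}

def adjust_motion_state(move_speed, target_collider_id):
    """Adjust the motion state (here, movement speed) based on the state
    indication parameter of the target collider."""
    return move_speed * STATE_PARAMS[target_collider_id]
```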

The embodiment of the present application provides an interaction detection method for a virtual model: when the target decal is rendered in an area to be decaled in a virtual scene, a plurality of collision bodies corresponding to the area to be decaled are generated, and if the virtual model corresponding to a virtual character collides with the collision bodies, it is determined that the virtual character is located in the game effect area corresponding to the target decal. In the embodiment of the present application, the plurality of collision bodies generated in the area to be decaled form a detection area that fits the target decal, so that a player can accurately apply damage or other skill effects to a virtual character located in the game effect area, improving the accuracy of detection.

Based on the above description, the interaction detection method of the virtual model of the present application will be further described below by way of example. Referring to fig. 7 and 8, an embodiment of the scenario is as follows:

(1) When a player logs in to a game through an electronic device (a computer), the player can control a master virtual character to enter a certain game match through the electronic device. The game match further includes a plurality of virtual characters besides the master virtual character, and the plurality of virtual characters are divided into different camps in the game, where one camp may be formed of four virtual characters. Three of the virtual characters are in the same camp as the master virtual character, and the other virtual characters are in camps hostile to the master virtual character.

(2) A throwing control may be displayed in the virtual scene, where the throwing control is used to trigger a virtual prop; here, the virtual prop may be a "burning bomb". When the player operates a mouse to click the throwing control, thereby generating a touch operation on the throwing control, the electronic device can generate a game effect triggering instruction based on the mouse click operation triggered by the user. In response to the game effect triggering instruction, the "burning bomb" is thrown to a target position in the virtual scene, and an area to be decaled is determined based on the target position and the decal size corresponding to the "burning bomb", so that the decal corresponding to the "burning bomb" is attached to the virtual model to be decaled corresponding to the area to be decaled; the decal corresponding to the "burning bomb" is thereby rendered at the target position in the virtual scene.

(3) The electronic device detects in real time whether a virtual character is located in the game effect area corresponding to the target decal. When it is detected that the virtual model corresponding to an enemy virtual character collides with a collision body, it is determined that the enemy virtual character is located in the game effect area corresponding to the target decal, and the intersection point of the virtual model corresponding to the enemy virtual character and the collision body is acquired. After the target collision body where the intersection point is located is determined, the target adjustment parameter corresponding to the target collision body is obtained; here, the target adjustment parameter is 20. The target adjustment parameter is deducted from the life value to obtain an adjusted life value, that is, the target adjustment parameter 20 is subtracted from the initial life value 100 of the enemy virtual character to obtain an adjusted life value of 80.

In order to better implement the interaction detection method of the virtual model provided in the embodiments of the present application, the embodiments of the present application further provide an interaction detection apparatus for a virtual model. The meanings of the terms are the same as those in the above interaction detection method of the virtual model, and details of the implementation may refer to the description in the method embodiments.

Referring to fig. 9, fig. 9 is a block diagram of an interaction detection apparatus for a virtual model according to an embodiment of the present disclosure, where the apparatus includes:

a first determining unit 201, configured to, in response to a game effect triggering instruction, determine an area to be decaled of a virtual model in a virtual scene according to decal information of a target decal corresponding to the game effect triggering instruction, where the area to be decaled is configured as an area on the virtual model where the target decal is generated;

a first generating unit 202, configured to generate, based on a virtual model to be decaled and the decal information, a plurality of collision points corresponding to the area to be decaled, and generate a collision volume group according to the plurality of collision points, where the collision volume group is composed of collision volumes generated from the plurality of collision points, and the virtual model to be decaled is the virtual model corresponding to the area to be decaled;

a second determining unit 203, configured to determine that the virtual character is located in the game effect area corresponding to the target decal when it is detected that there is a collision between the virtual character and the set of collision volumes.

In some embodiments, the apparatus further comprises:

a third determination unit for determining a current position of a decal anchor point of the target decal rendered in the area to be decal;

and further configured to determine a plurality of target endpoints based on the current position of the decal anchor point of the target decal, the decal size, and a specified height;

a second generating unit, configured to generate a plurality of target rays according to the decal height and the plurality of target end points;

and generating a collision volume group based on the plurality of target rays and the virtual model to be decaled.

In some embodiments, the apparatus further comprises:

and the third generation unit is used for generating, according to the current position of the decal anchor point of the target decal, the decal size, and the designated height, a projection plane parallel to the target decal along the positive direction of a first coordinate axis of a virtual scene coordinate system, wherein the first coordinate axis is a coordinate axis perpendicular to the target decal.

In some embodiments, the apparatus further comprises:

a fourth determination unit configured to determine a projection of the decal anchor of the target decal on the projection plane based on a current position of the decal anchor of the target decal;

a fourth generation unit configured to:

generating a reference point on the projection plane according to the position of the projection on the projection plane;

generating, by taking the reference point as a starting point, a plurality of first endpoints arranged at equal intervals based on a first preset spacing along the positive direction and the negative direction of a first coordinate axis of a projection plane coordinate system, wherein the first endpoints are all located in the projection plane;

and generating, by respectively taking the reference point and the first endpoints as starting points, a plurality of second endpoints arranged at equal intervals based on a second preset spacing along the positive direction and the negative direction of a second coordinate axis of the projection plane coordinate system, wherein the second endpoints are all located in the projection plane, and the first coordinate axis and the second coordinate axis are perpendicular to each other.
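The endpoint layout described by the two steps above can be sketched as follows. The endpoint counts passed in are assumptions; in the method they would follow from the decal size and the preset spacings.

```python
def generate_endpoints(reference, first_spacing, second_spacing,
                       n_first, n_second):
    """Build the grid of endpoints around the reference point.

    `reference` is the projection of the decal anchor on the projection
    plane, given in projection-plane coordinates (u along the first
    coordinate axis, v along the second). `n_first`/`n_second` are the
    assumed numbers of endpoints on each side of the starting point."""
    u0, v0 = reference
    # first endpoints: step along the +/- first coordinate axis
    first_points = [(u0 + i * first_spacing, v0)
                    for i in range(-n_first, n_first + 1) if i != 0]
    # second endpoints: from the reference point and every first endpoint,
    # step along the +/- second coordinate axis
    starts = [reference] + first_points
    second_points = [(u, v + j * second_spacing)
                     for (u, v) in starts
                     for j in range(-n_second, n_second + 1) if j != 0]
    return first_points, second_points
```

With one endpoint on each side in both directions, a reference point at the origin yields two first endpoints and six second endpoints, all lying in the projection plane.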

In some embodiments, the apparatus further comprises:

a fifth generating unit configured to:

based on the reference point, the plurality of first endpoints, and the plurality of second endpoints, emitting rays along the negative direction of the first coordinate axis of the virtual scene coordinate system to generate a reference point ray corresponding to the reference point, first rays corresponding to the first endpoints, and second rays corresponding to the second endpoints, wherein the lengths of the reference point ray, the first rays, and the second rays are all equal to the decal height;

and generating a plurality of colliders based on the reference point ray, the plurality of first rays, the plurality of second rays, and the virtual model to be decaled.

In some embodiments, the apparatus further comprises:

a fifth determining unit, configured to determine intersection points of the reference point ray, the first rays, and the second rays with the virtual model to be decaled, respectively, so as to obtain multiple collision points;

a sixth generating unit configured to generate a plurality of collision volumes based on the plurality of collision points and preset collision volume attribute information, and generate a collision volume group from the plurality of collision volumes.

In some embodiments, the apparatus further comprises:

and the first adjusting unit is used for adjusting the attribute information of the virtual character based on the game effect corresponding to the game effect area.

In some embodiments, the apparatus further comprises:

a first obtaining unit, configured to obtain an intersection point of a virtual model corresponding to a virtual character and a collider in the collider group when it is detected that the virtual model corresponding to the virtual character collides with the collider group;

a sixth determining unit, configured to determine a target collision volume where the intersection point is located, and obtain a target adjustment parameter corresponding to the target collision volume;

and the first processing unit is used for deducting the target adjustment parameter from the life value to obtain an adjusted life value.

In some embodiments, the apparatus further comprises:

a seventh determining unit for determining whether the adjusted life value is lower than a preset life value;

and if not, adjusting the adjusted life value based on the target adjustment parameter.

In some embodiments, the apparatus further comprises:

a second obtaining unit, configured to obtain an intersection point of a virtual model corresponding to a virtual character and a collider in the collider group when it is detected that the virtual model corresponding to the virtual character collides with the collider group;

a seventh determining unit, configured to determine a target limb model where the intersection point is located;

and the second adjusting unit is used for adjusting the life value of the virtual character based on the life deduction value corresponding to the target limb model in the game effect of the target decal.

In some embodiments, the apparatus further comprises:

a third obtaining unit, configured to obtain an intersection point of a virtual model corresponding to a virtual character and a collider in the collider group when it is detected that the virtual model corresponding to the virtual character collides with the collider group;

an eighth determining unit, configured to determine a target collision volume where the intersection point is located, and obtain a state indicating parameter corresponding to the target collision volume;

and the third adjusting unit is used for adjusting the motion state of the virtual character based on the state indication parameter corresponding to the target collision body.

The embodiment of the present application provides an interaction detection apparatus for a virtual model. In response to a game effect triggering instruction, the first determining unit 201 determines an area to be decaled of a virtual model in a virtual scene according to decal information of a target decal corresponding to the game effect triggering instruction, where the area to be decaled is configured as an area on the virtual model where the target decal is generated; the first generating unit 202 generates, based on a virtual model to be decaled and the decal information, a plurality of collision points corresponding to the area to be decaled, and generates a collision volume group according to the plurality of collision points, where the collision volume group is composed of collision volumes generated from the plurality of collision points, and the virtual model to be decaled is the virtual model corresponding to the area to be decaled; when it is detected that there is a collision between the virtual character and the collision volume group, the second determining unit 203 determines that the virtual character is located within the game effect area corresponding to the target decal. In the embodiment of the present application, the plurality of collision volumes generated in the area to be decaled form a detection area that fits the target decal, so that a player can accurately apply damage or other skill effects to a virtual character located in the game effect area, improving the accuracy of detection.

Correspondingly, the embodiment of the present application further provides an electronic device, where the electronic device may be a terminal or a server, and the terminal may be a terminal device such as a smart phone, a tablet Computer, a notebook Computer, a touch screen, a game machine, a Personal Computer (PC), a Personal Digital Assistant (PDA), and the like. As shown in fig. 10, fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 300 includes a processor 301 having one or more processing cores, a memory 302 having one or more computer-readable storage media, and a computer program stored on the memory 302 and executable on the processor. The processor 301 is electrically connected to the memory 302. Those skilled in the art will appreciate that the electronic device configurations shown in the figures do not constitute limitations of the electronic device, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.

The processor 301 is a control center of the electronic device 300, connects various parts of the whole electronic device 300 by using various interfaces and lines, performs various functions of the electronic device 300 and processes data by running or loading software programs and/or modules stored in the memory 302, and calling data stored in the memory 302, thereby monitoring the electronic device 300 as a whole.

In this embodiment of the application, the processor 301 in the electronic device 300 loads instructions corresponding to processes of one or more application programs into the memory 302, and the processor 301 executes the application programs stored in the memory 302 according to the following steps, so as to implement various functions:

responding to a game effect triggering instruction, and determining an area to be decaled of a virtual model in a virtual scene according to decal information of a target decal corresponding to the game effect triggering instruction, wherein the area to be decaled is configured as an area on the virtual model where the target decal is generated;

generating a plurality of collision points corresponding to the area to be decaled based on a virtual model to be decaled and the decal information, and generating a collision body group according to the plurality of collision points, wherein the collision body group is composed of collision bodies generated from the plurality of collision points, and the virtual model to be decaled is the virtual model corresponding to the area to be decaled;

and when it is detected that the virtual character collides with the collision body group, determining that the virtual character is located in the game effect area corresponding to the target decal.

The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.

Optionally, as shown in fig. 10, the electronic device 300 further includes: a touch display 303, a radio frequency circuit 304, an audio circuit 305, an input unit 306, and a power source 307. The processor 301 is electrically connected to the touch display 303, the radio frequency circuit 304, the audio circuit 305, the input unit 306, and the power source 307. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 10 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.

The touch display screen 303 may be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display screen 303 may include a display panel and a touch panel. The display panel may be used, among other things, to display information entered by or provided to a user and various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. Alternatively, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The touch panel may be used to collect touch operations of a user on or near the touch panel (for example, operations of the user on or near the touch panel using any suitable object or accessory such as a finger, a stylus pen, and the like), and generate corresponding operation instructions, and the operation instructions execute corresponding programs. Alternatively, the touch panel may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 301, and can receive and execute commands sent by the processor 301. The touch panel may overlay the display panel, and when the touch panel detects a touch operation thereon or nearby, the touch panel transmits the touch operation to the processor 301 to determine the type of the touch event, and then the processor 301 provides a corresponding visual output on the display panel according to the type of the touch event. 
In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 303 to realize the input and output functions. However, in some embodiments, the touch panel and the display panel can be implemented as two separate components to perform the input and output functions. That is, the touch display screen 303 may also be used as a part of the input unit 306 to implement an input function.

In the present embodiment, a graphical user interface is generated on the touch-sensitive display screen 303 by the processor 301 executing a game application. The touch display screen 303 is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface.

The rf circuit 304 may be used for transceiving rf signals to establish wireless communication with a network device or other electronic devices through wireless communication, and for transceiving signals with the network device or other electronic devices.

The audio circuit 305 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone. On one hand, the audio circuit 305 may transmit an electrical signal converted from received audio data to the speaker, which converts the electrical signal into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 305 and converted into audio data. The audio data is then processed by the processor 301 and transmitted to, for example, another electronic device via the radio frequency circuit 304, or output to the memory 302 for further processing. The audio circuit 305 may also include an earbud jack to provide communication between a peripheral headset and the electronic device.

The input unit 306 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.

The power supply 307 is used to power the various components of the electronic device 300. Optionally, the power supply 307 may be logically connected to the processor 301 through a power management system, so as to implement functions of managing charging, discharging, and power consumption management through the power management system. Power supply 307 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.

Although not shown in fig. 10, the electronic device 300 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.

In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

As can be seen from the above, the electronic device provided in this embodiment, in response to a game effect triggering instruction, determines an area to be decaled of a virtual model in a virtual scene according to decal information of a target decal corresponding to the game effect triggering instruction, where the area to be decaled is configured as an area on the virtual model where the target decal is generated; generates a plurality of collision points corresponding to the area to be decaled based on a virtual model to be decaled and the decal information, and generates a collision body group according to the plurality of collision points, where the collision body group is composed of collision bodies generated from the plurality of collision points, and the virtual model to be decaled is the virtual model corresponding to the area to be decaled; and when it is detected that the virtual character collides with the collision body group, determines that the virtual character is located in the game effect area corresponding to the target decal. In the embodiment of the present application, the plurality of collision bodies generated in the area to be decaled form a detection area that fits the target decal, so that a player can accurately apply damage or other skill effects to a virtual character located in the game effect area, improving the accuracy of detection.

It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.

To this end, the embodiments of the present application provide a computer-readable storage medium, in which a plurality of computer programs are stored, where the computer programs can be loaded by a processor to execute the steps in any interaction detection method of a virtual model provided by the embodiments of the present application. For example, the computer program may perform the following steps:

responding to a game effect triggering instruction, and determining an area to be decaled of a virtual model in a virtual scene according to decal information of a target decal corresponding to the game effect triggering instruction, wherein the area to be decaled is configured as an area on the virtual model where the target decal is generated;

generating a plurality of collision points corresponding to the area to be decaled based on a virtual model to be decaled and the decal information, and generating a collision body group according to the plurality of collision points, wherein the collision body group is composed of collision bodies generated from the plurality of collision points, and the virtual model to be decaled is the virtual model corresponding to the area to be decaled;

and when it is detected that the virtual character collides with the collision body group, determining that the virtual character is located in the game effect area corresponding to the target decal.

The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.

Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.

Since the computer program stored in the storage medium can execute the steps in any virtual model interaction detection method provided in the embodiments of the present application, beneficial effects that can be achieved by any virtual model interaction detection method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.

In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

The interaction detection method and apparatus of a virtual model, the electronic device, and the storage medium provided by the embodiments of the present application have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only used to help understand the technical solutions and core ideas of the present application. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.
