Method and device for displaying picture, storage medium and electronic device

Document No.: 1104658    Publication date: 2020-09-29

Reading note: this technique, "Method and device for displaying picture, storage medium and electronic device", was designed and created by Hu Zinan and Wang Chengfeng on 2020-04-30. Its main content is as follows. The application discloses a method and a device for displaying a picture, a storage medium, and an electronic device. The method includes: acquiring first configuration information of a target object, where the first configuration information matches a first motion state of the target object, and the target object is a movable object in a virtual scene; configuring physical attributes of a target model according to the first configuration information, where the target model is used to render the target object in the virtual scene; simulating a first motion posture of the target object in a physics engine by using the configured target model, where the first motion posture is the motion posture of the target object in the first motion state; and displaying a moving picture that matches the first motion posture of the target object. The method and the device solve the technical problem of low accuracy of game animation in the related art.

1. A method for displaying a screen, comprising:

acquiring first configuration information of a target object, wherein the first configuration information is matched with a first motion state of the target object, and the target object is a movable object in a virtual scene;

configuring physical attributes of a target model according to the first configuration information, wherein the target model is used for rendering the target object in the virtual scene;

simulating a first motion posture of the target object in a physical engine by using the configured target model, wherein the first motion posture is a motion posture of the target object in the first motion state;

and displaying the moving picture matched with the first moving posture of the target object.

2. The method of claim 1, wherein obtaining first configuration information for a target object comprises:

obtaining a plurality of controllers, wherein each controller in the plurality of controllers is obtained by encapsulating one configuration information in a plurality of configuration information, each configuration information in the plurality of configuration information is matched with one motion state, and the motion states matched with any two configuration information are different;

and finding a first controller from the plurality of controllers according to the first motion state, wherein the first controller is obtained by packaging the first configuration information.

3. The method of claim 2, wherein prior to obtaining a plurality of controllers, the method further comprises creating each controller of the plurality of controllers as follows:

acquiring an attribute value of a physical attribute representing a target motion state among a plurality of motion states, wherein the target motion state is matched with a target controller to be created among the plurality of controllers;

and encapsulating the acquired attribute values of the physical attributes as target configuration information into the target controller, wherein the plurality of configuration information comprise the target configuration information.

4. The method of claim 1, wherein after displaying the moving picture that matches the first motion pose of the target object, the method further comprises:

under the condition that the motion state of the target object is changed from the first motion state to a second motion state, acquiring second configuration information matched with the second motion state;

configuring the physical attribute of the target model according to the second configuration information;

simulating a second motion posture of the target object in the physical engine by using the configured target model, wherein the second motion posture is a motion posture of the target object in the second motion state;

and displaying the moving picture matched with the second moving posture of the target object.

5. The method of claim 1, wherein prior to obtaining the first configuration information of the target object, the method further comprises:

creating a plurality of animation sets, wherein each animation set in the plurality of animation sets corresponds to one of a plurality of motion states, the motion states corresponding to any two animation sets in the plurality of animation sets are different, and one animation in each animation set corresponds to one gesture in the corresponding motion state.

6. The method according to any one of claims 1 to 5, wherein before obtaining the first configuration information of the target object, the method further comprises:

creating the target model, wherein the target model comprises a plurality of joints, each joint corresponding to a first part and a second part, the first part being a part that can appear in the moving picture and is affected by a third part, the second part being a part that cannot appear in the moving picture and is used to affect a fourth part, the third part being different from the first part, the second part, and the fourth part, the fourth part being a non-rigid body;

configuring a plurality of controllers for the first component, wherein each controller in the plurality of controllers corresponds to one of a plurality of motion states, and the motion states corresponding to any two controllers in the plurality of controllers are different.

7. The method of claim 6, wherein after configuring the physical properties of the target model according to the first configuration information, the method further comprises:

generating an animation indicating that the first component undergoes a posture change in a case where the rigid body attribute of the first component changes from a rigid body to the non-rigid body configured by the first configuration information,

wherein the position during the posture change is

pos = (1 − α) · pos_last + α · pos_anim

and the rotation during the posture change is

rot = (sin((1 − α)·θ) / sin θ) · rot_last + (sin(α·θ) / sin θ) · rot_anim.

8. A screen display device, comprising:

a first acquisition unit, configured to acquire first configuration information of a target object, wherein the first configuration information is matched with a first motion state of the target object, and the target object is a movable object in a virtual scene;

a first configuration unit, configured to configure physical properties of a target model according to the first configuration information, where the target model is used to render the target object in the virtual scene;

the simulation unit is used for simulating a first motion posture of the target object in a physical engine by using the configured target model, wherein the first motion posture is a motion posture of the target object in the first motion state;

and the display unit is used for displaying the moving picture matched with the first moving posture of the target object.

9. A storage medium, characterized in that the storage medium comprises a stored program, wherein the program when executed performs the method of any of the preceding claims 1 to 7.

10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the method of any of the preceding claims 1 to 7 by means of the computer program.

Technical Field

The present application relates to the field of internet, and in particular, to a method and an apparatus for displaying a screen, a storage medium, and an electronic apparatus.

Background

With the development of computer hardware, graphics technology, and related fields, vivid virtual worlds are gradually entering people's lives. The fantastic scenes they present attract wide attention, and building and advancing scene-and-view systems has become a popular area of current technology, of which games are a typical representative.

As the times have progressed, games have evolved from pictures assembled out of simple color blocks to finely detailed characters built from millions of polygons, presenting users with an increasingly realistic and expansive world; with the popularization of smartphones, mobile games have become a new industry. Mobile application developers have sprung up in large numbers, and mobile games have emerged one after another. A mobile game application is generally small and its logic simple, and actions are generally presented according to fixed templates (for example, the posture pictures for walking, running, and so on are the same under all conditions), so the motion postures of character objects in game animation deviate considerably from postures in the real world.

In view of the above problems, no effective solution has been proposed.

Disclosure of Invention

The embodiment of the application provides a picture display method and device, a storage medium and an electronic device, and aims to at least solve the technical problem that game animation in the related art is low in accuracy.

According to an aspect of an embodiment of the present application, there is provided a method for displaying a screen, including: acquiring first configuration information of a target object, wherein the first configuration information is matched with a first motion state of the target object, and the target object is a movable object in a virtual scene; configuring physical attributes of a target model according to first configuration information, wherein the target model is used for rendering a target object in a virtual scene; simulating a first motion posture of the target object in a physical engine by using the configured target model, wherein the first motion posture is a motion posture of the target object in a first motion state; and displaying the moving picture matched with the first moving posture of the target object.

According to another aspect of the embodiments of the present application, there is also provided a display device of a screen, including: a first acquisition unit, configured to acquire first configuration information of a target object, where the first configuration information is matched with a first motion state of the target object, and the target object is a movable object in a virtual scene; a first configuration unit, configured to configure physical attributes of a target model according to the first configuration information, where the target model is used to render the target object in the virtual scene; a simulation unit, configured to simulate a first motion posture of the target object in a physics engine by using the configured target model, where the first motion posture is the motion posture of the target object in the first motion state; and a display unit, configured to display a moving picture matched with the first motion posture of the target object.

According to another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program which, when executed, performs the above-described method.

According to another aspect of the embodiments of the present application, there is also provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the above method through the computer program.

In the embodiment of the application, different motion states of the target object in the virtual scene may have different physical states, and the physical states are controlled by the physical attributes, so that the physical states of the target object in the different motion states can be adjusted by the physical attributes, and then an animation similar to the real world is presented in a rendered picture, so that the technical problem of low accuracy of game animation in the related art can be solved, and the technical effect of improving the accuracy of the game animation can be achieved.

Drawings

The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:

fig. 1 is a schematic diagram of a hardware environment of a display method of a screen according to an embodiment of the present application;

FIG. 2 is a flow chart of an alternative method for displaying a frame according to an embodiment of the present application;

FIG. 3 is a flow chart of an alternative method for displaying a frame according to an embodiment of the present application;

FIG. 4 is a schematic diagram of an alternative game model according to an embodiment of the present application;

FIG. 5 is a schematic view of an alternative game character according to an embodiment of the present application;

FIG. 6 is a schematic diagram of an alternative rigid body property according to an embodiment of the present application;

FIG. 7 is a flow diagram of an alternative screen rendering according to an embodiment of the present application;

FIG. 8 is a schematic illustration of an alternative game data flow according to an embodiment of the present application;

FIG. 9 is a schematic diagram of a display device for displaying an alternative screen according to an embodiment of the present application;

and

fig. 10 is a block diagram of a terminal according to an embodiment of the present application.

Detailed Description

In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.

First, partial nouns or terms appearing in the description of the embodiments of the present application are applicable to the following explanations:

physx: the pronunciation is the same as Physics, is a set of open source physical operation engine for simulating physical effect, and is also one of three physical operation engines in the world, and the other two are Havok and Bullet.

Skeletal animation: a skeleton is composed of a series of joints with a hierarchical relationship, forming a tree structure; skeletal animation animates a model by changing the orientation and position of the bones.

Compared with postures in the real world, the motion postures of character objects in game animation show considerable error. This is mainly because such animation can only present effects already authored by artists: it cannot show interaction with objects in the scene, nor effects that a character should exhibit as a result of logical computation. For example, in the transition from a running animation to an idle animation, simply switching between the artist-made running and idle animations cannot show the transitional physical effect of hair and a skirt hem swinging forward as the character stops.

In order to overcome the above problems, according to an aspect of embodiments of the present application, a method embodiment of a display method of a screen is provided.

Alternatively, in the present embodiment, the above screen display method may be applied to a hardware environment composed of the game engine 101 and the physics engine 103 shown in fig. 1. As shown in fig. 1, the physics engine 103 is connected to the game engine 101 through a bus and may be used to provide the game engine 101 with mechanical-simulation services for collisions between objects and between objects and scenes. A database 105 (which may be on-chip storage, registers, memory, etc.) may be provided on the physics engine 103, or independently of it, to provide data storage services for the physics engine 103. The bus includes, but is not limited to, an on-chip bus or an inter-chip bus. The physics engine 103 may be a separate physics processing unit (PPU), the game engine 101 may run on a graphics processing unit (GPU), the two may be integrated into the same processor (for example, both integrated into the GPU), and the game engine 101 and the physics engine 103 may also be modules within a central processing unit (CPU).

The method for displaying the picture in the embodiment of the application can be executed by a CPU (central processing unit) and a GPU (graphics processing unit), and can also be executed by the CPU, the GPU, a game engine and a physical engine together. Fig. 2 is a flowchart of an alternative screen display method according to an embodiment of the present application, and as shown in fig. 2, the method may include the following steps:

Step S202, first configuration information of a target object is obtained, where the first configuration information matches a first motion state of the target object, and the target object is a movable object in a virtual scene, such as a character controlled by a player in a game or a non-player character (e.g., the player's pet, a monster, etc.).

The motion state is a state of the target object in a virtual scene of the game, such as running, walking, still, and the like.

Step S204, configuring physical attributes of a target model according to the first configuration information, wherein the target model is used for rendering a target object in a virtual scene.

The target model is a model of a game character (i.e., the target object) created at a certain scale, and is the data submitted to the game engine for rendering.

To render the game picture realistically in a given state, attribute values of the corresponding physical attributes may be configured in advance for the different types of motion states, so that in a given state the attribute values matched with that motion state are used. For example, by testing each attribute value of a physical attribute, the most suitable value in each motion state (that is, the value that realistically reproduces the same posture as the real world) is determined, yielding the configuration information for that motion state.
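As an illustrative sketch of such per-state configuration: the attribute names (density, dynamic/static friction, restitution) follow the attributes named elsewhere in this document, while the concrete values, dictionary layout, and function name are invented for illustration.

```python
# Hypothetical per-motion-state presets of physical attribute values.
# Values here are placeholders, not tested "most suitable" values.
MOTION_STATE_CONFIG = {
    "running": {"density": 1.0, "dynamic_friction": 0.4,
                "static_friction": 0.6, "restitution": 0.1},
    "walking": {"density": 1.0, "dynamic_friction": 0.5,
                "static_friction": 0.7, "restitution": 0.05},
    "idle":    {"density": 1.0, "dynamic_friction": 0.6,
                "static_friction": 0.8, "restitution": 0.0},
}

def configure_model(model: dict, state: str) -> dict:
    """Copy the preset matched with `state` onto the model's physical attributes."""
    model["physical_attributes"] = dict(MOTION_STATE_CONFIG[state])
    return model
```

Switching motion state then reduces to re-running `configure_model` with the new state's key.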

Step S206, a first motion posture of the target object is simulated in the physical engine by using the configured target model, wherein the first motion posture is the motion posture of the target object in the first motion state.

The physics engine is mainly responsible for the mechanical simulation of collisions between objects, and between objects and scenes, in the game world, as well as the mechanical simulation of an object's skeletal motion after a collision.

A game engine's work is generally divided into two parts: one updates game data, and the other renders the game. During simulation, the physics engine uses the data updated by the game engine, and the game engine in turn renders using the simulation data produced by the physics engine. The first motion posture is such simulation data; after it is fed into the game engine, the game engine can render the corresponding game picture from it.
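The update/simulate/render cycle described here can be sketched as follows; the engine objects and their method names are hypothetical stand-ins, not any real engine's API.

```python
# Minimal sketch of the loop: the physics engine consumes the game engine's
# updated data (plus the previous posture), and the renderer consumes the
# physics engine's simulated posture.
def game_loop(game_engine, physics_engine, renderer, frames):
    pose = None
    for _ in range(frames):
        game_data = game_engine.update()                 # 1. update game data
        pose = physics_engine.simulate(game_data, pose)  # 2. simulate the motion posture
        renderer.draw(pose)                              # 3. render the matching picture
    return pose
```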

In step S208, a moving picture matched with the first motion posture of the target object is displayed.

For example, a skeleton structure of the target object may be generated, bound to the corresponding skin, and made to affect all of the skin's vertices, with the skin corresponding to and associated with the skeleton structure; the motion of the target object in the game application can then be controlled by skeletal animation. In skeletal animation, the target object has a skeleton composed of interconnected "bones", and its motion can be generated by changing the orientation and position of the bones (i.e., the motion posture, such as the first motion posture described above). After skinning, every bone in the skeleton correspondingly affects the vertices of the skin, so that the skeletal animation exhibits motion with a more realistic physical effect under a given external appearance.

Through the steps, different physical states of the target object in different motion states in the virtual scene can be controlled through the physical attributes, so that the physical states of the target object in different motion states can be adjusted through the physical attributes, an animation similar to a real world is presented in a rendered picture, the technical problem of low accuracy of game animation in the related technology can be solved, and the technical effect of improving the accuracy of the game animation is achieved. The technical solution of the present application is further detailed below with reference to the steps shown in fig. 2.

In the technical solution provided in step S202, first configuration information of a target object is obtained, where the first configuration information is matched with a first motion state of the target object, and the target object is a movable object in a virtual scene.

Optionally, before the first configuration information of the target object is obtained, a target model may be created in advance. The target model includes a plurality of joints, each joint corresponding to a first component and a second component (a component may be understood as a constituent part of the model, such as a rigid body). The first component is a component that can appear in the moving picture and is affected by a third component, that is, a component that requires physical simulation; the second component is a component that cannot appear in the moving picture and is used to affect a fourth component (the fourth component being a non-rigid body), that is, a component that can affect non-rigid bodies; and the third component is a component other than the first, second, and fourth components.

After the target model is created, a plurality of controllers are configured for the first component, each controller in the plurality of controllers corresponds to one of a plurality of motion states, the motion states corresponding to any two controllers in the plurality of controllers are different, and each controller is further configured with respect to an association relationship between the second component and the first component, such as a spring damping coefficient, for achieving an effect of pulling the first component toward the second component.

In the above scheme, each of the plurality of controllers may be created as follows: acquiring attribute values of physical attributes, such as density, dynamic friction coefficient, static friction coefficient, reduction coefficient and the like, for representing a target motion state in a plurality of motion states, wherein the target motion state is matched with a target controller to be created in a plurality of controllers; and packaging the acquired attribute value of the physical attribute as target configuration information into a target controller, wherein the plurality of configuration information comprise the target configuration information.

Optionally, the effect of the second component on non-rigid bodies (e.g., ankle on leg, head on hair, various joints of the body on clothing) can be simulated in the physics engine, and the simulation results can be made into animations for the various motion states: a plurality of animation sets are created, where each animation set corresponds to one of a plurality of motion states, the motion states corresponding to any two animation sets are different, and each animation in an animation set corresponds to one posture in the corresponding motion state.

By adopting this scheme, during rendering, the corresponding animation set can be matched for a non-rigid body in any motion state, and a posture is then selected from that animation set; optionally, during rendering, the second component may instead be passed into the physics engine, which simulates the corresponding non-rigid-body posture in real time.

In the above embodiment, obtaining the first configuration information of the target object may be implemented as follows: acquiring a plurality of controllers, wherein each controller in the plurality of controllers is obtained by packaging one configuration information in a plurality of configuration information, each configuration information in the plurality of configuration information is matched with one motion state, and the motion states matched with any two configuration information are different; and searching a first controller from the plurality of controllers according to the first motion state, and further obtaining first configuration information, wherein the first controller is obtained by packaging the first configuration information.
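A minimal sketch of the controller mechanism from the passage above (and claims 2-3): each controller encapsulates one set of configuration information, any two controllers match different motion states, and the first controller is found from the first motion state. Class, function, and key names are illustrative.

```python
# Each controller encapsulates exactly one configuration; a dict keyed by
# motion state gives the "find a first controller" lookup directly.
class Controller:
    def __init__(self, motion_state, config):
        self.motion_state = motion_state
        self.config = config  # the encapsulated configuration information

def build_controllers(configs_by_state):
    return {state: Controller(state, cfg)
            for state, cfg in configs_by_state.items()}

controllers = build_controllers({"running": {"density": 1.0},
                                 "walking": {"density": 1.0}})
first_controller = controllers["running"]  # found from the first motion state
first_config = first_controller.config
```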

In the technical solution provided in step S204, the physical attributes of a target model are configured according to the first configuration information, and the target model is used for rendering a target object in a virtual scene.

For a target object, the values of the same attribute may differ across motion states; for example, the collision-related parameters used when colliding with another object in the scene may differ between walking and running. Therefore, to simulate real-world situations more realistically, configuration may be performed according to configuration information acquired in advance for the different motion states.

Optionally, after the physical attributes of the target model are configured according to the first configuration information, when the rigid body attribute of the first component changes from a rigid body to the non-rigid body configured by the first configuration information, an animation indicating that the first component undergoes a posture change is generated. The position during the posture change is pos = (1 − α) · pos_last + α · pos_anim, and the rotation during the posture change is rot = (sin((1 − α)·θ) / sin θ) · rot_last + (sin(α·θ) / sin θ) · rot_anim, where pos_last is the position of the first component before the controller is switched, pos_anim is the position of the first component calculated from the second component, α is a coefficient whose value lies between 0 and 1, rot_last is the rotation of the first component before the controller is switched, rot_anim is the rotation of the first component calculated from the second component, and θ is the angle between rot_last and rot_anim.

In the above scheme, to avoid an abrupt change in the picture when the rigid body attribute is switched, linear interpolation is used, implemented by gradually increasing the value of α (for example, 0 at the first frame, 0.1 at the second frame, and so on until it reaches 1).
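A minimal sketch of this blend, assuming the position is linearly interpolated between pos_last and pos_anim, and the rotation is blended with sine weights over the angle θ between the two rotations (a standard slerp form); α ramps from 0 to 1 over successive frames. Function names and the vector representation are illustrative.

```python
import math

def blend_position(pos_last, pos_anim, alpha):
    # Linear interpolation of position components.
    return tuple((1 - alpha) * a + alpha * b for a, b in zip(pos_last, pos_anim))

def blend_rotation(rot_last, rot_anim, theta, alpha):
    # Slerp-style weights; fall back to linear blending when theta ~ 0
    # to avoid dividing by sin(theta) = 0.
    if abs(math.sin(theta)) < 1e-6:
        w_last, w_anim = 1 - alpha, alpha
    else:
        w_last = math.sin((1 - alpha) * theta) / math.sin(theta)
        w_anim = math.sin(alpha * theta) / math.sin(theta)
    return tuple(w_last * a + w_anim * b for a, b in zip(rot_last, rot_anim))
```

Ramping α per frame (0, 0.1, ..., 1) and re-evaluating these blends each frame produces the smooth transition described above.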

In the technical solution provided in step S206, a first motion posture of the target object is simulated in the physics engine by using the configured target model, where the first motion posture is the motion posture of the target object in the first motion state. The physics engine acquires the game update data from the game engine and, in combination with the previous posture, can simulate the first motion posture of the target object.

In the technical solution provided in step S208, a moving picture matching the first moving posture of the target object is displayed.

In an optional embodiment, the displayed frame may be pre-made, and when the moving frame matching with the first motion posture of the target object is displayed, the first animation set matching with the first motion state may be searched from the plurality of animation sets; and searching for the animation matched with the first motion gesture in the first animation set, and displaying the motion picture generated based on the animation matched with the first motion gesture.

In another alternative embodiment, the displayed picture may also be rendered in real time: after the motion posture of the target object is determined by physics-engine simulation, that posture may be rendered on the GPU to obtain the moving picture. In this scheme, instead of finding the best-matching animation from multiple animation sets, the result computed by the physics engine is used directly. The animation to be played can still be determined, but the playback effect is better, because the animation serves as input data to the physics engine, which can compute the posture of each bone in combination with the character's other physical states; that posture conforms to physical effects better than the animation alone.

For example, suppose a character wears a long skirt and stops suddenly while running. With animation alone, one can only switch between the running and standing actions; with physical animation, however, the behavior of the skirt while stopping better matches real physical effects, because the effect depends on the character's movement speed. The physics engine can compute this effect without querying a set of pre-made animations, and the computed result is then used for rendering.

Alternatively, after displaying the moving picture matching the first moving posture of the target object, if the moving state of the target object changes, the following processing may be performed: under the condition that the motion state of the target object is changed from the first motion state to the second motion state, acquiring second configuration information matched with the second motion state; configuring the physical attribute of the target model according to the second configuration information; simulating a second motion posture of the target object in the physical engine by using the target model, wherein the second motion posture is the motion posture of the target object in a second motion state; and displaying the moving picture matched with the second moving posture of the target object. The specific implementation manner of this embodiment is similar to the steps shown in fig. 2, and is not described again.

In the technical solution above, groups of physical state parameters (such as those for walking, sitting in a vehicle, and climbing) are packaged into controllers, each controller being an independent file. A user configures multiple controllers in advance, and switching controllers at runtime changes the physical state parameters of all of a character's rigid bodies as a whole, adapting them to different types of animation. The physical data and the animation data are then fused, which achieves the effect the artists intend (i.e., details matching the real world) while still exhibiting interaction with the physical world of the scene. As an alternative example, the technical solution of the present application is described in further detail below with reference to fig. 3.

Step S302, configure a set of rigid body data corresponding to the skeleton hierarchy of the skeletal animation. One skeleton consists of N joints (N being a natural number greater than 1), and each joint corresponds to 1 rigid body. Each rigid body comprises its physical attributes (such as density, dynamic friction coefficient, static friction coefficient, and restitution coefficient, as provided by a physics engine such as PhysX) and M rigid body shapes forming the joint (M being a natural number greater than or equal to 0), where the shapes are composed of the boxes, capsules, spheres, and so on provided by the physics engine.
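As a minimal sketch of the data layout described in step S302 (the class and field names here are illustrative, not an actual physics engine API; the attribute names mirror what an engine such as PhysX exposes):

```python
from dataclasses import dataclass, field

@dataclass
class RigidBodyShape:
    kind: str          # "box", "capsule", or "sphere", as provided by the engine
    dimensions: tuple  # half-extents, or (radius, half-height), etc.

@dataclass
class JointRigidBody:
    density: float
    static_friction: float
    dynamic_friction: float
    restitution: float                          # the restitution coefficient
    shapes: list = field(default_factory=list)  # M >= 0 shapes per joint

# One skeleton = N joints; each joint corresponds to one rigid body.
skeleton = {
    "spine": JointRigidBody(1.0, 0.5, 0.4, 0.1,
                            [RigidBodyShape("capsule", (0.1, 0.3))]),
    "head":  JointRigidBody(1.0, 0.5, 0.4, 0.1,
                            [RigidBodyShape("sphere", (0.12,))]),
}
print(len(skeleton), skeleton["head"].shapes[0].kind)
```

In an actual engine the shapes would additionally carry local poses relative to the joint so that they fit the rendered model as closely as possible.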

While configuring the skeleton, the shapes are fitted to the model as closely as possible so as to simulate a more realistic physical effect. These rigid bodies are affected by the physical world of the scene; the subsequent steps describe how.

Step S304, add 1 invisible rigid body to each joint (the points shown in fig. 4 represent joints; the frames on the character's body in fig. 5 are such bodies). Each of these rigid bodies has 0 rigid body shapes and has the Kinematic attribute (meaning the rigid body exhibits no physical effects itself but can affect other, non-Kinematic rigid bodies). The posture (position and rotation) of each joint is calculated by the skeletal animation module and then passed into the physics system. The result calculated by the skeletal animation module is the result the artists intend to show, but the final result combines the artists' intent with real feedback from interaction with the scene; this step passes the artists' data into the physics system.
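The data flow of step S304 can be sketched as follows (in PhysX this corresponds to setting the kinematic flag on an actor and writing its target pose each frame; the class below is a stand-in, not the real API):

```python
# Each joint gets a second, invisible Kinematic body whose pose is
# written every frame from the skeletal-animation result. The physics
# system never simulates this body; it only reads its pose.

class KinematicTarget:
    def __init__(self, joint_name):
        self.joint_name = joint_name
        self.position = (0.0, 0.0, 0.0)
        self.rotation = (1.0, 0.0, 0.0, 0.0)  # quaternion (w, x, y, z)

    def set_from_animation(self, pose):
        # Pose computed by the skeletal animation module for this frame.
        self.position = pose["position"]
        self.rotation = pose["rotation"]

anim_pose = {"position": (0.0, 1.6, 0.0), "rotation": (1.0, 0.0, 0.0, 0.0)}
head_target = KinematicTarget("head")
head_target.set_from_animation(anim_pose)
print(head_target.position)
```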

In step S306, after steps S302 and S304, each joint has two corresponding rigid bodies. As shown in fig. 6, constraints (e.g., D6Joint in the physics engine PhysX) can be added between the two rigid bodies, configured with stiffness and damping; linear and angular denote the stiffness and damping for displacement and rotation, respectively. Based on a spring-damper model, PhysX can pull rigid body A toward rigid body B; the configured stiffness and damping are the parameters to be passed in, and different values produce different effects. In the present application, this constraint is what pulls the rigid body of step S302 toward the rigid body of step S304.
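The spring-damper drive behind this constraint can be sketched in one dimension: the force pulling the simulated body A toward the kinematic target B is stiffness · (x_B − x_A) − damping · v_A. The stiffness and damping values below are illustrative, not taken from any particular configuration:

```python
# 1-D sketch of the spring-damper drive a D6-style joint uses to pull
# the simulated rigid body (A, step S302) toward the kinematic animation
# body (B, step S304). Unit mass, semi-implicit Euler integration.

def pull_toward(x_a, v_a, x_b, stiffness, damping, dt, steps):
    for _ in range(steps):
        force = stiffness * (x_b - x_a) - damping * v_a
        v_a += force * dt
        x_a += v_a * dt
    return x_a, v_a

# A starts at 0 while the animation target B sits at 1; A converges to B.
x, v = pull_toward(0.0, 0.0, 1.0, stiffness=200.0, damping=25.0,
                   dt=1.0 / 60.0, steps=600)
print(round(x, 3))
```

Higher stiffness snaps A to the animation more tightly; lower stiffness lets scene physics dominate, which is exactly the trade-off the per-state controller values tune.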

Step S308, when the rigid body of step S302 is influenced by other rigid bodies in the scene, the corresponding physical effect is shown, for example being hit by a boss; it is also influenced by the rigid body of step S304, which pulls it in the direction the animation intends, thereby expressing the animation. In the present application, the parameters of steps S302-S306 are passed into PhysX, which calculates the posture (position and orientation) of the rigid body of step S302; that posture, rather than the calculation result of the skeletal animation, is used to render the model. This amounts to replacing the animation data with the result of the physics calculation, while the animation data is still used within the physics calculation via the mechanisms of steps S304 and S306, thereby fusing animation and physical effects.

In step S310, different actions of the character correspond to different physical states in the game, for example walking, sitting, and climbing; under these states, some of the rigid bodies configured in step S302 and the constraints configured in step S306 need to be modified. For the convenience of game developers, the present application proposes the physical attributes shown in the lower box of fig. 6, where the first 2 attributes are used in step S302 and the last 4 in step S306. Each physical state is configured with a separate controller file, and switching controllers in the game switches the rigid body attributes of step S302 and the constraint attributes of step S306 as a whole, reducing development difficulty.
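The bulk switch described in step S310 can be sketched as follows; the attribute names and values here are illustrative placeholders, one dictionary standing in for one controller file:

```python
# Each motion state gets one controller bundling the rigid-body
# attributes (step S302) and the constraint attributes (step S306);
# switching states applies the whole bundle at once instead of editing
# every rigid body individually.

controllers = {
    "walking":  {"kinematic": False, "density": 1.0,
                 "linear_stiffness": 500.0, "linear_damping": 50.0,
                 "angular_stiffness": 300.0, "angular_damping": 30.0},
    "climbing": {"kinematic": True, "density": 1.0,
                 "linear_stiffness": 0.0, "linear_damping": 0.0,
                 "angular_stiffness": 0.0, "angular_damping": 0.0},
}

def switch_controller(rigid_bodies, state):
    cfg = controllers[state]
    for body in rigid_bodies:
        body.update(cfg)   # one bulk update per rigid body
    return cfg["kinematic"]

bodies = [{"name": "spine"}, {"name": "head"}]
now_kinematic = switch_controller(bodies, "climbing")
print(now_kinematic, bodies[0]["linear_stiffness"])
```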

Step S312, when the rigid body attributes are switched in step S310, if a rigid body is switched from non-Kinematic to Kinematic, its state changes in the next frame from physically simulated to fully animated, and such a change can cause an abrupt jump in the displayed result. Therefore, when the controller is switched in step S310, the present application smooths the postures of the joints corresponding to the rigid bodies switching from non-Kinematic to Kinematic, using linear interpolation:

pos = (1-α)*pos_last + α*pos_anim

rot = (sin((1-α)θ)/sin θ)*rot_last + (sin(αθ)/sin θ)*rot_anim

where pos is the position of the joint corresponding to the rigid body after smoothing, pos_last is the position of the joint before switching the controller, and pos_anim is the position of the joint calculated by the animation. α is a linear interpolation coefficient that changes from 0 to 1 at a constant rate within 0.2 seconds (the value can be modified according to the desired effect); the transition ends when α reaches 1, after which the joint is driven entirely by the animation.

rot is the rotation of the joint corresponding to the rigid body after smoothing, rot_last is the rotation of the joint before switching the controller, rot_anim is the rotation of the joint calculated by the animation, and θ is the angle between rot_last and rot_anim. α is a spherical linear interpolation coefficient that changes from 0 to 1 at a constant rate within 0.2 seconds; the transition ends when α reaches 1, after which the joint is driven entirely by the animation.
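The two smoothing formulas above can be sketched directly. For brevity the spherical interpolation below operates on unit vectors; the same formula applies unchanged to unit quaternions:

```python
import math

# Linear interpolation for position and spherical linear interpolation
# for rotation, used when a rigid body switches from non-Kinematic to
# Kinematic. alpha ramps from 0 to 1 at a constant rate over the
# transition window (0.2 s in the text).

def lerp(pos_last, pos_anim, alpha):
    return tuple((1.0 - alpha) * a + alpha * b
                 for a, b in zip(pos_last, pos_anim))

def slerp(rot_last, rot_anim, alpha):
    # theta is the angle between the two unit rotations.
    dot = sum(a * b for a, b in zip(rot_last, rot_anim))
    theta = math.acos(max(-1.0, min(1.0, dot)))
    if theta < 1e-6:          # nearly identical rotations: nothing to blend
        return rot_anim
    s = math.sin(theta)
    w_last = math.sin((1.0 - alpha) * theta) / s
    w_anim = math.sin(alpha * theta) / s
    return tuple(w_last * a + w_anim * b
                 for a, b in zip(rot_last, rot_anim))

print(lerp((0.0, 0.0, 0.0), (2.0, 4.0, 0.0), 0.5))   # midpoint of the segment
print(slerp((1.0, 0.0), (0.0, 1.0), 0.5))            # halfway along the arc
```

At α = 1 both functions return exactly the animation pose, matching the end-of-transition behavior described above.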

The per-frame update flow of the system is then run. The specific process is shown in fig. 7:

step S702, updating the animation data.

Step S704 sets the rigid body data required in step S304.

In step S706, PhysX calculates the rigid body posture in step S302.

Step S708, the rigid body posture of step S302 is obtained and used for rendering in place of the original animation data.
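Steps S702-S708 can be sketched as a single update function; the physics step is stubbed here as a simple pull toward the animation target, and all component functions are stand-ins rather than engine APIs:

```python
# Per-frame flow of fig. 7: the animation result feeds the kinematic
# bodies, the physics engine resolves the simulated bodies, and the
# simulated pose (not the raw animation pose) is what gets rendered.

def update_frame(animation, physics, renderer):
    anim_pose = animation()            # S702: update animation data
    physics["targets"] = anim_pose     # S704: set kinematic rigid bodies
    # S706: physics resolves the simulated pose (stubbed as a pull
    # halfway toward the animation target each frame).
    physics["pose"] = {j: 0.5 * (physics["pose"].get(j, 0.0) + p)
                       for j, p in anim_pose.items()}
    return renderer(physics["pose"])   # S708: render with the physics pose

frames = []
physics_state = {"pose": {}, "targets": {}}
for _ in range(3):
    frames.append(update_frame(lambda: {"head": 1.0},
                               physics_state,
                               lambda pose: round(pose["head"], 3)))
print(frames)
```

Note how the rendered pose approaches the animation target over several frames instead of jumping, which is the fusion behavior the flow is designed to produce.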

Referring to fig. 8, the relationships among the animation data, the character's rigid bodies, and the other rigid bodies in the scene are packaged as a controller; the PhysX physics engine fuses them to obtain the rigid body posture, which is then rendered in combination with the rendering data.

In the technical solution of the present application, the controller packages the physical state, and a transition between non-Kinematic and Kinematic is performed when controllers are switched, enabling the fusion of skeletal animation data and physical data. The scheme reduces the artists' workload: where one animation would otherwise need to be made into many (multiple animation sets, with interaction with the scene's physical world controlled by code logic, which can approximate interaction with other objects in the scene but makes the character logic overly complex and development costly), a single animation combined with the real-time calculation of the physics engine achieves the corresponding effect, with a better display result. A game scene and the player interact freely and much is unpredictable; enumerating physical interaction cases in advance is unrealistic and forces simplification, yielding an imperfect result. The workload of game developers is also reduced: multiple controllers are provided for switching physical states, the character's actions are divided into several physical states, and one controller file is configured for each. When the character's physical state changes, only the controller file needs to be switched; there is no need to modify the physical attributes of each rigid body individually, nor to handle the transition between two physical states separately.

It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.

Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.

Compared with postures in the real world, the motion postures of character objects in game animation have larger errors. This is mainly because an animation can only show effects the artists have already made: it cannot show interaction with objects in the scene, nor effects derived for the character through logical computation. For example, in the transition from a running animation to an idle animation, if the artist-made running animation is simply switched to the idle animation, the transitional physical effect of the hair and skirt hem floating forward as the character stops cannot be shown.

In order to overcome the above problem, according to another aspect of the embodiments of the present application, there is also provided a display apparatus for a screen for implementing the display method for a screen described above. Fig. 9 is a schematic diagram of a display device of an alternative screen according to an embodiment of the present application, and as shown in fig. 9, the device may include:

a first obtaining unit 901, configured to obtain first configuration information of a target object, where the first configuration information matches a first motion state of the target object, and the target object is a movable object in a virtual scene.

The motion state is a state of the target object in a virtual scene of the game, such as running, walking, still, and the like.

A first configuration unit 903, configured to configure physical properties of a target model according to the first configuration information, where the target model is used to render a target object in a virtual scene.

The target model is a model created for a game character (i.e., the target object) at a certain scale, and is the data applied to the game engine for rendering.

To realistically represent the game screen in each state, attribute values of the corresponding physical attributes may be configured in advance for the different types of motion states, so that in a given state the attribute values matching that motion state are used. For example, by testing the attribute values of the physical attributes, the most suitable value in each motion state (i.e., the value that realistically reproduces the same posture as the real world) is determined, yielding the configuration information for that motion state.

The simulation unit 905 is configured to simulate a first motion posture of the target object in the physical engine by using the configured target model, where the first motion posture is a motion posture of the target object in a first motion state.

The physics engine mainly performs mechanical simulation of collisions between objects, and between objects and scenes, in the game world, as well as mechanical simulation of an object's skeletal motion after a collision.

The operation of a game engine is generally divided into two parts: updating game data and rendering the game. During simulation, the physics engine uses the updated data from the game engine, and the game engine in turn renders using the physics engine's simulation data. The first motion posture is simulation data of the physics engine; after it is input into the game engine, the game engine can use it to render the corresponding game picture.

A display unit 907 for displaying a moving picture matched with the first moving posture of the target object.

For example, a skeletal structure of the target object may be generated and bound to the corresponding skin, affecting all of its vertices; the skin corresponds to and is associated with the skeletal structure, and skeletal animation can then be used to control the motion of the target object in the game application. In skeletal animation, the target object has a skeletal structure composed of interconnected "bones", and its motion is generated by changing the orientation and position of the bones (i.e., the motion posture, such as the first motion posture described above). After skinning, each bone in the skeletal structure influences the corresponding skin vertices, so the skeletal animation exhibits motion with a more realistic physical effect under the specified appearance.
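The bone-to-vertex relationship described above can be sketched as linear blend skinning, reduced to 1-D positions for brevity (the function and weights are illustrative, not taken from any engine):

```python
# Minimal linear-blend-skinning sketch: each skin vertex is moved by the
# bones that influence it, weighted per bone.

def skin_vertex(rest_pos, influences):
    # influences: list of (bone_offset, weight); the weights sum to 1.
    return rest_pos + sum(offset * w for offset, w in influences)

# A vertex halfway between two bones follows both equally.
moved = skin_vertex(0.0, [(2.0, 0.5), (4.0, 0.5)])
print(moved)
```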

It should be noted that the first obtaining unit 901 in this embodiment may be configured to execute step S202 in this embodiment, the first configuring unit 903 in this embodiment may be configured to execute step S204 in this embodiment, the simulating unit 905 in this embodiment may be configured to execute step S206 in this embodiment, and the display unit 907 in this embodiment may be configured to execute step S208 in this embodiment.

It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.

Through the modules, different physical states of the target object in different motion states in the virtual scene can occur, the physical states are controlled through the physical attributes, so that the physical states of the target object in different motion states can be adjusted through the physical attributes, animation similar to the real world is presented in a rendered picture, the technical problem of low accuracy of game animation in the related technology can be solved, and the technical effect of improving the accuracy of the game animation is achieved.

Optionally, the first obtaining unit includes: the system comprises an acquisition module, a processing module and a control module, wherein the acquisition module is used for acquiring a plurality of controllers, each controller in the plurality of controllers is obtained by packaging one piece of configuration information in a plurality of pieces of configuration information, each piece of configuration information in the plurality of pieces of configuration information is matched with one motion state, and the motion states matched with any two pieces of configuration information are different; the first searching module is used for searching the first controller from the plurality of controllers according to the first motion state, wherein the first controller is obtained by packaging the first configuration information.

Optionally, the apparatus further comprises: a second acquisition unit configured to acquire, before acquiring the plurality of controllers, an attribute value representing a physical attribute of a target moving state of the plurality of moving states, wherein the target moving state matches a target controller to be created of the plurality of controllers; and the packaging unit is used for packaging the acquired attribute values of the physical attributes as target configuration information into a target controller, wherein the plurality of pieces of configuration information comprise the target configuration information.

Optionally, the first obtaining unit is further configured to, after displaying the moving picture matching the first moving posture of the target object, obtain, when the moving state of the target object changes from the first moving state to the second moving state, second configuration information matching the second moving state; the first configuration unit is further used for configuring the physical attributes of the target model according to the second configuration information; the simulation unit is further used for simulating a second motion posture of the target object in the physical engine by using the configured target model, wherein the second motion posture is a motion posture of the target object in a second motion state; and the display unit is also used for displaying the moving picture matched with the second moving posture of the target object.

Optionally, the apparatus further comprises: the first creating unit is used for creating a plurality of animation sets before the first configuration information of the target object is obtained, wherein each animation set in the plurality of animation sets corresponds to one of a plurality of motion states, the motion states corresponding to any two animation sets in the plurality of animation sets are different, and one animation in each animation set corresponds to one gesture in the corresponding motion state.

Optionally, the display unit comprises: the second searching module is used for searching a first animation set matched with the first motion state from the plurality of animation sets; and the display module is used for searching the animation matched with the first motion posture in the first animation set and displaying the motion picture generated based on the animation matched with the first motion posture.

Optionally, the apparatus further comprises: a second creating unit configured to create a target model before acquiring first configuration information of a target object, wherein the target model includes a plurality of joints, each joint corresponding to a first component and a second component, the first component being a component that is expressible in a moving picture and is affected by a third component, the second component being a component that is not expressible in the moving picture and is used to affect a fourth component, the third component being different from the first component, the second component, and the fourth component, the fourth component being a non-rigid body; and a second configuration unit, configured to configure a plurality of controllers for the first component, wherein each controller in the plurality of controllers corresponds to one of the plurality of motion states, and the motion states corresponding to any two controllers in the plurality of controllers are different.

Optionally, the apparatus further comprises a generation unit configured to, in a case where the rigid body attribute of the first component is changed from rigid body to non-rigid body as configured by the first configuration information after the physical attributes of the target model are configured according to the first configuration information, generate an animation indicating that the first component undergoes a posture change, where the position during the posture change is

pos = (1-α)*pos_last + α*pos_anim

and the rotation angle during the posture change is

rot = (sin((1-α)θ)/sin θ)*rot_last + (sin(αθ)/sin θ)*rot_anim

where pos_last is the position of the first component before switching the controller, pos_anim is the position of the first component calculated from the second component, α is a coefficient whose value lies between 0 and 1, rot_last is the rotation angle of the first component before switching the controller, rot_anim is the rotation angle of the first component calculated from the second component, and θ is the angle between rot_last and rot_anim.

It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may be operated in a hardware environment as shown in fig. 1, and may be implemented by software, or may be implemented by hardware, where the hardware environment includes a network environment.

In the technical solution of the present application, the controller packages the physical state, and a transition between non-Kinematic and Kinematic is performed when controllers are switched, enabling the fusion of skeletal animation data and physical data. The scheme reduces the artists' workload: where one animation would otherwise need to be made into many to adapt to the physical interactions of the scene, a single animation combined with the real-time calculation of the physics engine achieves the corresponding effect, with a better display result. A game scene and the player interact freely and much is unpredictable; enumerating physical interaction cases in advance is unrealistic and forces simplification, yielding an imperfect result. The workload of game developers is also reduced: multiple controllers are provided for switching physical states, the character's actions are divided into several physical states, and one controller file is configured for each. When the character's physical state changes, only the controller file needs to be switched; there is no need to modify the physical attributes of each rigid body individually, nor to handle the transition between two physical states separately.

According to another aspect of the embodiment of the application, a server or a terminal for implementing the display method of the screen is also provided.

Fig. 10 is a block diagram of a terminal according to an embodiment of the present application. As shown in fig. 10, the terminal may include one or more processors 1001 (only one is shown in fig. 10), a memory 1003, and a transmission apparatus 1005; the terminal may further include an input-output device 1007.

The memory 1003 may be used to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for displaying a picture in the embodiment of the present application, and the processor 1001 executes various functional applications and data processing by running the software programs and modules stored in the memory 1003, that is, implements the above-described method for displaying a picture. The memory 1003 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1003 may further include memory located remotely from the processor 1001, which may be connected to a terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The transmission device 1005 is used for receiving or transmitting data via a network, and can also be used for data transfer between the processor and the memory. Examples of the network may include wired and wireless networks. In one example, the transmission device 1005 includes a Network Interface Controller (NIC) that can be connected to a router via a network cable to communicate with the Internet or a local area network. In one example, the transmission device 1005 is a Radio Frequency (RF) module used for communicating with the Internet wirelessly.

Specifically, the memory 1003 is used to store an application program.

The processor 1001 may call an application stored in the memory 1003 via the transmitting device 1005 to perform the following steps:

acquiring first configuration information of a target object, wherein the first configuration information is matched with a first motion state of the target object, and the target object is a movable object in a virtual scene;

configuring physical attributes of a target model according to first configuration information, wherein the target model is used for rendering a target object in a virtual scene;

simulating a first motion posture of the target object in a physical engine by using the configured target model, wherein the first motion posture is a motion posture of the target object in a first motion state;

and displaying the moving picture matched with the first moving posture of the target object.

The processor 1001 is further configured to perform the following steps:

creating a target model, wherein the target model comprises a plurality of joints, each joint corresponds to a first part and a second part, the first part is a part which can appear in a moving picture and is influenced by a third part, the second part is a part which cannot appear in the moving picture and is used for influencing a fourth part, the third part is different from the first part, the second part and the fourth part, and the fourth part is a non-rigid body;

configuring a plurality of controllers for the first component, wherein each controller in the plurality of controllers corresponds to one of a plurality of motion states, and the motion states corresponding to any two controllers in the plurality of controllers are different.

By adopting the embodiments of the present application, first configuration information of a target object is acquired, where the first configuration information matches a first motion state of the target object and the target object is a movable object in a virtual scene; the physical attributes of a target model are configured according to the first configuration information, where the target model is used to render the target object in the virtual scene; a first motion posture of the target object is simulated in a physics engine using the configured target model, where the first motion posture is the motion posture of the target object in the first motion state; and a moving picture matching the first motion posture of the target object is displayed. Different motion states of the target object in the virtual scene may have different physical states, and those physical states are controlled by the physical attributes; the physical state of the target object in each motion state can therefore be adjusted through the physical attributes, so that an animation resembling the real world is presented in the rendered picture, solving the technical problem of low accuracy of game animation in the related art and achieving the technical effect of improving the accuracy of the game animation.

Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.

It will be understood by those skilled in the art that the structure shown in fig. 10 is merely illustrative, and the terminal may be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palm computer, a Mobile Internet Device (MID), or a PAD. Fig. 10 does not limit the structure of the electronic device; for example, the terminal may include more or fewer components than shown in fig. 10 (e.g., a network interface or display device), or have a configuration different from that shown in fig. 10.

Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.

Embodiments of the present application also provide a storage medium. Alternatively, in the present embodiment, the storage medium may be a program code for executing a display method of a screen.

Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.

Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:

acquiring first configuration information of a target object, wherein the first configuration information is matched with a first motion state of the target object, and the target object is a movable object in a virtual scene;

configuring physical attributes of a target model according to first configuration information, wherein the target model is used for rendering a target object in a virtual scene;

simulating a first motion posture of the target object in a physical engine by using the configured target model, wherein the first motion posture is a motion posture of the target object in a first motion state;

and displaying the moving picture matched with the first moving posture of the target object.

Optionally, the storage medium is further arranged to store program code for performing the steps of:

creating a target model, wherein the target model comprises a plurality of joints, each joint corresponds to a first part and a second part, the first part is a part which can appear in a moving picture and is influenced by a third part, the second part is a part which cannot appear in the moving picture and is used for influencing a fourth part, the third part is different from the first part, the second part and the fourth part, and the fourth part is a non-rigid body;

configuring a plurality of controllers for the first component, wherein each controller in the plurality of controllers corresponds to one of a plurality of motion states, and the motion states corresponding to any two controllers in the plurality of controllers are different.

Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.

Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk or an optical disk, and other media capable of storing program code.

The serial numbers of the above embodiments of the present application are merely for description and do not imply any ranking of the embodiments.

If the integrated unit in the above embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.

In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Embodiments of the present application also include these and other aspects as specified in the following numbered clauses:

1. a method for displaying a picture, comprising:

acquiring first configuration information of a target object, wherein the first configuration information is matched with a first motion state of the target object, and the target object is a movable object in a virtual scene;

configuring physical attributes of a target model according to the first configuration information, wherein the target model is used for rendering the target object in the virtual scene;

simulating a first motion posture of the target object in a physical engine by using the configured target model, wherein the first motion posture is a motion posture of the target object in the first motion state;

and displaying the moving picture matched with the first motion posture of the target object.

2. The method of clause 1, wherein obtaining first configuration information for the target object comprises:

obtaining a plurality of controllers, wherein each controller in the plurality of controllers is obtained by encapsulating one configuration information in a plurality of configuration information, each configuration information in the plurality of configuration information is matched with one motion state, and the motion states matched with any two configuration information are different;

and finding a first controller from the plurality of controllers according to the first motion state, wherein the first controller is obtained by packaging the first configuration information.

3. The method of clause 2, wherein prior to obtaining the plurality of controllers, the method further comprises creating each controller of the plurality of controllers as follows:

acquiring an attribute value of a physical attribute representing a target motion state among a plurality of motion states, wherein the target motion state is matched with a target controller to be created among the plurality of controllers;

and encapsulating the acquired attribute values of the physical attributes as target configuration information into the target controller, wherein the plurality of configuration information comprise the target configuration information.
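Clauses 2 and 3 together describe encapsulating per-state attribute values into controllers and then looking one controller up by motion state. A minimal sketch, in which the Controller class and its fields are assumptions:

```python
# Sketch of controller encapsulation and lookup; the Controller class
# and its field names are illustrative assumptions.
class Controller:
    def __init__(self, motion_state, configuration):
        self.motion_state = motion_state    # state this controller matches
        self.configuration = configuration  # encapsulated physical attributes

def create_controllers(attribute_values_by_state):
    # One controller per motion state; no two controllers share a state.
    return [Controller(state, attrs)
            for state, attrs in attribute_values_by_state.items()]

def find_controller(controllers, motion_state):
    # Find the controller obtained by encapsulating the matching configuration.
    return next(c for c in controllers if c.motion_state == motion_state)

controllers = create_controllers({
    "standing": {"mass": 60.0, "damping": 0.9},
    "ragdoll":  {"mass": 60.0, "damping": 0.2},
})
first = find_controller(controllers, "ragdoll")
```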

4. The method of clause 1, wherein after displaying the moving picture that matches the first motion pose of the target object, the method further comprises:

under the condition that the motion state of the target object is changed from the first motion state to a second motion state, acquiring second configuration information matched with the second motion state;

configuring the physical attribute of the target model according to the second configuration information;

simulating a second motion posture of the target object in the physical engine by using the configured target model, wherein the second motion posture is a motion posture of the target object in the second motion state;

and displaying the moving picture matched with the second motion posture of the target object.

5. The method of clause 1, wherein prior to obtaining the first configuration information of the target object, the method further comprises:

creating a plurality of animation sets, wherein each animation set in the plurality of animation sets corresponds to one of a plurality of motion states, the motion states corresponding to any two animation sets in the plurality of animation sets are different, and one animation in each animation set corresponds to one gesture in the corresponding motion state.

6. The method of clause 5, wherein displaying the moving picture that matches the first motion pose of the target object comprises:

searching a first animation set matched with the first motion state from the plurality of animation sets;

and searching for the animation matched with the first motion gesture in the first animation set, and displaying the motion picture generated based on the animation matched with the first motion gesture.
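The two-level lookup of clause 6 (animation set by motion state, then animation by posture) can be sketched as nested dictionaries; the dictionary layout and animation names are assumptions:

```python
# Sketch of clause 6: pick the animation set for the motion state, then
# the animation for the posture. Layout and names are illustrative.
animation_sets = {
    "walking": {"step_left": "anim_walk_L", "step_right": "anim_walk_R"},
    "falling": {"tumble": "anim_fall_tumble"},
}

def find_animation(sets, motion_state, motion_pose):
    first_set = sets[motion_state]  # animation set matched with the state
    return first_set[motion_pose]   # animation matched with the posture

anim = find_animation(animation_sets, "walking", "step_left")
```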

7. The method of any of clauses 1 to 6, wherein prior to obtaining the first configuration information of the target object, the method further comprises:

creating the target model, wherein the target model comprises a plurality of joints, each joint corresponding to a first part and a second part, the first part being a part that can appear in the moving picture and is affected by a third part, the second part being a part that cannot appear in the moving picture and is used to affect a fourth part, the third part being different from the first part, the second part, and the fourth part, the fourth part being a non-rigid body;

configuring a plurality of controllers for the first component, wherein each controller in the plurality of controllers corresponds to one of a plurality of motion states, and the motion states corresponding to any two controllers in the plurality of controllers are different.

8. The method of clause 7, wherein after configuring the physical attributes of the target model according to the first configuration information, the method further comprises:

generating an animation indicating that the first component undergoes a posture change in a case where the rigid-body attribute of the first component changes from a rigid body to the non-rigid body configured by the first configuration information,

wherein the position during the posture change is

pos = (1 - α) * pos_last + α * pos_anim

and the rotation angle during the posture change is

rot = (sin((1 - α) * θ) / sin(θ)) * rot_last + (sin(α * θ) / sin(θ)) * rot_anim

wherein pos_last is the position of the first component before the controller is switched, pos_anim is the position of the first component calculated from the second component, α is a coefficient whose value lies between 0 and 1, rot_last is the rotation angle of the first component before the controller is switched, rot_anim is the rotation angle of the first component calculated from the second component, and θ is the included angle between rot_last and rot_anim.
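The position blend above is a linear interpolation, and the rotation blend, given the included angle θ between rot_last and rot_anim, has the standard spherical-interpolation form. A numeric sketch, treating positions and rotations as plain 2D vectors purely for illustration:

```python
import math

def blend_position(pos_last, pos_anim, alpha):
    # pos = (1 - alpha) * pos_last + alpha * pos_anim
    return tuple((1 - alpha) * a + alpha * b
                 for a, b in zip(pos_last, pos_anim))

def blend_rotation(rot_last, rot_anim, alpha, theta):
    # Spherical interpolation, with theta the included angle between
    # rot_last and rot_anim (treated here as plain vectors).
    if theta == 0:
        return rot_last
    w_last = math.sin((1 - alpha) * theta) / math.sin(theta)
    w_anim = math.sin(alpha * theta) / math.sin(theta)
    return tuple(w_last * a + w_anim * b
                 for a, b in zip(rot_last, rot_anim))

pos = blend_position((0.0, 0.0), (2.0, 4.0), 0.25)  # -> (0.5, 1.0)
rot = blend_rotation((1.0, 0.0), (0.0, 1.0), 0.5, math.pi / 2)
```

At α = 0 the blend returns the pre-switch pose and at α = 1 the pose computed from the second component, so ramping α from 0 to 1 over several frames yields the transition animation described above.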

9. A display device of a screen, comprising:

the device comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring first configuration information of a target object, the first configuration information is matched with a first motion state of the target object, and the target object is a movable object in a virtual scene;

a first configuration unit, configured to configure physical properties of a target model according to the first configuration information, where the target model is used to render the target object in the virtual scene;

the simulation unit is used for simulating a first motion posture of the target object in a physical engine by using the configured target model, wherein the first motion posture is a motion posture of the target object in the first motion state;

and the display unit is used for displaying the moving picture matched with the first motion posture of the target object.

10. The apparatus according to clause 9, wherein the first obtaining unit includes:

the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a plurality of controllers, each controller in the plurality of controllers is obtained by packaging one piece of configuration information in a plurality of pieces of configuration information, each piece of configuration information in the plurality of pieces of configuration information is matched with one motion state, and the motion states matched with any two pieces of configuration information are different;

a first searching module, configured to search for a first controller from the multiple controllers according to the first motion state, where the first controller is obtained by encapsulating the first configuration information.

11. The apparatus of clause 10, wherein the apparatus further comprises:

a second acquisition unit configured to acquire, before acquiring a plurality of controllers, an attribute value representing a physical attribute of a target motion state among a plurality of motion states, wherein the target motion state matches a target controller to be created among the plurality of controllers;

and the packaging unit is used for packaging the acquired attribute values of the physical attributes as target configuration information into the target controller, wherein the plurality of pieces of configuration information comprise the target configuration information.

12. The apparatus of clause 9, wherein,

the first obtaining unit is further configured to, after the moving picture matched with the first motion posture of the target object is displayed, obtain second configuration information matched with a second motion state when the motion state of the target object changes from the first motion state to the second motion state;

the first configuration unit is further configured to configure the physical attribute of the target model according to the second configuration information;

the simulation unit is further configured to simulate a second motion posture of the target object in the physics engine by using the configured target model, where the second motion posture is a motion posture of the target object in the second motion state;

the display unit is further used for displaying the moving picture matched with the second motion posture of the target object.

13. The apparatus of clause 9, wherein the apparatus further comprises:

the first creating unit is used for creating a plurality of animation sets before first configuration information of the target object is acquired, wherein each animation set in the animation sets corresponds to one of a plurality of motion states, the motion states corresponding to any two animation sets in the animation sets are different, and one animation in each animation set corresponds to one gesture in the corresponding motion state.

14. The apparatus of clause 13, wherein the display unit comprises:

the second searching module is used for searching a first animation set matched with the first motion state from a plurality of animation sets;

and the display module is used for searching the animation matched with the first motion posture in the first animation set and displaying the motion picture generated based on the animation matched with the first motion posture.

15. The apparatus of any of clauses 9 to 14, wherein the apparatus further comprises:

a second creating unit configured to create the target model before the first configuration information of the target object is acquired, wherein the target model includes a plurality of joints, each joint corresponds to a first component and a second component, the first component is a component that can appear in the moving picture and is affected by a third component, the second component is a component that cannot appear in the moving picture and is used to affect a fourth component, the third component is different from the first component, the second component, and the fourth component, and the fourth component is a non-rigid body;

a second configuration unit, configured to configure a plurality of controllers for the first component, where each controller in the plurality of controllers corresponds to one of a plurality of motion states, and the motion states corresponding to any two controllers in the plurality of controllers are different.

16. The apparatus of clause 15, wherein the apparatus further comprises:

a generation unit configured to, after the physical attributes of the target model are configured according to the first configuration information, generate an animation indicating that the first component undergoes a posture change in a case where the rigid-body attribute of the first component changes from a rigid body to the non-rigid body configured by the first configuration information,

wherein the position during the posture change is

pos = (1 - α) * pos_last + α * pos_anim

and the rotation angle during the posture change is

rot = (sin((1 - α) * θ) / sin(θ)) * rot_last + (sin(α * θ) / sin(θ)) * rot_anim

wherein pos_last is the position of the first component before the controller is switched, pos_anim is the position of the first component calculated from the second component, α is a coefficient whose value lies between 0 and 1, rot_last is the rotation angle of the first component before the controller is switched, rot_anim is the rotation angle of the first component calculated from the second component, and θ is the included angle between rot_last and rot_anim.

17. A storage medium, wherein the storage medium comprises a stored program, wherein the program when executed performs the method of any of clauses 1 to 8 above.

18. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the method of any of clauses 1 to 8 above via the computer program.
