Virtual object control method and device, storage medium and electronic equipment

Document No.: 146220 · Publication date: 2021-10-26

Note: This technology, Virtual object control method and device, storage medium and electronic equipment (虚拟对象的控制方法、装置、存储介质及电子设备), was created by Xia Yan (夏琰) on 2021-07-23. Abstract: The embodiments of this application disclose a method and apparatus for controlling a virtual object, a storage medium, and an electronic device. The method comprises: acquiring motion data of a preset part of a user; controlling a first virtual part of the virtual object to move according to the motion data and a first association relationship between the preset part of the user and the first virtual part; and controlling a second virtual part of the virtual object to move according to a second association relationship between the first virtual part and the second virtual part. The embodiments of this application can improve the harmony and flexibility of the virtual object's movement and enrich its expressive effect.

1. A method for controlling a virtual object, the method comprising:

acquiring motion data of a preset part of a user;

controlling a first virtual part of the virtual object to move according to the motion data and a first association relationship between the preset part of the user and the first virtual part of the virtual object;

and controlling a second virtual part of the virtual object to move according to a second association relationship between the first virtual part and the second virtual part of the virtual object.

2. The method for controlling a virtual object according to claim 1, wherein the first association relationship is a one-to-one correspondence between the preset part of the user and the first virtual part of the virtual object;

the second association relationship comprises a correspondence between the first virtual part and the second virtual part of the virtual object, and a binding relationship between a motion state of the first virtual part and a motion state of the corresponding second virtual part.

3. The method for controlling a virtual object according to claim 2, wherein the motion state of the first virtual part includes a first motion parameter, the first motion parameter including at least one of: a rotation parameter, an opening-and-closing parameter, a swing parameter, and a scaling parameter; the motion state of the second virtual part includes a second motion parameter, the second motion parameter including at least one of: a rotation parameter, an opening-and-closing parameter, a swing parameter, and a scaling parameter;

the binding relationship is a correspondence between the first motion parameter and the second motion parameter.

4. The method according to claim 3, wherein the binding relationship is a proportional relationship between a degree of change of the first motion parameter and a degree of change of the second motion parameter.

5. The method according to claim 3, wherein the binding relationship is a correspondence relationship between a change frequency of the first motion parameter and a change frequency of the second motion parameter.

6. The method for controlling a virtual object according to claim 1, wherein the virtual object is constructed with a virtual model including a first bone corresponding to the first virtual part;

the controlling a first virtual part of the virtual object to move according to the motion data and a first association relationship between the preset part of the user and the first virtual part of the virtual object includes:

determining bone data of the first bone according to the motion data and an association relationship between the preset part of the user and the first bone corresponding to the first virtual part;

and controlling the first virtual part of the virtual object to move according to the bone data of the first bone.

7. The method for controlling a virtual object according to claim 6, wherein the virtual model further comprises a second bone corresponding to the second virtual part;

the controlling a second virtual part of the virtual object to move according to a second association relationship between the first virtual part and the second virtual part of the virtual object includes:

determining bone data of the second bone according to the bone data of the first bone and an association relationship between the first bone corresponding to the first virtual part and the second bone corresponding to the second virtual part;

and controlling the second virtual part of the virtual object to move according to the bone data of the second bone.

8. The method for controlling a virtual object according to claim 6 or 7, wherein the bone data comprises at least one of: rotation data, scaling data, and movement data.

9. The method for controlling a virtual object according to claim 1, wherein the controlling a second virtual part of the virtual object to move according to a second association relationship between the first virtual part and the second virtual part of the virtual object includes:

detecting the number of movements of the first virtual part;

and controlling the second virtual part of the virtual object to move according to the second association relationship between the first virtual part and the second virtual part of the virtual object each time the first virtual part has moved a preset number of times.

10. The method for controlling a virtual object according to claim 6, wherein the virtual model comprises at least one of: a three-dimensional model and a two-dimensional model.

11. The method for controlling a virtual object according to claim 1, wherein the preset part is a head, the first virtual part is a head, and the second virtual part includes at least one of: a torso, a limb, an ear, a tail, and a wing.

12. The method for controlling a virtual object according to claim 1, wherein the preset part and the first virtual part are each one of the five sense organs, and the second virtual part includes at least one of: one of the five sense organs different from the first virtual part, a torso, a limb, a tail, and a wing.

13. An apparatus for controlling a virtual object, the apparatus comprising:

an acquisition module, configured to acquire motion data of a preset part of a user;

a first control module, configured to control a first virtual part of the virtual object to move according to the motion data and a first association relationship between the preset part of the user and the first virtual part of the virtual object; and

a second control module, configured to control a second virtual part of the virtual object to move according to a second association relationship between the first virtual part and the second virtual part of the virtual object.

14. A computer-readable storage medium, characterized in that it stores a computer program adapted to be loaded by a processor for performing the steps in the method of controlling a virtual object according to any one of claims 1-12.

15. An electronic device, characterized in that the electronic device comprises a memory in which a computer program is stored and a processor that executes the steps in the method of controlling a virtual object according to any one of claims 1 to 12 by calling the computer program stored in the memory.

Technical Field

The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for controlling a virtual object, a storage medium, and an electronic device.

Background

Virtual objects such as virtual anchors and virtual idols are increasingly popular, creating for fans a realistic sense of cross-dimensional communication through advertising, endorsements, performances, webcast livestreaming, and the like. However, an existing virtual object achieves synchronous head movement by capturing the head motion of a real person, while its body can only perform preset actions; the overall movement of the virtual object is therefore quite inharmonious, resulting in poor flexibility and poor expressiveness.

Disclosure of Invention

The embodiments of this application provide a method and apparatus for controlling a virtual object, a storage medium, and an electronic device, which can improve the harmony and flexibility of the virtual object's movement and enrich its expressive effect.

An embodiment of this application provides a method for controlling a virtual object, comprising the following steps:

acquiring motion data of a preset part of a user;

controlling a first virtual part of the virtual object to move according to the motion data and a first association relationship between the preset part of the user and the first virtual part of the virtual object;

and controlling a second virtual part of the virtual object to move according to a second association relationship between the first virtual part and the second virtual part of the virtual object.

Optionally, the first association relationship is a one-to-one correspondence between the preset part of the user and the first virtual part of the virtual object;

the second association relationship comprises a correspondence between the first virtual part and the second virtual part of the virtual object, and a binding relationship between a motion state of the first virtual part and a motion state of the corresponding second virtual part.

Optionally, the motion state of the first virtual part includes a first motion parameter, the first motion parameter including at least one of: a rotation parameter, an opening-and-closing parameter, a swing parameter, and a scaling parameter; the motion state of the second virtual part includes a second motion parameter, the second motion parameter including at least one of: a rotation parameter, an opening-and-closing parameter, a swing parameter, and a scaling parameter;

the binding relationship is a correspondence between the first motion parameter and the second motion parameter.

Optionally, the binding relationship is a proportional relationship between a degree of change of the first motion parameter and a degree of change of the second motion parameter.

Optionally, the binding relationship is a correspondence between a change frequency of the first motion parameter and a change frequency of the second motion parameter.

Optionally, the virtual object is constructed with a virtual model comprising a first bone corresponding to the first virtual part;

the controlling a first virtual part of the virtual object to move according to the motion data and a first association relationship between the preset part of the user and the first virtual part of the virtual object includes:

determining bone data of the first bone according to the motion data and an association relationship between the preset part of the user and the first bone corresponding to the first virtual part;

and controlling the first virtual part of the virtual object to move according to the bone data of the first bone.

Optionally, the virtual model further comprises a second bone corresponding to the second virtual part;

the controlling a second virtual part of the virtual object to move according to a second association relationship between the first virtual part and the second virtual part of the virtual object includes:

determining bone data of the second bone according to the bone data of the first bone and an association relationship between the first bone corresponding to the first virtual part and the second bone corresponding to the second virtual part;

and controlling the second virtual part of the virtual object to move according to the bone data of the second bone.

Optionally, the bone data comprises at least one of: rotation data, scaling data, and movement data.

Optionally, the controlling a second virtual part of the virtual object to move according to a second association relationship between the first virtual part and the second virtual part of the virtual object includes:

detecting the number of movements of the first virtual part;

and controlling the second virtual part of the virtual object to move according to the second association relationship between the first virtual part and the second virtual part of the virtual object each time the first virtual part has moved a preset number of times.

Optionally, the virtual model comprises at least one of: a three-dimensional model and a two-dimensional model.

Optionally, the preset part is a head, the first virtual part is a head, and the second virtual part includes at least one of: a torso, a limb, an ear, a tail, and a wing.

Optionally, the preset part and the first virtual part are each one of the five sense organs, and the second virtual part includes at least one of: one of the five sense organs different from the first virtual part, a torso, a limb, a tail, and a wing.

An embodiment of this application further provides an apparatus for controlling a virtual object, the apparatus comprising:

an acquisition module, configured to acquire motion data of a preset part of a user;

a first control module, configured to control a first virtual part of the virtual object to move according to the motion data and a first association relationship between the preset part of the user and the first virtual part of the virtual object; and

a second control module, configured to control a second virtual part of the virtual object to move according to a second association relationship between the first virtual part and the second virtual part of the virtual object.

An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, where the computer program is suitable for being loaded by a processor to perform the steps in the method for controlling a virtual object according to any of the above embodiments.

An embodiment of the present application further provides an electronic device, where the electronic device includes a memory and a processor, where the memory stores a computer program, and the processor executes the steps in the method for controlling a virtual object according to any of the above embodiments by calling the computer program stored in the memory.

With the method, apparatus, storage medium, and electronic device for controlling a virtual object provided by the embodiments of this application, motion data of a preset part of a user is acquired; a first virtual part of the virtual object is controlled to move according to the motion data and the association relationship between the preset part of the user and the first virtual part; and a second virtual part of the virtual object is controlled to move according to the association relationship between the first virtual part and the second virtual part. When the first virtual part moves along with the preset part of the user, the second virtual part is controlled to move at the same time according to the association relationship between the two virtual parts. In other words, the movement of multiple virtual parts of the virtual object can be controlled merely by acquiring motion data of one preset part of the user, which improves the harmony and flexibility of the virtual object's movement and enriches its expressive effect.

Drawings

To describe the technical solutions in the embodiments of this application more clearly, the drawings required for the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of this application, and those skilled in the art can derive other drawings from them without creative effort.

Fig. 1 is a system schematic diagram of a control apparatus for a virtual object according to an embodiment of the present application.

Fig. 2 is a schematic flowchart of a control method for a virtual object according to an embodiment of the present application.

Fig. 3a is a schematic diagram illustrating the effect of the left eye of a three-dimensional virtual object opening in the method for controlling a virtual object according to an embodiment of the present application.

Fig. 3b is a schematic diagram illustrating the effect of the left eye of a two-dimensional virtual object opening in the method for controlling a virtual object according to an embodiment of the present application.

Fig. 4a is a schematic diagram illustrating the effect of the left eye of a three-dimensional virtual object looking up in the method for controlling a virtual object according to an embodiment of the present application.

Fig. 4b is a schematic diagram illustrating the effect of the left eye of a two-dimensional virtual object looking up in the method for controlling a virtual object according to an embodiment of the present application.

Fig. 5a is a schematic diagram illustrating the effect of the left eye of a three-dimensional virtual object looking down in the method for controlling a virtual object according to an embodiment of the present application.

Fig. 5b is a schematic diagram illustrating the effect of the left eye of a two-dimensional virtual object looking down in the method for controlling a virtual object according to an embodiment of the present application.

Fig. 6a is a schematic diagram illustrating a first effect of a first virtual part driving a second virtual part to move in a control method of a virtual object according to an embodiment of the present application.

Fig. 6b is a schematic diagram illustrating a second effect of the first virtual part driving the second virtual part to move in the method for controlling a virtual object according to the embodiment of the present application.

Fig. 6c is a schematic diagram illustrating a third effect of the control method for a virtual object according to the embodiment of the present application, in which the first virtual part drives the second virtual part to move.

Fig. 6d is a fourth effect schematic diagram of the control method of the virtual object according to the embodiment of the present application, in which the first virtual part drives the second virtual part to move.

Fig. 7a is a schematic effect diagram of a control method of a virtual object according to an embodiment of the present application when a first virtual part and a second virtual part are not moving.

Fig. 7b is a fifth effect schematic diagram of the control method of the virtual object according to the embodiment of the present application, in which the first virtual part drives the second virtual part to move.

Fig. 7c is a sixth effect schematic diagram of the control method of the virtual object according to the embodiment of the present application, in which the first virtual part drives the second virtual part to move.

Fig. 7d is a seventh effect schematic diagram of the control method of the virtual object according to the embodiment of the present application, in which the first virtual part drives the second virtual part to move.

Fig. 7e is an eighth schematic effect diagram of the control method of the virtual object according to the embodiment of the present application, in which the first virtual part drives the second virtual part to move.

Fig. 7f is a schematic diagram illustrating a ninth effect of the control method for a virtual object according to the embodiment of the present application, in which the first virtual part drives the second virtual part to move.

Fig. 8 is another schematic flow chart of a control method of a virtual object according to an embodiment of the present application.

Fig. 9 is a schematic structural diagram of a control apparatus for a virtual object according to an embodiment of the present application.

Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

The embodiments of this application provide a method and apparatus for controlling a virtual object, a storage medium, and an electronic device. Specifically, the method for controlling a virtual object in the embodiments of this application may be executed by an electronic device, where the electronic device may be a terminal or a server. The terminal may be a device such as a smartphone, a tablet computer, a notebook computer, a touch screen, a personal computer (PC), or a personal digital assistant (PDA), and may further include a client, which may be an application client, a browser client carrying control software of a virtual object, an instant messaging client, or the like. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a CDN (Content Delivery Network), and big data and artificial intelligence platforms.

For example, when the method for controlling a virtual object is executed on the terminal, the terminal device stores the control software of the virtual object. The terminal device interacts with the user through a graphical user interface; for example, the terminal device downloads, installs, and runs the control software of the virtual object. The terminal device may provide the graphical user interface to the user in a variety of ways; for example, the graphical user interface may be rendered on a display screen of the terminal device or presented by holographic projection. For example, the terminal device may include a touch display screen and a processor, the touch display screen being configured to present the graphical user interface, which includes a control interface of the virtual object, and to receive operation instructions generated by the user acting on the graphical user interface, and the processor being configured to run the control software of the virtual object, generate the graphical user interface, respond to the operation instructions, and control the display of the graphical user interface on the touch display screen.

Referring to fig. 1, fig. 1 is a system schematic diagram of a control apparatus for a virtual object according to an embodiment of the present application. The system may include at least one terminal 1000, at least one server 2000, at least one database 3000, and a network 4000. The terminal 1000 held by the user can be connected to different servers through the network 4000. The terminal 1000 can be any device having computing hardware capable of supporting and executing a software product corresponding to the method for controlling a virtual object. The terminal 1000 can include a motion capture device and/or a face capture device, such as a camera, for collecting the user's motion data. The motion capture device and/or the face capture device may be integrated in one terminal 1000 (e.g., a smartphone, a tablet computer, or a notebook computer), or may be separately configured terminals 1000, as shown in fig. 1.

In addition, when the system includes a plurality of terminals 1000, a plurality of servers 2000, and a plurality of networks 4000, different terminals 1000 may be connected to each other through different networks 4000 and different servers 2000. The network 4000 may be a wireless network or a wired network; for example, the wireless network may be a WLAN (Wireless Local Area Network), a LAN (Local Area Network), a cellular network, a 2G network, a 3G network, a 4G network, or a 5G network. In addition, different terminals 1000 may be connected to other terminals or to a server through their own Bluetooth network or hotspot network. For example, multiple users may be online through different terminals 1000, connected through an appropriate network, and synchronized with each other, to support multiple people controlling a virtual object together. In addition, the system may include a plurality of databases 3000 coupled to different servers 2000, and information about the virtual object, such as motion data, association relationships, and bone data, may be stored in the databases 3000.

The embodiments of this application provide a method for controlling a virtual object, which may be executed by a terminal or a server. The embodiments of this application are described taking the case where the method is executed by a terminal as an example. The terminal includes a touch display screen and a processor, the touch display screen being configured to present a graphical user interface and to receive operation instructions generated by the user acting on the graphical user interface. When the user operates the graphical user interface through the touch display screen, the graphical user interface can control local content of the terminal by responding to the received operation instructions, and can also control content of a peer server by responding to the received operation instructions. For example, the operation instructions generated by the user acting on the graphical user interface include an instruction to start the control software of the virtual object, and the processor is configured to start the control software of the virtual object after receiving this instruction. The touch display screen is a multi-touch-sensitive screen capable of sensing touch or slide operations performed at multiple points on the screen at the same time. The user performs touch operations on the graphical user interface with a finger, and when the graphical user interface detects a touch operation, different virtual objects in the graphical user interface are controlled to perform actions corresponding to the touch operation. The processor may be configured to present a corresponding interface in response to the operation instructions generated by the user's touch operations.

The following is a detailed description of specific embodiments.

In the present embodiment, description will be made from the viewpoint of a control apparatus of a virtual object, which may be specifically integrated in an electronic device such as a terminal or a server.

Referring to fig. 2, fig. 2 is a schematic flowchart of a method for controlling a virtual object according to an embodiment of the present application. The specific process of the method may be as follows:

Step 101: acquiring motion data of a preset part of a user.

In this embodiment, the motion data of the preset part of the user may be collected by a motion capture device and/or a face capture device arranged at the user; after the motion data is collected, it is transmitted by the motion capture device and/or the face capture device to the electronic device, so that the electronic device obtains the motion data of the preset part of the user. The preset part of the user refers to a key part of the user and may include one or more parts, generally not all parts of the user; for example, the preset part may be the user's head or the user's face (including at least one of the five sense organs). The motion data refers to motion parameters corresponding to the real-time motion of the preset part, such as the deflection angle of the user's head or the degree of opening and closing of the user's eyes.

For example, the motion capture device may be arranged on the user's head, i.e., the preset part of the user is the head; the motion capture device collects the deflection angle of the user's head and transmits it to the electronic device. The face capture device may be arranged at the user's face, i.e., the preset part of the user is one of the five sense organs (e.g., the eyes); the face capture device collects the degree of opening and closing of the user's eyes and transmits it to the electronic device.
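Purely by way of illustration (this sketch and its field names are assumptions of the present description, not part of the claimed method), one frame of such motion data might be represented as a simple record:

```python
from dataclasses import dataclass

@dataclass
class MotionFrame:
    """One frame of captured motion data (hypothetical field names)."""
    part: str        # the user's preset part, e.g. "head" or "left_eye"
    yaw_deg: float   # head deflection angle in degrees
    openness: float  # eye/mouth opening degree, 0.0 (closed) to 1.0 (fully open)

# a frame in which the user's head is turned 15 degrees to the left
frame = MotionFrame(part="head", yaw_deg=-15.0, openness=1.0)
```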

Step 102: controlling the first virtual part of the virtual object to move according to the motion data and the first association relationship between the preset part of the user and the first virtual part of the virtual object.

In this embodiment, the virtual object may serve as a virtual idol or a virtual anchor, and may be a virtual human, a virtual animal, or the like. The virtual object may be rendered for display on a display interface of the electronic device or presented by holographic projection.

The association relationships between parts of the user and virtual parts of the virtual object are established in advance. Since only the motion data of the preset part of the user is acquired in step 101, it suffices to establish the first association relationship between the preset part of the user and the first virtual part of the virtual object. The first association relationship refers to a one-to-one correspondence between the preset part of the user and the first virtual part of the virtual object. The first virtual part may be the same part of the virtual object as the preset part of the user; the preset part may include at least one (one or more) part of the user, the first virtual part may include at least one part of the virtual object, and the parts of the user correspond one-to-one to the parts of the virtual object. For example, the preset part of the user is the user's head, and the first virtual part is the head of the virtual object; or the preset part of the user is the user's face (including at least one of the five sense organs, such as the eyes), and the first virtual part is the face of the virtual object (including at least one of the five sense organs, such as the eyes).

After the motion data of the preset part of the user is acquired, the first virtual part of the virtual object corresponding to the preset part is determined according to the preset first association relationship, and the first virtual part is controlled to move according to the motion data, ensuring that the virtual object moves along with the user. For example, when the user's head deflects, the head of the virtual object deflects synchronously; when the user blinks, the virtual object blinks synchronously.

The virtual object is constructed with a virtual model, which may be a two-dimensional model or a three-dimensional model; the virtual model may also include both at the same time, i.e., a two-dimensional model and a three-dimensional model are constructed for the virtual object respectively, where the virtual object constructed from the two-dimensional model is a two-dimensional virtual object and the one constructed from the three-dimensional model is a three-dimensional virtual object. The two-dimensional model and the three-dimensional model can set control parameters and expression effects according to the same standard; that is, inputting the same control parameters into the two-dimensional model and the three-dimensional model achieves the same expression effect. For example, the motion data of the preset part of the user can be input into the two-dimensional model and the three-dimensional model respectively, controlling each of them to move synchronously with the preset part. The advantage of the two-dimensional model is an appearance closer to the two-dimensional (anime) aesthetic, while the advantage of the three-dimensional model is more refined movement and a higher synchronization rate with the user. Because the two models in this embodiment adopt the same standard, the advantages of both can be combined, while the extra data processing that different standards would require is avoided.

The three-dimensional model and the two-dimensional model each include a set of basic bones, and each set includes a first bone corresponding to the first virtual part. The first bone may include one basic bone or a plurality of basic bones, and the number of basic bones in the first bone may differ for different first virtual parts. This embodiment implements the movement of the first virtual part by controlling the movement of the first bone.

Specifically, the controlling, in step 102, the first virtual part of the virtual object to move according to the motion data and the first association relationship between the preset part of the user and the first virtual part of the virtual object includes: determining bone data of the first bone according to the motion data and the association relationship between the preset part and the first bone corresponding to the first virtual part; and controlling the first virtual part of the virtual object to move according to the bone data of the first bone.

The association relationship between the preset part of the user and the first bone is preset, i.e., the correspondence between the preset part and the first bone is set, thereby establishing the first association relationship between the preset part of the user and the first virtual part of the virtual object. After the motion data of the preset part is acquired, the first bone associated with the preset part is determined according to this association relationship, and the bone data of the first bone is determined from the motion data. The conversion relationship between the motion data and the bone data of the first bone can be preset, so that once the motion data is acquired, the bone data of the first bone can be determined quickly according to the conversion relationship. Inputting the bone data into the first bone makes the first bone move, and the movement of the first bone makes the first virtual part of the virtual object move synchronously with the preset part of the user.

The bone data of the first bone includes at least one of: rotation data, scaling data, and movement data. The rotation data may be a rotation angle; the first bone performs a rotation according to the rotation angle, and the first virtual part rotates with the first bone. The scaling data may be a scaling ratio; the first bone performs a scaling action according to the ratio, and the first virtual part is scaled accordingly. The movement data may be a displacement; the first bone performs a movement according to the displacement, and the first virtual part moves accordingly.
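A minimal sketch of this conversion, assuming a 1:1 mapping from the captured deflection angle to the rotation data of the first bone; the association table and all names here are illustrative, not an implementation prescribed by this application:

```python
from dataclasses import dataclass

@dataclass
class BoneData:
    """Bone data for driving one bone of the virtual model (illustrative)."""
    rotation_deg: float = 0.0  # rotation data
    scale: float = 1.0         # scaling data
    offset: float = 0.0        # movement data (displacement)

# hypothetical first association: preset part of the user -> first-bone name
FIRST_ASSOCIATION = {"head": "head_bone", "left_eyelid": "left_eyelid_bone"}

def motion_to_first_bone(part: str, deflection_deg: float) -> tuple[str, BoneData]:
    """Look up the first bone associated with the preset part and apply an
    assumed identity conversion from deflection angle to rotation data."""
    bone_name = FIRST_ASSOCIATION[part]
    return bone_name, BoneData(rotation_deg=deflection_deg)

# a 15-degree leftward head turn drives the head bone by the same angle
print(motion_to_first_bone("head", -15.0))
```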

For example, when the user's head deflects to the left, the angle of deflection is acquired; the first virtual part of the virtual object is determined to be its head, and the first bone corresponding to the head is determined to be the head bone. Meanwhile, the bone data of the head bone is determined according to the deflection angle, and the bone data is input to the head bone to control its movement, so that the head of the virtual object deflects to the left by the same angle as the user's head. For another example, when the user's left eye opens (the user's left eyelid moves upward), the degree to which the left eye is open is acquired; the first virtual part is determined to be the left eyelid of the virtual object, the corresponding first bone is determined to be the left-eyelid bone, and the bone data of the left-eyelid bone is determined according to the degree of opening. Referring to fig. 3a and 3b, fig. 3a is a schematic diagram illustrating the effect of the left eye of a three-dimensional virtual object opening, and fig. 3b is a schematic diagram illustrating the same effect for a two-dimensional virtual object.

Similarly, when the user's right eye opens, the right eyelid of the virtual object moves upward so that its right eye opens synchronously. When the user's eyes look upward (the eyelids move up relative to their normal open position, so the degree of opening is greater than normal), the eyelids of the virtual object move upward so that its eyes look upward in synchronization. Referring to fig. 4a and 4b, fig. 4a shows the effect of a three-dimensional virtual object's eyes looking upward, and fig. 4b the same for a two-dimensional virtual object. When the user's eyes look down (the eyelids move down relative to the normal open position, so the degree of opening is smaller than normal), the eyelids of the virtual object move downward so that its eyes look down in synchronization. Referring to fig. 5a and 5b, fig. 5a shows the effect of a three-dimensional virtual object's eyes looking down, and fig. 5b the same for a two-dimensional virtual object. When the user looks left and right (the eyeballs move left and right), the eyeballs of the virtual object move left and right so that its eyes look left and right synchronously. When the user's eyebrows move up and down, the eyebrows of the virtual object move up and down synchronously. When the user opens the mouth, the mouth of the virtual object enlarges so that it opens its mouth synchronously.

Step 103: controlling the second virtual part of the virtual object to move according to the second association relationship between the first virtual part and the second virtual part of the virtual object.

In this embodiment, the association relationships between virtual parts of the virtual object are established in advance. Since only the motion data of the preset part of the user is acquired in step 101, it suffices to establish the second association relationship between the first virtual part (the one associated with the preset part of the user) and another virtual part (i.e., the second virtual part). The second association relationship refers to the correspondence between the first virtual part and the second virtual part of the virtual object, together with the binding relationship between the motion state of the first virtual part and the motion state of the corresponding second virtual part. The motion state of the first virtual part includes a first motion parameter, which includes at least one of: a rotation parameter, an opening-and-closing parameter, a swing parameter, a scaling parameter, and the like. For example, if the first virtual part is an eye, its motion state includes an opening-and-closing parameter (a first motion parameter). The motion state of the second virtual part includes a second motion parameter, which includes at least one of: a rotation parameter, an opening-and-closing parameter, a swing parameter, a scaling parameter, and the like. For example, if the second virtual part is an ear, its motion state includes a rotation parameter (a second motion parameter).

The binding relationship between the motion state of the first virtual part and the motion state of the corresponding second virtual part may be a correspondence between the first motion parameter and the second motion parameter. The type of the first motion parameter and the type of the corresponding second motion parameter may differ. For example, the first virtual part is the head, whose motion state is rotation, so the first motion parameter includes a rotation parameter of the head; the corresponding second virtual part is the tail, whose motion state is swinging, so the corresponding second motion parameter includes a swing parameter of the tail.

The degree of change of the first motion parameter may differ from that of the second motion parameter; that is, the binding relationship between the motion state of the first virtual part and the motion state of the corresponding second virtual part may be a proportional relationship between the degree of change of the first motion parameter and the degree of change of the second motion parameter. The degree of change may be a rotation angle, an opening-and-closing extent, a swing amplitude, a scaling ratio, and the like. For example, the first virtual part is an eyeball whose motion state is rotation, and the corresponding second virtual part is the torso whose motion state is rolling. When the degree of change of the first motion parameter is a first angle of eyeball rotation (e.g., 90 degrees clockwise), the corresponding degree of change of the second motion parameter is a first amplitude of torso roll (e.g., 30 degrees to the right); when the degree of change of the first motion parameter is a second angle of eyeball rotation (e.g., 45 degrees counterclockwise), the corresponding degree of change of the second motion parameter is a second amplitude of torso roll (e.g., 15 degrees to the left).

The change frequency of the first motion parameter may also differ from that of the second motion parameter; that is, the binding relationship between the motion state of the first virtual part and the motion state of the corresponding second virtual part may be a correspondence between the change frequency of the first motion parameter and the change frequency of the second motion parameter. The change frequency may be the number of rotations, openings and closings, swings, or scalings within a preset time (e.g., per unit time). For example, the first virtual part is an eye whose motion state is opening and closing (blinking), and the corresponding second virtual part is the tail whose motion state is swinging. When the change frequency of the first motion parameter is five blinks per minute, the change frequency of the second motion parameter is one tail wag per minute.
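The two kinds of binding relationship can be pictured with a small numerical sketch; the one-third degree ratio and the five-blinks-per-wag frequency follow the examples above, while the function names are assumed:

```python
def torso_roll_deg(eyeball_rotation_deg: float) -> float:
    """Proportional binding: the torso's degree of change is one third of the
    eyeball's, matching the 90-degree -> 30-degree example above."""
    return eyeball_rotation_deg / 3.0

def tail_wags(blink_count: int) -> int:
    """Frequency binding: five blinks correspond to one tail wag."""
    return blink_count // 5

assert torso_roll_deg(90.0) == 30.0    # 90 deg clockwise -> 30 deg rightward roll
assert torso_roll_deg(-45.0) == -15.0  # 45 deg counterclockwise -> 15 deg leftward roll
assert tail_wags(5) == 1               # five blinks per minute -> one wag per minute
```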

The first virtual part may include at least one part of the virtual object, the second virtual part may include at least one part of the virtual object, and each part in the first virtual part corresponds to at least one part in the second virtual part. Different first virtual parts may be associated with the same or different second virtual parts. In addition, the second virtual part may be a part that the user does not have, a part not captured by the motion capture device and/or the face capture device, or a part that the user does not wish to synchronize. For example, the preset part of the user is the user's head, the first virtual part is the head of the virtual object, and the second virtual part is the torso and limbs of the virtual object; or the preset part of the user is the user's face (at least one of the five sense organs), the first virtual part is the face of the virtual object (at least one of the five sense organs), and the second virtual part is the ears, tail, and/or wings of the virtual object. The second virtual part may also be other parts, which is not limited here.

When the first virtual part of the virtual object moves, the second virtual part associated with it is determined according to the preset association relationship, and the second virtual part is controlled to move according to the motion state of the first virtual part, so that the second virtual part and the first virtual part move at the same time, ensuring the harmony and flexibility of the movement. For example, when the head of the virtual object deflects, its torso deflects at the same time; when the left eye of the virtual object blinks, its left ear bounces.

The three-dimensional model and the two-dimensional model constructed for the virtual object each include a set of basic bones, and each set also includes a second bone corresponding to the second virtual part. The second bone may include one basic bone or a plurality of basic bones, and the number of basic bones in the second bone may differ for different second virtual parts. This embodiment implements the movement of the second virtual part by controlling the movement of the second bone.

Specifically, the controlling, in step 103, the second virtual part of the virtual object to move according to the second association relationship between the first virtual part and the second virtual part of the virtual object includes: determining bone data of the second bone according to the bone data of the first bone and the preset association relationship between the first bone corresponding to the first virtual part and the second bone corresponding to the second virtual part; and controlling the second virtual part of the virtual object to move according to the bone data of the second bone.

The association relationship between the first bone corresponding to the first virtual part and the second bone corresponding to the second virtual part, i.e., the association relationship between the first bone and the second bone, is preset, thereby establishing the second association relationship between the first virtual part and the second virtual part of the virtual object. After the bone data of the first bone is determined, the action of the first bone is controlled through that bone data, the second bone associated with the first bone is determined according to the association relationship, and the bone data of the second bone is determined from the bone data of the first bone. The conversion relationship between the bone data of the first bone and that of the second bone can be preset, so that once the bone data of the first bone is determined, the bone data of the second bone can be determined quickly according to the conversion relationship. Inputting the bone data of the second bone into the second bone makes the second bone move, and the movement of the second bone realizes the movement of the second virtual part of the virtual object.

The bone data of the second bone includes at least one of: rotation data, scaling data, and movement data. The rotation data may be a rotation angle; the second bone performs a rotation according to the rotation angle, and the second virtual part rotates with the second bone. The scaling data may be a scaling ratio; the second bone performs a scaling action according to the ratio, and the second virtual part is scaled accordingly. The movement data may be a displacement; the second bone performs a movement according to the displacement, and the second virtual part moves accordingly.

The bone data of the first bone may be of a different data type than the bone data of the second bone; for example, the bone data of the first bone may be movement data while that of the second bone is rotation data, or the bone data of the first bone may be rotation data while that of the second bone is rotation data and movement data.

The rotation data, scaling data, and/or movement data of the first bone control the motion state of the first virtual part, and the rotation data, scaling data, and/or movement data of the second bone control the motion state of the second virtual part. By presetting the conversion relationship between the rotation, scaling, and/or movement data of the first bone and those of the second bone, the binding relationship between the motion parameters of the first virtual part and those of the second virtual part can be determined, so that a change in the motion parameters of the first virtual part drives a change in the motion parameters of the second virtual part: the first virtual part moves, and the second virtual part is driven to move with it.
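As a sketch of such a cross-type conversion (the conversion factor and all names are assumptions), movement data of a first bone can drive rotation data of a second bone:

```python
def second_bone_from_first(eyelid_offset: float) -> dict[str, float]:
    """Assumed conversion relationship: movement data of the first bone
    (an eyelid offset) drives rotation data of the second bone (an ear bend)."""
    ear_bend_deg = eyelid_offset * 40.0  # assumed conversion factor
    return {"rotation_deg": ear_bend_deg}

print(second_bone_from_first(0.5))  # an eyelid offset of 0.5 -> a 20-degree ear bend
```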

For example, the first virtual part of the virtual object is its head, the first bone corresponding to the head is the head bone, and the head bone is associated with the torso bone and the limb bones; the second virtual part is therefore determined to be the torso and limbs of the virtual object. When the head of the virtual object deflects to the left, the torso and limbs are controlled to deflect to the left according to the binding relationship between the head deflection angle and the torso/limb deflection angle, with the deflection angle of the torso and limbs smaller than that of the head, as shown in fig. 6a. Similarly, when the head of the virtual object leans backward, the torso is controlled to lean backward slightly according to the binding relationship between the head's backward-lean angle and the torso's backward-lean angle, as shown in fig. 6b; when the head tilts forward, the torso is controlled to tilt forward slightly according to the binding relationship between the head's forward-tilt angle and the torso's forward-tilt angle, as shown in fig. 6c; when the head deflects to the upper left, the torso is controlled to tilt slightly to the upper left according to the binding relationship between the head's upper-left deflection angle and the torso's upper-left deflection angle, as shown in fig. 6d.

It should be noted that the motion amplitudes of the torso and limbs are generally smaller than that of the head; the ratio of the head's motion amplitude to that of the torso and limbs may be, for example, 3:1. For instance, if the amplitude range of the head's side-to-side movement is -30 to 30, where -30 is the extreme position of leftward deflection and 30 is the extreme position of rightward deflection, the corresponding amplitude range of the torso and limbs is -10 to 10.
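Under these stated ranges, one plausible mapping from head bone data to torso/limb bone data (the clamping step and the function name are assumptions) is:

```python
def torso_yaw_from_head(head_yaw_deg: float) -> float:
    """Map the head's side-to-side range [-30, 30] linearly onto the
    torso/limb range [-10, 10], i.e. the 3:1 amplitude ratio above."""
    clamped = max(-30.0, min(30.0, head_yaw_deg))  # respect the head's limits
    return clamped / 3.0

assert torso_yaw_from_head(-30.0) == -10.0  # left extreme maps to left extreme
assert torso_yaw_from_head(45.0) == 10.0    # out-of-range input is clamped first
```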

For another example, the first virtual part of the virtual object is the eyelids of its two eyes, and the corresponding first bones are the eyelid bones (a left-eyelid bone and a right-eyelid bone); the left-eyelid bone is associated with the left-ear bone and the right-eyelid bone with the right-ear bone, so the second virtual part is determined to be the two ears of the virtual object. When the eyes of the virtual object are normally open, its ears are normally upright, as shown in fig. 7a; when the left eyelid moves up and down (the left eye blinks), the left ear bends downward, as shown in fig. 7b; when the right eyelid moves up and down (the right eye blinks), the right ear bends downward, as shown in fig. 7c; when both eyelids move down (the eyes close), both ears bend, as shown in fig. 7d; when the left eyelid continues to move upward relative to the normal open position (the left eye opens to its limit), the left ear stands upright to its limit, as shown in fig. 7e; when both eyelids continue to move upward relative to the normal open position (both eyes open to their limit), both ears stand upright to their limit, as shown in fig. 7f.

It should be noted that each of the left-ear and right-ear bones may include a plurality of basic bones; for example, the left-ear bone includes a parent bone and two child bones, and the left-eyelid bone is associated with these three basic bones so as to control their actions, ensuring that the left ear moves flexibly rather than rigidly.
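One way to picture such a multi-bone ear is a small hierarchy in which each descendant bone follows its parent with a damped angle; the chain arrangement and the damping factor are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Bone:
    name: str
    rotation_deg: float = 0.0
    children: list["Bone"] = field(default_factory=list)

def bend(bone: Bone, angle_deg: float, damping: float = 0.5) -> None:
    """Rotate a bone and let each child follow with a damped angle,
    so the ear bends smoothly instead of moving as one rigid piece."""
    bone.rotation_deg += angle_deg
    for child in bone.children:
        bend(child, angle_deg * damping, damping)

# the parent basic bone and two further basic bones of the left ear,
# arranged here as a chain (one possible reading of the example above)
left_ear = Bone("left_ear_root",
                children=[Bone("left_ear_mid", children=[Bone("left_ear_tip")])])
bend(left_ear, 20.0)  # root turns 20 degrees, mid 10, tip 5 -> a gentle bend
```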

In some embodiments, movement of the first virtual part necessarily triggers movement of its associated second virtual part. In other embodiments, a random parameter may be set so that movement of the first virtual part triggers movement of its associated second virtual part only at random; that is, a movement of the first virtual part sometimes triggers the second virtual part and sometimes does not.

Specifically, the controlling, in step 103, the second virtual part of the virtual object to move according to the preset association relationship between the first virtual part and the second virtual part of the virtual object includes: detecting the number of movements of the first virtual part; and controlling the second virtual part of the virtual object to move according to the preset association relationship between the first virtual part and the second virtual part of the virtual object each time the first virtual part has moved a preset number of times.

When the first virtual part moves frequently, the associated second virtual part moving all the time could cause visual fatigue. This embodiment therefore sets a preset count: the movement of the second virtual part is triggered only when the first virtual part has moved the preset number of times, and if the movement count of the first virtual part has not reached the preset count, the movement of the second virtual part is not triggered.

For example, the first virtual part is the eyelids of the virtual object, which move up and down frequently (caused by the user's frequent blinking), and the second virtual part is the ears of the virtual object. If the ears moved every time the eyes blink, visual fatigue would result; the preset count is therefore set to 3 to 5, i.e., the ears of the virtual object bounce once for every 3 to 5 blinks, which adds interest.
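A counter of this kind might look like the sketch below; re-drawing the preset count from 3 to 5 after each trigger is one possible reading that combines this example with the random parameter mentioned earlier:

```python
import random

class EarBounceTrigger:
    """Trigger an ear bounce once per preset number of blinks (illustrative)."""

    def __init__(self) -> None:
        self.blink_count = 0
        self.preset_count = random.randint(3, 5)

    def on_blink(self) -> bool:
        """Return True when the accumulated blinks reach the preset count."""
        self.blink_count += 1
        if self.blink_count >= self.preset_count:
            self.blink_count = 0
            self.preset_count = random.randint(3, 5)
            return True  # control the ears (second virtual part) to bounce
        return False

trigger = EarBounceTrigger()
bounces = sum(trigger.on_blink() for _ in range(20))  # 4 to 6 bounces in 20 blinks
```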

All the above technical solutions can be combined arbitrarily to form optional embodiments of the present application, which are not described again here.

With the method for controlling a virtual object provided by this embodiment, motion data of a preset part of a user is acquired; a first virtual part of the virtual object is controlled to move according to the motion data and the association relationship between the preset part of the user and the first virtual part; and a second virtual part of the virtual object is controlled to move according to the association relationship between the first virtual part and the second virtual part. When the first virtual part moves along with the preset part of the user, the second virtual part is controlled to move at the same time according to the association relationship between the two virtual parts. In other words, the movement of multiple virtual parts of the virtual object can be controlled merely by acquiring motion data of one preset part of the user, which improves the harmony and flexibility of the virtual object's movement and enriches its expressive effect.

Referring to fig. 8, fig. 8 is another schematic flow chart of a control method for a virtual object according to an embodiment of the present disclosure. The specific process of the method can be as follows:

step 201, constructing a virtual model for a virtual object, wherein the virtual object comprises a first virtual part and a second virtual part, and the virtual model comprises a first bone corresponding to the first virtual part and a second bone corresponding to the second virtual part.

For example, software such as Live2D or 3D/CG modeling tools is used to create the virtual model of the virtual object, so the virtual model may be a three-dimensional model or a two-dimensional model: a three-dimensional model produces a three-dimensional virtual object, and a two-dimensional model produces a two-dimensional virtual object.
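A minimal sketch of the model data described in step 201 (the class names, bone names, and fields are assumptions for illustration, not part of Live2D or any specific toolchain):

```python
from dataclasses import dataclass, field

# A minimal sketch: a virtual model holding a first bone (mouth) and a
# second bone (tail), each carrying the bone data used later in the flow.
@dataclass
class Bone:
    name: str
    rotation: float = 0.0               # rotation data
    scale: float = 1.0                  # scaling data
    position: tuple = (0.0, 0.0)        # movement data

@dataclass
class VirtualModel:
    bones: dict = field(default_factory=dict)

model = VirtualModel(bones={
    "mouth_bone": Bone("mouth_bone"),   # corresponds to the first virtual part
    "tail_bone": Bone("tail_bone"),     # corresponds to the second virtual part
})
```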

Step 202, setting an association relationship between a preset part of a user and a first skeleton corresponding to the first virtual part.

The user preset part and the first virtual part may be the same part; for example, the user preset part is the mouth of the user, and the first virtual part is the mouth of the virtual object.

Step 203, setting the association relationship between the first skeleton corresponding to the first virtual part and the second skeleton corresponding to the second virtual part.

The first virtual part is a different part from the second virtual part; for example, the first virtual part is the mouth of the virtual object and the second virtual part is the tail of the virtual object.
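The two association relationships set up in steps 202 and 203 can be sketched as simple lookup tables (the dict representation, part names, and the 0.5 conversion factor are assumptions for illustration):

```python
# A minimal sketch of the association relationships as lookup tables.

# Step 202: user preset part -> first bone (one-to-one correspondence).
first_association = {
    "user_mouth": "mouth_bone",
}

# Step 203: first bone -> associated second bone(s), with a conversion
# factor standing in for the binding between their motion states.
second_association = {
    "mouth_bone": [("tail_bone", 0.5)],  # tail moves at half the amplitude
}
```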

Step 204, obtaining action data of a part preset by a user.

For example, motion data of the user's mouth, such as the degree of openness of the user's mouth, is acquired.

Step 205, determining bone data of the first bone according to the action data and the association relationship between the user preset part and the first bone corresponding to the first virtual part.

For example, according to the opening degree of the mouth of the user, the bone data of the mouth bone corresponding to the mouth of the virtual object is determined.

Step 206, controlling the movement of the first virtual part of the virtual object according to the bone data of the first bone.

For example, according to the bone data of the mouth bone, the mouth of the virtual object is controlled to move synchronously with the mouth of the user; that is, the virtual object's mouth opens to the same extent as the user's mouth.
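Steps 204 to 206 can be sketched as a direct mapping from the captured mouth openness to the mouth-bone data (a minimal sketch; the 0-to-1 openness convention and names are assumptions):

```python
# A minimal sketch of steps 204-206: captured mouth openness is mapped
# one-to-one onto the mouth bone of the virtual object.

def drive_first_part(user_mouth_openness: float) -> dict:
    """Turn action data (step 204) into first-bone data (step 205)."""
    openness = max(0.0, min(1.0, user_mouth_openness))  # clamp to 0..1
    mouth_bone_data = {"open": openness}
    # Step 206: applying this bone data opens the virtual mouth to the
    # same extent as the user's mouth.
    return mouth_bone_data

print(drive_first_part(0.8))  # {'open': 0.8}
```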

And step 207, determining bone data of the second bone according to the bone data of the first bone and the association relationship between the first bone corresponding to the first virtual part and the second bone corresponding to the second virtual part.

For example, the bone data of the tail bone corresponding to the tail of the virtual object is determined according to the bone data of the mouth bone corresponding to the mouth of the virtual object and the conversion relation between the bone data of the mouth bone and the bone data of the tail bone.

And step 208, controlling the movement of the second virtual part of the virtual object according to the bone data of the second bone.

For example, the tail swing of the virtual object is controlled according to the bone data of the tail bone, so that the mouth and tail movements of the virtual object can be controlled simultaneously merely by acquiring the motion data of the user's mouth, improving the harmony and flexibility of the movement of the virtual object.
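Steps 207 and 208 can be sketched as a conversion from mouth-bone data to tail-bone data (a minimal sketch; the proportional conversion relation and the 45-degree maximum swing are assumptions):

```python
# A minimal sketch of steps 207-208: tail-bone data is derived from
# mouth-bone data through a simple proportional conversion relation.

MAX_TAIL_SWING_DEG = 45.0  # illustrative maximum swing angle

def drive_second_part(mouth_bone_data: dict) -> dict:
    """Convert first-bone data into second-bone data (step 207)."""
    tail_bone_data = {"swing_deg": mouth_bone_data["open"] * MAX_TAIL_SWING_DEG}
    # Step 208: applying this bone data swings the tail together with
    # the mouth movement.
    return tail_bone_data

print(drive_second_part({"open": 0.8}))  # {'swing_deg': 36.0}
```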

All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.

According to the control method of the virtual object provided by the embodiment of the application, when the first virtual part of the virtual object moves along with the user preset part, the second virtual part is controlled to move at the same time according to the preset association relationship between the first virtual part and the second virtual part of the virtual object; that is, the movement of a plurality of virtual parts of the virtual object can be controlled merely by acquiring the action data of the user preset part, which improves the harmony and flexibility of the movement of the virtual object and enriches the expression effect.

In order to better implement the method for controlling a virtual object according to the embodiments of the present application, an embodiment of the present application further provides a device for controlling a virtual object. Referring to fig. 9, fig. 9 is a schematic structural diagram of a control device for a virtual object according to an embodiment of the present disclosure. The control apparatus 300 of the virtual object may include:

an obtaining module 301, configured to obtain motion data of a preset part of a user;

a first control module 302, configured to control a first virtual part of the virtual object to move according to the motion data and a first association relationship between the user preset part and the first virtual part of the virtual object;

a second control module 303, configured to control a second virtual part of the virtual object to move according to a second association relationship between the first virtual part and the second virtual part of the virtual object.

In this embodiment, the motion data of the user preset part may be collected by a motion capture device and/or a face capture device arranged at the user's side; after collection, the capture device transmits the motion data to the electronic device, so that the electronic device obtains the motion data of the user preset part. The user preset part refers to one or more key parts of the user, generally not all parts of the user; for example, the user preset part may be the user's head or face. The motion data refers to motion parameters corresponding to the real-time motion of the user preset part, such as the deflection angle of the user's head or the opening and closing degree of the user's eyes.
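As a rough illustration, the payload such a capture device might deliver could look as follows (field names and units are assumptions, not a real capture-device protocol):

```python
# A minimal sketch of a motion-data payload from a capture device.
motion_data = {
    "head_yaw_deg": 12.5,         # deflection angle of the user's head
    "left_eye_openness": 0.9,     # 0 = closed, 1 = fully open
    "right_eye_openness": 0.85,
    "mouth_openness": 0.4,
}
```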

The association relationship between each part of the user and each virtual part of the virtual object is established in advance; since only the motion data of the user preset part is acquired in step 101, it may suffice to establish only the association relationship between the user preset part and the first virtual part of the virtual object. The first virtual part refers to the part of the virtual object that is the same as the user preset part, and it may include one or more parts of the virtual object.

After the action data of the user preset part is obtained, the first virtual part associated with the user preset part is determined according to the preset association relationship, and the first virtual part of the virtual object is controlled to move according to the action data, ensuring that the virtual object moves along with the user.

Likewise, the association relationship between each virtual part and the other virtual parts of the virtual object is established in advance; since only the motion data of the user preset part is acquired in step 101, it may suffice to establish only the association relationship between the first virtual part associated with the user preset part and the other virtual parts (i.e., the second virtual part). The second virtual part may include one or more parts of the virtual object, and different first virtual parts may be associated with the same or different second virtual parts.

When the first virtual part of the virtual object moves, the second virtual part associated with it is determined according to the preset association relationship, and the second virtual part is controlled to move according to the motion state of the first virtual part, so that the second virtual part and the first virtual part move simultaneously, ensuring the harmony and flexibility of the motion.

Optionally, the first association relationship is a one-to-one correspondence relationship between the user preset part and a first virtual part of the virtual object;

the second association relationship is a corresponding relationship between a first virtual part and a second virtual part of the virtual object, and a binding relationship between a motion state of the first virtual part and a motion state of the corresponding second virtual part.

Optionally, the motion state of the first virtual part includes a first motion parameter, the first motion parameter including at least one of: rotation parameters, opening and closing parameters, swing parameters and scaling parameters; the motion state of the second virtual part comprises a second motion parameter comprising at least one of: rotation parameters, opening and closing parameters, swing parameters and scaling parameters;

the binding relationship is a corresponding relationship between the first motion parameter and the second motion parameter.

Optionally, the binding relationship is a proportional relationship between a variation degree of the first motion parameter and a variation degree of the second motion parameter.

Optionally, the binding relationship is a corresponding relationship between a change frequency of the first motion parameter and a change frequency of the second motion parameter.
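The two binding-relationship variants can be sketched as follows (a minimal sketch; the ratios and function names are assumptions for illustration):

```python
# A minimal sketch of the two binding relationships: a proportional
# relation between degrees of change, and a correspondence between
# change frequencies.

def bind_by_degree(first_change_deg: float, ratio: float = 0.5) -> float:
    """The second parameter changes by a fixed proportion of the first."""
    return first_change_deg * ratio

def bind_by_frequency(first_freq_hz: float, ratio: float = 2.0) -> float:
    """The second parameter changes at a frequency tied to the first."""
    return first_freq_hz * ratio

print(bind_by_degree(30.0))    # 15.0: second part swings half as far
print(bind_by_frequency(1.0))  # 2.0: second part oscillates twice as fast
```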

Optionally, the virtual object is constructed with a virtual model comprising a first bone corresponding to the first virtual location;

the first control module 302 is further configured to:

determining bone data of the first bone according to the action data and the association relationship between the user preset part and the first bone corresponding to the first virtual part;

controlling a first virtual part motion of the virtual object according to the bone data of the first bone.

Optionally, the virtual model further comprises a second bone corresponding to the second virtual site;

the second control module 303 is further configured to:

determining bone data of the second bone according to the bone data of the first bone and the association relationship between the first bone corresponding to the first virtual part and the second bone corresponding to the second virtual part;

and controlling the second virtual part of the virtual object to move according to the bone data of the second bone.

Optionally, the bone data comprises at least one of: rotating data, scaling data, moving data.

Optionally, the second control module 303 is further configured to:

detecting the number of movements of the first virtual part;

and each time the first virtual part has moved a preset number of times, controlling the second virtual part of the virtual object to move according to the second association relationship between the first virtual part and the second virtual part of the virtual object.

Optionally, the virtual model comprises at least one of: three-dimensional models, two-dimensional models.

Optionally, the preset portion is a head, the first virtual portion is a head, and the second virtual portion includes at least one of: trunk, limbs, ears, tail, wings.

Optionally, the preset part and the first virtual part are one of the facial features (the five sense organs), and the second virtual part includes at least one of: a torso, a limb, a tail, a wing, or a facial feature different from the first virtual part.

All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.

The control device of the virtual object provided by the embodiment of the application acquires action data of a preset part of a user; controls the first virtual part of the virtual object to move according to the action data and the association relationship between the user preset part and the first virtual part of the virtual object; and controls the second virtual part of the virtual object to move according to the association relationship between the first virtual part and the second virtual part of the virtual object. When the first virtual part of the virtual object moves along with the user preset part, the second virtual part is controlled to move at the same time according to the association relationship between the first virtual part and the second virtual part; that is, the movement of a plurality of virtual parts of the virtual object can be controlled merely by acquiring the action data of the user preset part, which improves the harmony and flexibility of the movement of the virtual object and enriches the expression effect.

Correspondingly, the embodiment of the present application further provides an electronic device, where the electronic device may be a terminal or a server, and the terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game machine, a Personal Computer (PC), or a Personal Digital Assistant (PDA). As shown in fig. 10, fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 400 includes a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, and a computer program stored on the memory 402 and executable on the processor. The processor 401 is electrically connected to the memory 402. Those skilled in the art will appreciate that the electronic device configuration shown in the figures does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.

The processor 401 is the control center of the electronic device 400; it connects the various parts of the whole electronic device 400 through various interfaces and lines, and performs the various functions of the electronic device 400 and processes data by running or loading software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the electronic device 400 as a whole.

In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to processes of one or more application programs into the memory 402 according to the following steps, and the processor 401 runs the application programs stored in the memory 402, so as to implement various functions:

acquiring action data of a preset part of a user; controlling the first virtual part of the virtual object to move according to the action data and the association relationship between the user preset part and the first virtual part of the virtual object; and controlling the second virtual part of the virtual object to move according to the association relationship between the first virtual part and the second virtual part of the virtual object.

The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.

Optionally, as shown in fig. 10, the electronic device 400 further includes: a touch display screen 403, a radio frequency circuit 404, an audio circuit 405, an input unit 406 and a power supply 407. The processor 401 is electrically connected to the touch display screen 403, the radio frequency circuit 404, the audio circuit 405, the input unit 406, and the power supply 407. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 10 does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.

The touch display screen 403 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display screen 403 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user and the various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The touch panel may be used to collect touch operations of the user on or near it (for example, operations performed by the user on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions that trigger the corresponding programs. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position and orientation of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 401, and can receive and execute commands sent by the processor 401. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it transmits the operation to the processor 401 to determine the type of the touch event, and the processor 401 then provides a corresponding visual output on the display panel according to the type of the touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 403 to realize the input and output functions; however, in some embodiments, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display screen 403 may also serve as part of the input unit 406 to implement an input function.

In the embodiment of the present application, a graphical user interface is generated on the touch display screen 403 by the processor 401 executing animation generation software. The touch display screen 403 is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface.

The radio frequency (RF) circuit 404 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or other electronic devices, and to exchange signals with the network device or the other electronic devices.

The audio circuit 405 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone. On one hand, the audio circuit 405 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 405 receives and converts into audio data. The audio data is then processed by the processor 401 and sent, for example, to another electronic device via the RF circuit 404, or output to the memory 402 for further processing. The audio circuit 405 may also include an earphone jack to provide communication between a peripheral headset and the electronic device.

The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.

The power supply 407 is used to power the various components of the electronic device 400. Optionally, the power supply 407 may be logically connected to the processor 401 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system. The power supply 407 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other such component.

Although not shown in fig. 10, the electronic device 400 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.

In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.

To this end, the present application provides a computer-readable storage medium, in which a plurality of computer programs are stored, and the computer programs can be loaded by a processor to execute the steps in any one of the methods for controlling a virtual object provided in the present application. For example, the computer program may perform the steps of:

acquiring action data of a preset part of a user; controlling the first virtual part of the virtual object to move according to the action data and the association relationship between the user preset part and the first virtual part of the virtual object; and controlling the second virtual part of the virtual object to move according to the association relationship between the first virtual part and the second virtual part of the virtual object.

The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.

Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.

Since the computer program stored in the storage medium can execute the steps in any method for controlling a virtual object provided in the embodiments of the present application, beneficial effects that can be achieved by any method for controlling a virtual object provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.

The foregoing is a detailed description of the method, apparatus, storage medium, and electronic device for controlling a virtual object provided in the embodiments of the present application. Specific examples are applied herein to explain the principle and implementation of the present application, and the description of the foregoing embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.
