Interactive game generation method and apparatus, interactive game processing method and apparatus, and electronic device

Document No.: 1148893 | Publication date: 2020-09-15

Abstract: This technology, "Interactive game generation method and apparatus, interactive game processing method and apparatus, and electronic device", was designed and created by 吴峰 (Wu Feng) and 田一为 (Tian Yiwei) on 2019-03-08. Disclosed are an interactive game generation method, an interactive game processing method, an interactive game generation apparatus, an interactive game processing apparatus, and an electronic device. The interactive game generation method comprises: recognizing an action of a first user, the action comprising at least one of an expression, a gesture, and a posture of the first user; determining attributes of a moving object in an interactive game based on the recognition result of the first user's action; and generating the interactive game based on the attributes of the moving object. In this way, interaction between users is realized, and the users' experience of the interactive game is improved.

1. An interactive game generation method, comprising:

identifying a first user's action, the first user's action comprising at least one of an expression, a gesture, and a posture of the first user;

determining attributes of a moving object in an interactive game based on a recognition result of the first user's action; and

generating the interactive game based on the attributes of the moving object.

2. The interactive game generation method of claim 1, wherein identifying the action of the first user comprises:

recording a video of the action performed by the first user;

in the video recording process, identifying the action of the first user; and

playing the video containing the first user's actions.

3. The interactive game generation method of claim 1, wherein determining attributes of a moving object in an interactive game based on the recognition result of the first user's action comprises:

determining a predetermined attribute of a moving object in an interactive game based on a recognition result of the action of the first user; and

determining attributes of the moving object other than the predetermined attribute in a default manner.

4. The interactive game generation method of claim 1, wherein generating the interactive game based on the attributes of the moving object comprises:

generating the interactive game based on the attributes of the moving object and other moving objects that are randomly generated.

5. The interactive game generation method of claim 1, further comprising:

determining a point in time for each of a plurality of actions of the first user; and

generating the interactive game based on the attributes of the moving object includes:

generating the interactive game based on the time point of each action and the attributes of the moving object corresponding to each action.

6. The interactive game generation method of claim 1, wherein identifying the action of the first user comprises:

performing action pre-recognition on the first user; and

in response to the action pre-recognition being successful, starting to recognize the action of the first user.

7. The interactive game generation method of claim 1, further comprising, after generating the interactive game based on the attributes of the moving object:

presenting the generated interactive game to the first user through a game presentation page.

8. The interactive game generation method of claim 7, wherein the game presentation page comprises at least one of:

a thumbnail of the generated interactive game;

at least a portion of the attributes of the moving object; and

a save option for saving the generated interactive game.

9. The interactive game generation method of claim 1, further comprising, after generating the interactive game based on the attributes of the moving object:

presenting the generated interactive game in a preview mode.

10. The interactive game generation method of claim 1, further comprising, after generating the interactive game based on the attributes of the moving object:

assigning, by the first user, the generated interactive game to a specific user or a non-specific user.

11. An interactive game processing method, comprising:

acquiring an interactive game generated according to any one of claims 1 to 10;

identifying an action of a second user;

determining whether a condition of the moving object is met based on a recognition result of the second user's action; and

obtaining a processing result of the interactive game based on the determination result.

12. The interactive game processing method of claim 11, further comprising:

after the interactive game is finished, presenting the processing result to the second user in a result presentation page, wherein the result presentation page further comprises a generation option for the second user to generate the interactive game.

13. An interactive game generation apparatus, comprising:

an action recognition unit for recognizing an action of a first user, wherein the action of the first user comprises at least one of an expression, a gesture, and a posture of the first user;

an attribute determination unit for determining attributes of a moving object in an interactive game based on a recognition result of the first user's action; and

a game generation unit for generating the interactive game based on the attributes of the moving object.

14. An interactive game processing apparatus, comprising:

an acquisition unit for acquiring an interactive game generated by the interactive game generation apparatus of claim 13;

an identifying unit for identifying an action of a second user;

a determination unit for determining whether a condition of the moving object is met based on a recognition result of the second user's action; and

a processing unit for obtaining a processing result of the interactive game based on the determination result.

15. An electronic device, comprising:

a processor; and

a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the interactive game generation method of any one of claims 1-10.

16. An electronic device, comprising:

a processor; and

a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the interactive game processing method of claim 11 or 12.

Technical Field

The present application relates to the field of interactive game processing, and more particularly, to an interactive game generation method, an interactive game processing method, an interactive game generation apparatus, an interactive game processing apparatus, and an electronic device.

Background

In recent years, casual brain-training interactive mini-games have become popular, for example interactive quizzes, guessing games, and the like. Such interactive mini-games have also long been a staple segment of variety shows.

A few years ago, when fruit-cutting games were popular, they were played on mobile phones by finger-sliding touch, and on large-screen terminals motion-sensing versions let players cut fruit with body movements. However, the game segments of these games are preset; that is, the participating users interact only with content set by the system.

That is, because the user interacts only with the system throughout play, the user's interactive experience is limited to some extent.

Accordingly, it is desirable to provide an improved interactive game scheme.

Disclosure of Invention

The present application has been made to solve the above technical problems. Embodiments of the present application provide an interactive game generation method, an interactive game processing method, an interactive game generation apparatus, an interactive game processing apparatus, and an electronic device, which determine attributes of a moving object in an interactive game by recognizing a user's actions and generate the interactive game accordingly, thereby realizing interaction between users and improving the users' experience of the interactive game.

According to an aspect of the present application, there is provided an interactive game generation method, including: identifying a first user's action, the first user's action comprising at least one of an expression, a gesture, and a posture of the first user; determining attributes of a moving object in an interactive game based on a recognition result of the first user's action; and generating the interactive game based on the attributes of the moving object.

In the above interactive game generating method, recognizing the action of the first user includes: recording a video of the action performed by the first user; in the video recording process, identifying the action of the first user; and playing the video containing the first user's action.

In the above interactive game generation method, determining the attributes of the moving object in the interactive game based on the recognition result of the first user's action includes: determining a predetermined attribute of a moving object in the interactive game based on the recognition result of the first user's action; and determining attributes of the moving object other than the predetermined attribute in a default manner.

In the above interactive game generation method, generating the interactive game based on the attribute of the moving object includes: generating the interactive game based on the attributes of the moving object and other moving objects that are randomly generated.

In the above interactive game generation method, the method further includes: determining a point in time for each of a plurality of actions of the first user; and generating the interactive game based on the attributes of the moving object comprises: generating the interactive game based on the time point of each action and the attributes of the moving object corresponding to each action.

In the above interactive game generating method, recognizing the action of the first user includes: performing action pre-recognition on the first user; and in response to the action pre-recognition being successful, beginning to recognize an action of the first user.

In the above interactive game generation method, after generating the interactive game based on the attributes of the moving object, the method further includes: presenting the generated interactive game to the first user through a game presentation page.

In the above interactive game generation method, the game presentation page includes at least one of: a thumbnail of the generated interactive game; at least a portion of the attributes of the moving object; and a save option for saving the generated interactive game.

In the above interactive game generation method, after generating the interactive game based on the attributes of the moving object, the method further includes: presenting the generated interactive game in a preview mode.

In the above interactive game generating method, after generating the interactive game based on the attribute of the moving object, the method further includes: assigning, by the first user, the generated interactive game to a specific user or a non-specific user.

According to another aspect of the present application, there is provided an interactive game processing method, including: acquiring the interactive game generated as described above; identifying an action of a second user; determining whether a condition of the moving object is met based on a recognition result of the second user's action; and obtaining a processing result of the interactive game based on the determination result.

In the above interactive game processing method, the method further comprises: after the interactive game is finished, presenting the processing result to the second user in a result presentation page, wherein the result presentation page further comprises a generation option for the second user to generate the interactive game.

According to still another aspect of the present application, there is provided an interactive game generation apparatus, including: an action recognition unit for recognizing an action of a first user, wherein the action of the first user comprises at least one of an expression, a gesture, and a posture of the first user; an attribute determination unit for determining attributes of a moving object in an interactive game based on a recognition result of the first user's action; and a game generation unit for generating the interactive game based on the attributes of the moving object.

In the above interactive game generation apparatus, the action recognition unit is configured to: record a video of the action performed by the first user; identify the action of the first user during the video recording; and play the video containing the first user's action.

In the above interactive game generation apparatus, the attribute determination unit is configured to: determine a predetermined attribute of a moving object in the interactive game based on the recognition result of the first user's action; and determine attributes of the moving object other than the predetermined attribute in a default manner.

In the above interactive game generation apparatus, the game generation unit is configured to: generate the interactive game based on the attributes of the moving object and other, randomly generated moving objects.

In the above interactive game generation apparatus, the apparatus further includes: a time determination unit for determining a time point of each of a plurality of actions of the first user; and the game generation unit is configured to generate the interactive game based on the time point of each action and the attributes of the moving object corresponding to each action.

In the above interactive game generation apparatus, the action recognition unit is configured to: perform action pre-recognition on the first user; and in response to the action pre-recognition being successful, begin to recognize the action of the first user.

In the above interactive game generation apparatus, the apparatus further includes: a game presentation unit for presenting the generated interactive game to the first user through a game presentation page after the interactive game is generated based on the attributes of the moving object.

In the above interactive game generation apparatus, the game presentation page includes at least one of: a thumbnail of the generated interactive game; at least a portion of the attributes of the moving object; and a save option for saving the generated interactive game.

In the above interactive game generation apparatus, the apparatus further includes: a game preview unit for presenting the generated interactive game in a preview mode after the interactive game is generated based on the attributes of the moving object.

In the above interactive game generation apparatus, the apparatus further includes: a game assignment unit for assigning, by the first user, the generated interactive game to a specific user or a non-specific user after the interactive game is generated based on the attributes of the moving object.

According to still another aspect of the present application, there is provided an interactive game processing apparatus, including: an acquisition unit for acquiring the generated interactive game; an identifying unit for identifying an action of a second user; a determination unit for determining whether a condition of the moving object is met based on a recognition result of the second user's action; and a processing unit for obtaining a processing result of the interactive game based on the determination result.

In the above interactive game processing apparatus, the apparatus further includes: a presentation unit for presenting the processing result to the second user in a result presentation page after the interactive game is finished, wherein the result presentation page further comprises a generation option for the second user to generate an interactive game.

According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the interactive game generation method as described above.

According to yet another aspect of the present application, there is provided an electronic device including: a processor; and a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the interactive game processing method as described above.

According to yet another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the interactive game generation method as described above.

According to yet another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the interactive game processing method as described above.

According to the interactive game generation method, the interactive game processing method, the interactive game generation apparatus, the interactive game processing apparatus, and the electronic device of the present application, the attributes of the moving object in the interactive game are determined by recognizing the user's actions, and the interactive game is generated accordingly, thereby realizing interaction between users and improving the users' experience of the interactive game.

Drawings

The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.

FIG. 1 illustrates a flow chart of an interactive game generation method according to an embodiment of the present application.

FIG. 2 illustrates a schematic diagram of a game presentation page of an interactive game generation method according to an embodiment of the present application.

FIG. 3 illustrates a flow chart of an interactive game processing method according to an embodiment of the present application.

Fig. 4 is a schematic diagram illustrating an application example of an interactive game generation method and an interactive game processing method according to an embodiment of the present application.

Fig. 5 illustrates a block diagram of an interactive game generation apparatus according to an embodiment of the present application.

FIG. 6 illustrates a block diagram of an interactive game processing apparatus according to an embodiment of the present application.

FIG. 7 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.

Fig. 8 illustrates an exemplary cloud architecture according to an embodiment of the present application.

Fig. 9 illustrates a schematic diagram of abstraction functional layers of a cloud architecture according to an embodiment of the present application.

Detailed Description

Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.

Summary of the application

As described above, in current interactive games, users all participate in game segments preset by the system, which results in a poor interactive experience.

In view of the above technical problems, the basic idea of the present application is to determine the attributes of moving objects in an interactive game by recognizing a user's actions and to generate the interactive game accordingly, so that users can set up the interactive game themselves by performing actions.

Specifically, the interactive game generation method, the interactive game generation apparatus, and the electronic device provided by the application first identify an action of a first user, where the action of the first user includes at least one of an expression, a gesture, and a posture of the first user, then determine an attribute of a moving object in an interactive game based on an identification result of the action of the first user, and finally generate the interactive game based on the attribute of the moving object.

Accordingly, the interactive game processing method, the interactive game processing apparatus, and the electronic device provided by the present application first acquire the interactive game generated as described above, then recognize an action of a second user, determine whether a condition of the moving object is met based on the recognition result of the second user's action, and finally obtain a processing result of the interactive game based on the determination result.

Therefore, the interactive segments of the game can be set using intelligent technologies such as facial expression recognition, gesture recognition, and body posture recognition, so that users get a motion-sensing game experience while generating the interactive game.

In addition, since at least some of the interactive segments of the game are set by users, the drawback of traditional indirect-competition games, in which the interactive segments are set entirely by the system, is overcome; interaction between users is realized, and the users' experience of the interactive game is improved.

In addition, because users participate in generating the interactive game, they have more motivation to promote it. This leverages the social relationships and competitive psychology between users, consolidates and visualizes the relationship and result data among interacting users, and stimulates and guides social communication.

It should be noted that, in the interactive game generation method, the interactive game processing method, the interactive game generation apparatus, the interactive game processing apparatus, and the electronic device provided in the present application, the interactive game may be any game in which users interact with each other through actions. It is not limited to the fruit-cutting game described above and may also include, for example, whack-a-mole, sports games, and the like.

Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.

Exemplary method

FIG. 1 illustrates a flow chart of an interactive game generation method according to an embodiment of the present application.

As shown in fig. 1, an interactive game generation method according to an embodiment of the present application includes: s110, recognizing the action of a first user, wherein the action of the first user comprises at least one of the expression, the gesture and the posture of the first user; s120, determining the attribute of the moving object in the interactive game based on the identification result of the action of the first user; and S130, generating the interactive game based on the attributes of the moving object.

In step S110, an action of a first user is recognized, where the action of the first user includes at least one of an expression, a gesture, and a posture of the first user. Here, the first user is the user who wants to generate an interactive game, and the process of the first user generating the interactive game may be colloquially referred to as "issuing a challenge".

Specifically, in the embodiment of the present application, the first user's request for issuing a challenge, that is, for generating the interactive game, may first be received. Then, in response to the request, a viewfinder window is opened and the first user is prompted to make any action.

Here, the action of the first user may include any expression, gesture, or posture. Accordingly, the action of the first user may be recognized through expression recognition, gesture recognition, and posture recognition. By scanning the face, then tracking and extracting feature points, an algorithm based on point changes can determine whether a specific expression has been made, thereby recognizing the expression. Whether the user has made a specific gesture can be determined by detecting and tracking the user's hand skeleton points. Likewise, by scanning and tracking key points of a person's body joints and extracting point-position features, whether a specific posture has been made can be calculated and determined.
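As an illustration of the point-change approach just described, the following minimal Python sketch detects a "mouth open" expression from tracked facial feature points, assuming some face-scanning front end has already extracted the landmark coordinates. The landmark names and the threshold are assumptions for illustration, not values from this disclosure.

```python
import math

def dist(p, q):
    """Euclidean distance between two 2D feature points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_mouth_open(landmarks, threshold=0.35):
    """Judge the 'mouth open' expression from point changes: the lip gap
    is compared with the face height so the test does not depend on how
    far the face is from the camera."""
    lip_gap = dist(landmarks["upper_lip"], landmarks["lower_lip"])
    face_height = dist(landmarks["forehead"], landmarks["chin"])
    return lip_gap / face_height > threshold

# Usage: feed the tracked points of one frame.
frame = {"upper_lip": (100, 120), "lower_lip": (100, 165),
         "forehead": (100, 80), "chin": (100, 200)}
print(is_mouth_open(frame))  # True: the lip gap is large relative to the face
```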

When recognizing the first user's action, in order to let the user clearly see the action being made and conveniently review it afterwards, a video of the first user performing the action may be recorded and played to the user while the action is being made. Here, the video may be played to the user as it is being recorded, and the recorded and played video may further include the picture of the interactive game itself, for example a video that simultaneously presents the user's action and the game picture in picture-in-picture form. In addition, the video may include special-effect visuals, background sound, and the like.

That is, in the interactive game generation method according to the embodiment of the present application, recognizing the action of the first user includes: recording a video of the action performed by the first user; identifying the action of the first user during the video recording; and playing the video containing the first user's action.

In addition, before formally starting to recognize the first user's action, action pre-recognition may first be performed on the first user to improve recognition accuracy. For example, the user may be guided through face recognition by presenting a face prompt box, and scanning of facial points is started to determine whether facial feature vectors can be extracted. If facial feature vectors are extracted, that is, pre-recognition is determined to be successful, recognition of the user's actions can begin. If it is unsuccessful, possibly because the distance between the face and the screen is inappropriate, the user can be prompted to move a little farther away or closer, and recognition is attempted again.
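A minimal sketch of this pre-recognition loop follows, with a stubbed face scanner standing in for the real feature-vector extraction; the frame format, the retry policy, and the face-size heuristic for choosing the prompt are all assumptions for illustration.

```python
def extract_face_features(frame):
    """Stub face scanner: returns a feature vector, or None on failure."""
    return frame.get("features")

def pre_recognize(frames):
    """Try to extract facial feature vectors before formal action
    recognition starts, prompting the user to adjust distance on failure."""
    for frame in frames:
        features = extract_face_features(frame)
        if features is not None:
            return features                     # pre-recognition succeeded
        # Extraction failed, possibly because the face-to-screen distance
        # is inappropriate; prompt accordingly and try the next frame.
        hint = "farther away" if frame.get("face_area", 0) > 0.8 else "closer"
        print(f"Please move a little {hint} and try again")
    return None                                 # pre-recognition failed

# Usage: the third simulated frame finally yields a feature vector.
frames = [{"face_area": 0.9}, {"face_area": 0.1},
          {"face_area": 0.4, "features": [0.12, 0.87, 0.33]}]
print(pre_recognize(frames))  # [0.12, 0.87, 0.33]
```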

That is, in the interactive game generation method according to the embodiment of the present application, recognizing the action of the first user includes: performing action pre-recognition on the first user; and in response to the action pre-recognition being successful, beginning to recognize the action of the first user.

In step S120, attributes of the moving object in the interactive game are determined based on the recognition result of the first user's action. That is, unlike the conventional approach in which the system sets everything, in the embodiment of the present application the user can set the attributes of a moving object in the interactive game through his or her own actions. Moreover, different attributes of the moving object may be set in different games.

For example, in an interactive fruit-cutting game, the moving object, i.e., the type of fruit, may be set by the first user's action. In one example, the user may make the following specified actions: opening the mouth, blinking, shaking the head, nodding or raising the head, smiling, pouting, puckering the lips, waving a hand, stretching out a finger, making a finger heart, holding out a palm, making an OK sign, and making a pistol gesture, which correspond respectively to the following elements: watermelon, orange, pineapple, coconut, mango, banana, peach, apple, kiwi, pear, lemon, strawberry, and bomb. Of course, those skilled in the art will appreciate that the user may make any expression, gesture, or posture while issuing the challenge, and an element is generated only when the user makes one of the specified expressions, gestures, or postures.

When the video is played to the user and the user makes a specified action, the corresponding element can be presented in the played video, as if the user had summoned the element through the specified action. In this example, the user's specified action determines the type of the summoned element, while the other attributes of each element, such as initial position, motion trajectory, and motion speed, may be determined in a default manner, e.g., assigned randomly.

For example, each element appears by shooting upward from the bottom of the window, falls after reaching the highest point of its motion trajectory, and falls under a physical acceleration. The motion trajectory may be one of 12 preset trajectories, and the motion speed may be one of two tiers, high and medium. After an element appears, its motion trajectory and motion speed are assigned randomly with equal probability.
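Putting the two paragraphs above together, the following sketch maps a recognized specified action to an element type and randomly assigns the remaining attributes, one of 12 preset trajectories and one of two speed tiers with equal probability. The action identifiers and the dataclass fields are an assumed representation; the disclosure does not prescribe a data format.

```python
import random
from dataclasses import dataclass
from typing import Optional

# The specified-action-to-element mapping from the example above.
ACTION_TO_ELEMENT = {
    "mouth_open": "watermelon", "blink": "orange", "head_shake": "pineapple",
    "nod": "coconut", "smile": "mango", "pout": "banana", "pucker": "peach",
    "hand_wave": "apple", "finger_stretch": "kiwi", "finger_heart": "pear",
    "palm_up": "lemon", "ok_sign": "strawberry", "pistol": "bomb",
}

@dataclass
class MovingObject:
    kind: str          # predetermined attribute set by the user's action
    time_point: float  # when the summoning action was recognized
    trajectory: int    # index of one of the 12 preset trajectories
    speed: str         # "high" or "medium"

def summon(action: str, time_point: float) -> Optional[MovingObject]:
    """Return a moving object for a specified action, or None when the
    action is not one of the specified expressions/gestures/postures."""
    kind = ACTION_TO_ELEMENT.get(action)
    if kind is None:
        return None
    return MovingObject(kind, time_point,
                        trajectory=random.randrange(12),        # equal probability
                        speed=random.choice(["high", "medium"]))

print(summon("smile", 4.2))  # e.g. MovingObject(kind='mango', time_point=4.2, ...)
```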

In addition, for other types of interactive games, the corresponding attributes of the moving objects in the game may also be determined based on the recognition result of the first user's action. For example, in a whack-a-mole game, the position where a mole appears may be determined based on the recognition result of the first user's action: the user's gestures may be recognized as digits 1 to 9, with each digit corresponding to a position where a mole appears. Likewise, in a penalty-kick game, the direction in which the ball is shot may be determined based on the recognition result of the first user's action, while other attributes such as the ball's speed may be determined in a default manner.
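For the whack-a-mole variant, this reduces to a lookup from the recognized digit gesture to a hole position; the 3x3 layout of holes assumed below is purely illustrative.

```python
def mole_position(digit: int, columns: int = 3):
    """Map a digit gesture recognized as 1-9 to a (row, column) hole
    in an assumed 3x3 grid of mole holes."""
    if not 1 <= digit <= 9:
        raise ValueError("gesture must be recognized as a digit 1-9")
    return divmod(digit - 1, columns)  # (row, column)

print(mole_position(5))  # (1, 1): the centre hole
```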

Therefore, in the interactive game generation method according to the embodiment of the present application, determining the attributes of the moving object in the interactive game based on the recognition result of the first user's action includes: determining a predetermined attribute of a moving object in the interactive game based on the recognition result of the first user's action; and determining attributes of the moving object other than the predetermined attribute in a default manner.

In step S130, the interactive game is generated based on the attributes of the moving object. As described above, by determining the attributes of the moving object based on the recognition result of the first user's action, the interactive game can be generated based on those attributes.

For example, for the fruit-cutting game described above, an interactive fruit-cutting game can be generated by determining the types of fruits that appear based on the first user's actions and assigning each fruit's motion trajectory and motion speed randomly. Here, those skilled in the art will understand that the interactive game includes the level's data for various attributes, such as each object's appearance time point, appearance position, motion trajectory, and motion speed.
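Because the generated game is ultimately level data, it can be serialized for saving or for upload to a server, as the application example later describes. The JSON shape below is an assumption for illustration only.

```python
import json

# Level data for a generated fruit-cutting game: one record per moving
# object, using the attribute names from this example.
level = [
    {"kind": "watermelon", "time_point": 1.5, "position": 0.2,
     "trajectory": 7, "speed": "high"},
    {"kind": "bomb", "time_point": 3.0, "position": 0.8,
     "trajectory": 2, "speed": "medium"},
]

# Serialize so the client can save the game or upload it together with
# the recorded video to a server material library.
payload = json.dumps({"duration_s": 30, "objects": level})
print(payload)

# A taking-up client later restores the same level for playback.
restored = json.loads(payload)["objects"]
assert restored[0]["kind"] == "watermelon"
```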

In the embodiment of the present application, the generated interactive game may include other moving objects in addition to the moving objects determined based on the recognition result of the first user's actions. For example, in the fruit-cutting game, besides the fruits or bombs that the user summons through actions, the system may randomly insert Easter eggs. The Easter eggs may be, for example, partner brands or merchandise, giving the user who plays the interactive game an extra reward. Of course, moving objects set by the system may also be added on top of those determined by the user's actions simply to improve the richness and playability of the game.

Therefore, in the interactive game generation method according to the embodiment of the present application, generating the interactive game based on the attribute of the moving object includes: generating the interactive game based on the attributes of the moving object and other moving objects that are randomly generated.

Moreover, beyond the recognition result of each action, if the user makes multiple actions, the time point of each action of the first user may further be determined, and the interactive game may be generated according to the time point of each action. Taking the fruit-cutting game above as an example, the time point of each of the user's actions may be determined, and different elements are summoned according to the action made at each time point. In this way, the user can more conveniently control the progression of the generated interactive game, further enhancing the user's sense of participation.

That is, the interactive game generation method according to the embodiment of the present application further comprises: determining a point in time for each of a plurality of actions of the first user; and generating the interactive game based on the attributes of the moving object comprises: generating the interactive game based on the time point of each action and the attributes of the moving object corresponding to each action.

After the interactive game is generated, i.e., after the first user finishes issuing the challenge, the generated interactive game may be presented to the first user, specifically through a game presentation page. FIG. 2 illustrates a schematic diagram of a game presentation page of an interactive game generation method according to an embodiment of the present application.

As shown in fig. 2, the game presentation page 200 includes a thumbnail 210 of the generated interactive game, for example a cover image of the game, and the user can preview the interactive game by clicking the thumbnail. Of course, those skilled in the art will understand that after the interactive game is generated, it may also be previewed automatically, or a process picture, i.e., a video including the interactive game picture and the user's action picture, may be played back to the user. After previewing the generated interactive game, the user may confirm that generation is complete and save the game. Thus, the game presentation page 200 also includes a save option 220 for saving the generated interactive game. In addition, the game presentation page 200 may include at least part of the attributes of the moving objects, such as attribute A and attribute B shown in fig. 2; one such attribute may be, for example, the number of summoned elements.

Therefore, in the interactive game generation method according to the embodiment of the present application, after generating the interactive game based on the attributes of the moving object, the method further includes: presenting the generated interactive game to the first user through a game presentation page.

In the interactive game generation method, the game presentation page includes at least one of the following: a thumbnail of the generated interactive game; at least a portion of the attributes of the moving object; and a save option for saving the generated interactive game.

Further, in the interactive game generation method according to an embodiment of the present application, after generating the interactive game based on the attributes of the moving object, the method further includes: presenting the generated interactive game in a preview mode.

As described above, in the embodiment of the present application, the generated interactive game is not entirely set by the system; part of it is set by the user. Therefore, users have ample motivation to share the interactive games they set, which establishes interaction between users.

Therefore, in the interactive game generation method according to the embodiment of the application, to enhance interaction between users, it is desirable that a user can conveniently share the generated interactive game with other users; colloquially, the user calls on other users to "take up the challenge". That is, upon receiving the user's request to challenge other users, a sharing guide window may be provided, for example presented as a pop-up sharing card, and the sharing guidance information may include: key process data, a video cover image of the challenge material, guiding copy, the activity name, a QR code, and the like.

Therefore, in the interactive game generation method according to the embodiment of the present application, after generating the interactive game based on the attributes of the moving object, the method further includes: assigning, by the first user, the generated interactive game to a specific user or a non-specific user. Here, a specific user is one or more users whom the first user designates to take up the challenge, and a non-specific user is a user with whom the first user shares the game via Moments, Weibo, or the like without designating anyone in particular.

FIG. 3 illustrates a flow chart of an interactive game processing method according to an embodiment of the present application.

As shown in fig. 3, the interactive game processing method according to the embodiment of the present application includes: S310, acquiring the generated interactive game; S320, recognizing an action of a second user; S330, determining whether a condition of the moving object is met based on the recognition result of the second user's action; and S340, obtaining a processing result of the interactive game based on the determination result.

In step S310, the interactive game generated as described above is acquired. That is, after the first user finishes issuing the challenge, a second user may take up the challenge, so the interactive game played by the second user is the interactive game generated by the first user as described above.

In step S320, an action of the second user is recognized. Depending on the interactive game, the second user's action may differ; for example, in the fruit-cutting game, the second user's action is an arm-waving motion.

In step S330, whether a condition of the moving object is met is determined based on the recognition result of the second user's action. That is, whether a completion condition of the interactive game is satisfied is determined based on the recognition result, and in the interactive game the completion condition is a condition based on the moving object.

It is to be noted that, in the interactive game processing method according to the embodiment of the present application, the condition of the moving object may differ from the attribute of the moving object determined based on the recognition result of the first user's action in the interactive game generation method described above. For example, in the fruit-cutting game, whether a fruit is cut is determined by whether the user's motion trajectory passes through the position of the moving object, so the condition of the moving object relates to the position of the moving object, whereas the attribute determined when generating the interactive game may be, as described above, the type of the moving object.

In step S340, a processing result of the interactive game is obtained based on the determination result. That is, if the determination result shows that the condition of the moving object is met, the completion condition of the interactive game is deemed achieved, and the corresponding processing is performed.

For example, in the fruit-cutting game, changes in the second user's body joint point-position features are detected and judged in real time. When the motion-trajectory points of the second user's limbs coincide with the points of a fruit or bomb, for example when at least 2 points of the fruit or bomb are covered, the object is judged as cut; otherwise it is not cut. A cut fruit or bomb is accompanied by a visual effect and special-effect sound, for example particles of splashing juice for a cut fruit or an explosion for a cut bomb.
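The cut judgment can be sketched as follows, assuming the limb trajectory and each fruit or bomb are tracked as sets of 2D points. The at-least-2-points rule comes from the example above, while the coincidence radius is an assumed tolerance.

```python
import math

def is_cut(limb_points, object_points, radius=12.0, min_hits=2):
    """Judge a fruit/bomb as cut when at least `min_hits` of its sample
    points coincide with the limb motion-trajectory points; coincidence
    means within `radius` pixels (an assumed tolerance)."""
    hits = 0
    for ox, oy in object_points:
        if any(math.hypot(ox - lx, oy - ly) <= radius
               for lx, ly in limb_points):
            hits += 1
            if hits >= min_hits:
                return True
    return False

# Usage: a hand sweeping through two of the fruit's three sample points.
hand_track = [(100, 100), (110, 108), (120, 116)]
fruit = [(108, 106), (118, 114), (160, 160)]
print(is_cut(hand_track, fruit))  # True
```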

Moreover, cut fruits have a corresponding scoring strategy, divided into three cases: normal cut, bull's-eye, and combo. A normal cut is 1-2 fruits in a single stroke, with each fruit's score added according to its value: cutting a watermelon, orange, pineapple, coconut, mango, or banana scores 1 point; cutting an apple, peach, kiwi, pear, lemon, or strawberry scores 2 points. A bull's-eye is a stroke through the center point of a fruit, which adds 5 extra points and is accompanied by a special visual effect and special-effect sound. A combo is cutting 3 or more fruits in a single stroke, earning twice the corresponding total score, also accompanied by visual and sound effects.

The fruit-cutting game as a whole also has life-value change logic. Losing life corresponds to missed fruits dropping: each dropped fruit costs 1 life, accompanied by a "-1" sound effect and animation, and the game ends when the life value reaches 0. Life can be recovered by cutting Easter eggs, each of which restores 1 life. If a bomb is cut, the game ends immediately regardless of the remaining life value, accompanied by a visual animation and special-effect sound. Easter eggs appear randomly in the interactive game, for example 0-3 eggs every 5 seconds, with random contents. Cutting an Easter egg restores life or grants bonus points: for example, when the life value is below 3, 1 life is restored; when the life value equals 3, a random bonus of 10-50 points is granted, accompanied by a visual effect and special-effect sound.
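The scoring strategy and the life-value logic of the two paragraphs above fit into one short sketch; the names are illustrative, and the bull's-eye count is passed in as an argument since its geometric test depends on the cut trajectory.

```python
import random

ONE_POINT = {"watermelon", "orange", "pineapple", "coconut", "mango", "banana"}
TWO_POINT = {"apple", "peach", "kiwi", "pear", "lemon", "strawberry"}

def score_stroke(cut_fruits, bullseyes=0):
    """Score one cutting stroke: a normal cut (1-2 fruits) adds each
    fruit's value, a combo (3 or more fruits in one stroke) doubles the
    total, and each bull's-eye (centre-point hit) adds 5 extra points."""
    base = sum(1 if fruit in ONE_POINT else 2 for fruit in cut_fruits)
    if len(cut_fruits) >= 3:
        base *= 2                          # combo: twice the total score
    return base + 5 * bullseyes

class LifeState:
    """Life-value change logic for the game as a whole."""
    def __init__(self, lives=3):
        self.lives, self.score, self.game_over = lives, 0, False

    def fruit_dropped(self):
        """A missed fruit falls: -1 life; the game ends at 0."""
        self.lives -= 1
        self.game_over = self.lives <= 0

    def bomb_cut(self):
        """Cutting a bomb ends the game regardless of remaining life."""
        self.game_over = True

    def easter_egg_cut(self):
        """An Easter egg restores 1 life below 3 lives, otherwise grants
        a random 10-50 point bonus."""
        if self.lives < 3:
            self.lives += 1
        else:
            self.score += random.randint(10, 50)

state = LifeState()
state.score += score_stroke(["watermelon", "apple", "mango"])  # combo: (1+2+1)*2
print(state.score, state.lives)  # 8 3
```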

After the interactive game is finished, the result of the interactive game, i.e., the processing result described above, may be presented to the second user in a result presentation page so that the user can review how the game went. The result presentation page may also include a sharing option for the user to share the result of the interactive game.

In addition, after finishing the interactive game, the second user may generate an interactive game of his or her own according to the interactive game generation method of the embodiment of the present application. That is, after completing the take-up interaction, the user may initiate a challenge request and further start a challenge operation, which forms a challenge/take-up UGC (User Generated Content) interactive closed loop and helps continuously pull new users into the interactive game.

Therefore, in the interactive game processing method according to the embodiment of the present application, the method further includes: after the interactive game is finished, presenting the processing result to the second user in a result presentation page, wherein the result presentation page further comprises a generation option for the second user to generate the interactive game.

Application example

Fig. 4 is a schematic diagram illustrating an application example of an interactive game generation method and an interactive game processing method according to an embodiment of the present application.

As shown in fig. 4, a user first accesses an activity page of the interactive game on a client; the activity page consists, for example, of videos of challenge materials and an entry for initiating a challenge. The challenge-material videos may initially be produced by the game operator or other operating staff, and after the activity starts they are supplied by the UGC of users participating in the interaction.

Then a user initiates a challenge request on the activity page and enters the initial challenge interface. The user's client opens a recording viewfinder window, and a pop-up prompt informs the user that any expression or gesture can be made to try to summon different fruits or bombs. The user can tap start, and the client responds by starting face recognition, guiding the user through it with a face prompt box. After the guided face recognition, a 3-second countdown starts automatically; during the countdown, no recognition is performed (or recognition runs but no specific instruction judgment is made). After the countdown ends, action recognition, such as expression recognition, gesture recognition, and posture recognition, is started, and expressions, gestures, and body movements are detected in real time. Meanwhile, real-time video recording starts; the recorded content includes the real-time viewfinder picture, the interactive game picture, background music, visual special effects, special-effect sounds, and the like.

During recording, the user can make any action; when a specified action is made, the corresponding element, such as a fruit or a bomb, is summoned, and a motion trajectory and motion speed are randomly assigned. The user's client records the interactive elements summoned during the challenge and the corresponding parameter data, such as appearance time point, position, motion trajectory, and speed.

The recording duration may be preset, for example 30 seconds in total; recording ends automatically when time is up and cannot be ended early. After the challenge is finished, the client plays back the process picture, and the user can replace the background music. The user may then send a request to complete the challenge and call on friends to take it up.

After the request is sent, the client synthesizes the playback video and uploads the recorded picture data, the recorded interactive-process parameter data, and so on to the server material library. After the upload completes, the server may, for example, actively notify the client, and the client pops up the sharing guide window upon receiving the message. The sharing guide pop-up is shown, for example, as a sharing card, and the sharing guidance information includes: key process data, a video cover image of the challenge material, provocative guiding copy, the activity name and QR code, sharing channels, and the like.

Other users, arriving from a share or accessing the activity page from other channels, select a particular challenge material on the activity page to open a challenge. For example, they may enter the page of a specific material (by tapping the material card) or directly initiate a take-up request (by tapping the challenge button). The specific material page may include the material cover image, the corresponding ranking of take-up results, a take-up entry, and the like. Tapping the material cover image does not preview the challenge video in advance; real-time ranking data can be obtained through a request.

The user initiates a take-up request on the activity page or on a specific material page. After receiving the request, the server delivers the video, cover, and interactive parameter data of the corresponding material, and the client synchronously loads the take-up preparation page and downloads the material video. The client displays a preparation guide interface and loads the material cover image, which, for example, first expands to cover the whole screen and then shrinks with an animation into a small picture-in-picture window at the lower-left corner (here only the cover is displayed; preview playback is not started).

While the material cover is displayed in the small window, the client shows guidance for whole-body recognition and starts real-time scanning and monitoring of body joint points; recognition succeeds once the body joint points are identified and feature-vector extraction is complete. After body recognition succeeds, a specific effect may be displayed, for example a lightning-blade visual effect appearing on the user's hands and feet, and a 3-second countdown starts automatically. During the countdown, body recognition is not performed (or recognition runs, but no specific instruction judgment is made). After the countdown ends, the client displays the take-up start page, which may be in picture-in-picture form: the material video in the small picture-in-picture window starts playing automatically, while the main window turns on the front camera to frame and record real-time video. Besides the picture recorded by the front camera, the recorded content also includes the picture-in-picture material video and the game picture of the interactive process.

Here, besides the small material video at the lower-left corner and the live viewfinder picture of the main window, the take-up start and process pictures may also include top interactive information, interactive animations, and so on. For example, the top interactive information includes: the countdown to the end of the game, the interactive score, the life value, and the like. The countdown matches the material duration of 30 seconds; the interactive score starts at 0 and is updated in real time as the interaction proceeds; the life value defaults to 3 and can drop to a minimum of 0.

Then the animation of the interactive game appears in the main window, including the interactively appearing fruit elements, the fruit-cutting animation, the real-time score, and the corresponding visual effects. After the interactive game starts, the client renders the interactive animation while detecting and judging changes in the body joint point-position features in real time. During the interactive game, elements such as fruits are rendered based on the parameters of the challenge material, and the taking-up user cuts fruits through movements of both hands and feet. Here, the client records the real-time interaction data of the take-up process, including scores, life-value changes, bomb and Easter-egg data, and the like.

After the take-up interaction ends, material playback and picture recording finish automatically and cannot be ended early. The client then synthesizes the take-up playback video and uploads the video and the take-up interaction data to the server. For example, the playback video is in picture-in-picture form, where the main window contains the real-time recorded front-camera viewfinder content and the small window contains the material video. Scores, life values, real-time interactive elements, interaction results, visual effects, and so on are also rendered in the main-window picture along the timeline, and the special-effect background sound is synthesized with the video content based on time points. After the playback video is synthesized and uploaded, the client receives the message and opens the take-up end page.

The take-up end page may include: sharing guidance, an entry for returning to the activity page to select other materials to take up, an entry for UGC publication of the challenge video, playback of the video, and so on. The sharing guidance information includes: the take-up result data, a title incentive, guiding copy, and a share button. The take-up result data include the take-up count, score, survival time, final life value, and score ranking. The share button may read, for example, "pull friends in to take up the challenge", to further stimulate the user's willingness to share. The user may tap the share button or long-press the end page (e.g., as a hidden function) to generate a sharing card on the client and actively display the sharing-channel buttons.

Here, in addition to the end-page information, the sharing card may show the activity introduction copy and a participation QR code. The sharing channels include saving as a poster, opening WeChat Moments to share, sharing with WeChat friends, sharing to Sina Weibo, and the like. In addition, the user can tap the "challenge me" function button to return to the activity page and select other materials to continue taking up challenges. The user can also tap the "I want to issue a challenge too" function area, whereupon the client starts the challenge flow and enters the initial challenge prompt interface.

In addition, the user can tap the playback-video cover image on the end page to watch a playback of the interactive process, whose content is the synthesized video. The user can tap the save button to keep the locally generated video in the phone's photo album.

In this way, the characteristics of Internet UGC are brought into play: game segments are set through recognition of the user's expressions, gestures, and other actions, and social relationships are used to spread the game and drive friends to participate in interacting with it. Moreover, while the appeal of a casual brain game is retained, taking up a challenge requires the user to move all four limbs, conveying a sense of fitness to the user.

Exemplary devices

Fig. 5 illustrates a block diagram of an interactive game generation apparatus according to an embodiment of the present application.

As shown in fig. 5, the interactive game generation apparatus 400 according to the embodiment of the present application includes: an action recognition unit 410 for recognizing an action of a first user, the action comprising at least one of an expression, a gesture, and a posture of the first user; an attribute determination unit 420 for determining attributes of a moving object in an interactive game based on the recognition result of the first user's action by the action recognition unit 410; and a game generation unit 430 for generating the interactive game based on the attributes of the moving object determined by the attribute determination unit 420.

In an example, in the above-mentioned interactive game generating apparatus 400, the action recognition unit 410 is configured to: recording a video of the action performed by the first user; in the video recording process, identifying the action of the first user; and playing the video containing the first user's action.

In an example, in the above-mentioned interactive game generating apparatus 400, the attribute determining unit 420 is configured to: determining a predetermined attribute of a moving object in an interactive game based on a recognition result of the motion of the first user by the motion recognition unit 410; and determining other attributes of the moving object except the predetermined attribute in a default manner.
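
A minimal sketch of this default-filling behavior follows; the attribute names, the action-to-type mapping, and the default values are all assumptions made for illustration.

```python
# Defaults used for every attribute the recognized action does not determine.
DEFAULT_ATTRIBUTES = {"speed": 1.0, "size": "medium", "trajectory": "line"}

def determine_attributes(recognition_result: str) -> dict:
    attributes = dict(DEFAULT_ATTRIBUTES)  # start from the default values
    # The recognized action fixes the predetermined attribute (here: type).
    action_to_type = {"smile": "fruit", "wink": "bomb"}
    attributes["type"] = action_to_type.get(recognition_result, "fruit")
    return attributes
```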

In an example, in the above-mentioned interactive game generating apparatus 400, the game generating unit 430 is configured to: the interactive game is generated based on the attributes of the moving object determined by the attribute determining unit 420 and other moving objects randomly generated.

In one example, the above interactive game generating apparatus 400 further includes: a time determination unit for determining a time point of each of a plurality of actions of the first user; and the game generating unit 430 is configured to: generating the interactive game based on the time point of each action and the attribute of the moving object corresponding to each action.
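
A sketch of this time-point mapping follows, reusing the hypothetical determine_attributes() helper from the earlier sketch; the event layout is assumed.

```python
# Each recognized action carries a time point; the generated game spawns the
# corresponding moving object at that time.
def build_timeline(actions):
    """actions: iterable of (time_point_seconds, recognition_result) pairs."""
    timeline = [
        {"spawn_at_s": t, "object": determine_attributes(result)}
        for t, result in actions
    ]
    return sorted(timeline, key=lambda event: event["spawn_at_s"])
```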

In an example, in the above-mentioned interactive game generating apparatus 400, the action recognition unit 410 is configured to: performing action pre-recognition on the first user; and in response to the action pre-recognition being successful, beginning to recognize an action of the first user.
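
As a sketch of the pre-recognition gate, full recognition could begin only after a cheap pre-check succeeds (for instance, detecting that a face or hand is present at all); the pre_check and recognizer callables below are hypothetical.

```python
# Full action recognition starts only once the pre-recognition succeeds.
def recognize_with_pregate(frames, pre_check, recognizer):
    started = False
    for frame in frames:
        if not started:
            started = pre_check(frame)  # e.g. "is a face present?"
            continue
        yield recognizer.recognize(frame)
```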

In one example, the above interactive game generating apparatus 400 further includes: a game presenting unit configured to present the generated interactive game to the first user through a game presentation page after the game generating unit 430 generates the interactive game based on the attribute of the moving object.

In one example, in the above-mentioned interactive game generating apparatus 400, the game presentation page includes at least one of: a thumbnail of the generated interactive game; at least a portion of the attributes of the moving object; and a saving option for saving the generated interactive game.

In one example, the above interactive game generating apparatus 400 further includes: a game preview unit configured to present the generated interactive game in a preview manner after the game generating unit 430 generates the interactive game based on the attribute of the moving object.

In one example, the above interactive game generating apparatus 400 further includes: a game specifying unit configured to, after the game generating unit 430 generates the interactive game based on the attribute of the moving object, enable the first user to specify the generated interactive game to a specific user or a non-specific user.

FIG. 6 illustrates a block diagram of an interactive game processing apparatus according to an embodiment of the present application.

As shown in fig. 6, the interactive game processing apparatus 500 according to the embodiment of the present application includes: an obtaining unit 510 for obtaining the interactive game generated by the interactive game generating apparatus 400 as described above; an identifying unit 520 for identifying an action of the second user; a determination unit 530 for determining, based on the recognition result of the second user's action identified by the identifying unit 520, whether the action corresponds to the condition of the moving object; and a processing unit 540 for obtaining a processing result of the interactive game based on the determination result of the determination unit 530.
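
The processing flow of the apparatus 500 might be sketched as follows; the accessor and matching methods on the game object are assumptions for illustration.

```python
# For each frame, recognize the second user's action and check it against the
# condition of whichever moving object is active at that moment.
def process_game(game, frames, recognizer):
    score = 0
    for frame in frames:
        action = recognizer.recognize(frame)
        obj = game.active_object_at(frame.timestamp)  # assumed accessor
        if obj is not None and obj.condition_matches(action):
            score += obj.score_value
    return {"score": score}
```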

In one example, the above interactive game processing apparatus 500 further includes: a presenting unit for presenting the processing result to the second user through a result presentation page after the interactive game is finished, where the result presentation page further includes a generation option for the second user to generate an interactive game.

Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the interactive game generating apparatus 400 and the interactive game processing apparatus 500 described above have been described in detail in the description of the interactive game generating method and the interactive game processing method with reference to fig. 1 to 3, and thus, a repetitive description thereof will be omitted.

As described above, the interactive game generation apparatus 400 and the interactive game processing apparatus 500 according to the embodiment of the present application may be implemented in various terminal devices, such as a smart phone of a user. In one example, the interactive game generating apparatus 400 and the interactive game processing apparatus 500 according to the embodiment of the present application may be integrated into the terminal device as one software module and/or one hardware module. For example, the interactive game generating apparatus 400 and the interactive game processing apparatus 500 may be a software module in an operating system of the terminal device, or may be an application program developed for the terminal device; of course, the interactive game generating apparatus 400 and the interactive game processing apparatus 500 may also be one of many hardware modules of the terminal device.

Alternatively, in another example, the interactive game generating apparatus 400 and the interactive game processing apparatus 500 and the terminal device may be separate devices, and the interactive game generating apparatus 400 and the interactive game processing apparatus 500 may be connected to the terminal device through a wired and/or wireless network and transmit the interactive information according to an agreed data format.

Exemplary electronic device

Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 7.

FIG. 7 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.

As shown in fig. 7, the electronic device 10 includes one or more processors 11 and memory 12.

The processor 11 may be a Central Processing Unit (CPU) or another form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.

Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the interactive game generation method, the interactive game processing method, and/or other desired functions of the various embodiments of the present application described above. Various contents such as attribute values of moving objects in an interactive game may also be stored in the computer-readable storage medium.

In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).

The input device 13 may include, for example, a keyboard, a mouse, and the like.

The output device 14 may output various information including the generated interactive game and the processing result of the interactive game to the outside. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.

Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 7, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.

Exemplary computer program product and computer-readable storage medium

In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the interactive game generation method and the interactive game processing method according to various embodiments of the present application described in the above-mentioned "exemplary methods" section of this specification.

The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the first user computing device, partly on the first user computing device, as a stand-alone software package, partly on the first user computing device and partly on a remote computing device, or entirely on the remote computing device or server.

Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the interactive game generation method and the interactive game processing method according to various embodiments of the present application described in the "exemplary methods" section above in this specification.

The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

Exemplary cloud architecture

It is to be noted that the interactive game generation method and the interactive game processing method according to the embodiments of the present application may adopt a system architecture based on a cloud computing environment, referred to as a cloud architecture for short. Those skilled in the art will appreciate that cloud computing is a service provisioning model that enables on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processors, memory, storage media, applications, virtual machines, and services). The shared resource pool can be configured and released quickly with only minor administrative effort or interaction with the service provider.

Fig. 8 illustrates an exemplary cloud architecture according to an embodiment of the present application. As shown in fig. 8, an exemplary cloud architecture 20 includes a series of cloud computing nodes 21. Through these cloud computing nodes 21, local computing devices, such as an in-vehicle computer 22A, a smartphone 22B, a personal digital assistant 22C, a tablet computer 22D, and the like, can communicate over the internet. The cloud computing nodes 21 may communicate with each other and may be grouped, virtually or physically, into a series of node networks, such as private, public, community, or hybrid clouds, so as to provide cloud users with cloud services, such as infrastructure, software programs, or platforms, that require no resource maintenance on the local computing devices. Those skilled in the art will appreciate that the computing devices illustrated in fig. 8 are merely examples; the cloud computing environment may be interconnected with any other computing device, directly or indirectly, via a network, and this application is not limited in this respect.

Fig. 9 illustrates a schematic diagram of abstraction functional layers of a cloud architecture according to an embodiment of the present application.

As shown in fig. 9, the set of abstraction functional layers provided by the cloud architecture 20 includes a hardware and software layer, a virtualization layer, a management layer, and a working layer. Those skilled in the art will appreciate that the components, layers, and functions illustrated in fig. 9 are merely examples to illustrate features of the cloud architecture 20 and are not intended to limit the present application in any way.

The hardware and software layer includes a range of hardware and software, where the hardware includes, but is not limited to, hosts, RISC (Reduced Instruction Set Computer) architecture servers, blade servers, storage devices, and networks and network components, and the software includes web application server software, database software, and the like.

The virtualization layer includes a series of virtual entities, including, but not limited to, virtual servers, virtual storage spaces, virtual networks, virtual private networks, virtual applications and operating systems, and virtual clients.

The management layer implements the functions described below. First, a resource provisioning function, which provides dynamic procurement of computing and other resources needed to perform tasks within the cloud architecture. Second, a metering and pricing function, which tracks the cost of resource use within the cloud architecture and charges or prices resource consumption. Third, a security protection function, which verifies the identity of cloud users and tasks and protects data and other resources. Fourth, a user portal function, which provides cloud users and system administrators with access channels to the cloud architecture. Fifth, a service management function, which allocates and manages cloud computing resources to meet the requirements of the required services. Sixth, a Service Level Agreement (SLA) planning and enforcement function, which pre-arranges and procures required cloud computing resources according to the SLA.

The working layer provides examples of functions that the cloud architecture can implement, for example, the interactive game generation method and the interactive game processing method according to the embodiments of the present application as described above.

The foregoing describes the general principles of the present application in conjunction with specific embodiments. However, it is noted that the advantages, effects, and the like mentioned in the present application are merely examples and are not limiting; they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purposes of illustration and description and is not intended to be limiting, as the application is not limited to the precise details disclosed.

The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The words "or" and "and" as used herein mean, and are used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".

It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.

The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
