Man-machine interaction method, device, equipment and storage medium

Document No.: 557456    Publication date: 2021-05-18

Note: this invention, "Man-machine interaction method, device, equipment and storage medium" (一种人机交互的方法、装置、设备及存储介质), was designed and created by 肖乐天, 符海清 and 许秋子 on 2021-02-23. Its main content is as follows: the invention provides a method, a device, equipment and a storage medium for man-machine interaction, and the method includes: receiving a user instruction, wherein the user instruction is used for indicating a shooting action of a character in a Virtual Reality (VR) game; determining a ray starting point and a starting direction of the shooting action of the character in the VR game according to the user instruction; determining a hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character; and controlling the hit target to vibrate according to a preset mode. In this method, the hit target of the user's shooting action can be determined through the user instruction, and the target is then controlled to vibrate; this gives the user a response, serves as feedback to the user, improves the user experience, and enhances the user's sense of participation.

1. A method of human-computer interaction, the method comprising:

receiving a user instruction, wherein the user instruction is used for indicating a shooting action of a character in a Virtual Reality (VR) game;

determining a ray starting point and a starting direction of a shooting action of a character in the VR game according to the user instruction;

determining a hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character;

and controlling the hit target to vibrate according to a preset mode.

2. The method of human-computer interaction of claim 1, further comprising:

determining the position of the hit target, wherein the position of the hit target is relative to the character;

the determining of the hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character comprises:

determining whether the ray is projected to the surface of the hit target when the ray moves to the position of the hit target according to the starting point and the starting direction of the ray of the shooting action of the character;

and determining the hit target of the shooting action of the character when the ray moves to the position of the hit target and is projected to the surface of the hit target.

3. The method according to claim 1 or 2, wherein after determining the hit target of the shooting action of the character based on the ray starting point and the starting direction of the shooting action of the character, the method further comprises:

determining the surface attribute corresponding to the hit target, wherein the hit target has the surface attribute;

the controlling the hit target to vibrate according to a preset mode comprises:

controlling the hit target to vibrate according to the surface attribute of the hit target.

4. The method of claim 3, wherein the controlling the hit target to vibrate in a preset manner comprises:

and controlling the hit target to vibrate according to a preset amplitude and a preset time length.

5. The method of any of claims 1, 2, or 4, further comprising:

and displaying prompt information, wherein the prompt information indicates that the hit target has been successfully hit.

6. A human-computer interaction device, characterized in that the human-computer interaction device comprises:

the receiving module is used for receiving a user instruction, wherein the user instruction is used for indicating a shooting action of a character in a virtual reality VR game;

the first processing module is used for determining a ray starting point and a starting direction of a shooting action of a character in the VR game according to the user instruction;

the second processing module is used for determining a hitting target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character;

and the control module is used for controlling the hit target to vibrate according to a preset mode.

7. The human-computer interaction device of claim 6, wherein the human-computer interaction device further comprises:

the third processing module is used for determining the position of the hit target, wherein the position of the hit target is a position relative to the character;

the second processing module is specifically configured to determine, according to the ray starting point and the starting direction of the shooting action of the character, whether the ray is projected onto the surface of the hit target when the ray moves to the position of the hit target;

the second processing module is specifically configured to determine the hit target of the shooting action of the character when the ray moves to the position of the hit target and is projected onto the surface of the hit target.

8. The human-computer interaction device of claim 6 or 7, wherein the human-computer interaction device further comprises:

the fourth processing module is used for determining the surface attribute corresponding to the hit target, wherein the hit target has the surface attribute;

the control module is specifically configured to control the hit target to vibrate according to the surface attribute of the hit target.

9. A human-computer interaction device, characterized in that the human-computer interaction device comprises: a memory having instructions stored therein and at least one processor, the memory and the at least one processor interconnected by a line;

the at least one processor invokes the instructions in the memory to cause the human-computer interaction device to perform the method of human-computer interaction of any of claims 1-5.

10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of human-computer interaction according to any one of claims 1 to 5.

Technical Field

The present invention relates to the field of internet technologies, and in particular, to a method, an apparatus, a device, and a storage medium for human-computer interaction.

Background

With the development of internet technology, Virtual Reality (VR) technology has also developed rapidly. When VR technology is applied to the field of games, virtual game characters can be displayed in real environments through terminals such as mobile phones and game consoles. VR games realize an optimized combination of games and VR technology in three aspects: location services, image recognition and data processing. The significant breakthroughs of VR games in gameplay and form bring a brand-new game experience to players.

In the VR games currently on the market, game characters and their behaviors are all preset by the game, and interactivity with game users is poor. Existing VR games have difficulty producing responses that match the user's actions and behaviors, so the user experience is poor.

Disclosure of Invention

The invention mainly aims to solve the technical problem of poor user experience in VR games.

In view of the above, a first aspect of the present invention provides a method for human-computer interaction, where the method includes: receiving a user instruction, wherein the user instruction is used for indicating a shooting action of a character in a Virtual Reality (VR) game; determining a ray starting point and a starting direction of a shooting action of a character in the VR game according to the user instruction; determining a hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character; and controlling the hit target to vibrate according to a preset mode. In this method, the hit target of the user's shooting action can be determined through the user instruction, and the target is then controlled to vibrate; this gives the user a response, serves as feedback to the user, improves the user experience, and enhances the user's sense of participation.

Optionally, with reference to the first aspect, the method further includes: determining the position of the hit target, wherein the position of the hit target is relative to the character; the determining of the hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character comprises: determining whether the ray is projected to the surface of the hit target when the ray moves to the position of the hit target according to the starting point and the starting direction of the ray of the shooting action of the character; when the ray moves to the position of the hit target and is projected to the surface of the hit target, the hit target of the shooting action of the character is determined.

Optionally, with reference to the first aspect, after determining the hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character, the method further includes: determining the surface attribute corresponding to the hit target, wherein the hit target has the surface attribute; the controlling the hit target to vibrate according to a preset mode comprises: controlling the hit target to vibrate according to the surface attribute of the hit target.

Optionally, with reference to the first aspect, the controlling the hit target to vibrate in a preset manner includes: controlling the hit target to vibrate according to a preset amplitude and a preset duration.

Optionally, with reference to the first aspect, the method further includes: displaying prompt information, wherein the prompt information indicates that the hit target has been successfully hit.

A second aspect of the present invention provides a human-computer interaction device, including: a receiving module, used for receiving a user instruction, wherein the user instruction is used for indicating a shooting action of a character in a virtual reality VR game; a first processing module, used for determining a ray starting point and a starting direction of a shooting action of a character in the VR game according to the user instruction; a second processing module, used for determining a hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character; and a control module, used for controlling the hit target to vibrate according to a preset mode.

Optionally, in combination with the second aspect, the human-computer interaction device further includes: the third processing module is used for determining the position of the hit target, wherein the position of the hit target is a position relative to the character; the second processing module is specifically configured to determine, according to the ray starting point and the starting direction of the shooting action of the character, whether the ray is projected onto the surface of the hit target when the ray moves to the position of the hit target; the second processing module is specifically configured to determine the hit target of the shooting action of the character when the ray moves to the position of the hit target and is projected onto the surface of the hit target.

Optionally, in combination with the second aspect, the human-computer interaction device further includes: the fourth processing module is used for determining the surface attribute corresponding to the hit target, wherein the hit target has the surface attribute; the control module is specifically configured to control the hit target to vibrate according to the surface attribute of the hit target.

Optionally, with reference to the second aspect, the control module is specifically configured to control the hit target to vibrate according to a preset amplitude and a preset duration.

Optionally, in combination with the second aspect, the human-computer interaction device further includes: the display module is used for displaying prompt information, wherein the prompt information indicates that the hit target has been successfully hit.

A third aspect of the present invention provides a human-computer interaction device, including: a memory having instructions stored therein and at least one processor, the memory and the at least one processor being interconnected by a line; the at least one processor calls the instructions in the memory to enable the human-computer interaction device to execute the method of human-computer interaction according to the first aspect or any one of its possible implementations.

A fourth aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the above-mentioned method for human-computer interaction.

The invention provides a method, a device, equipment and a storage medium for man-machine interaction, wherein the method comprises the following steps: receiving a user instruction, wherein the user instruction is used for indicating a shooting action of a character in a Virtual Reality (VR) game; determining a ray starting point and a starting direction of a shooting action of a character in the VR game according to the user instruction; determining a hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character; and controlling the hit target to vibrate according to a preset mode. In this method, the hit target of the user's shooting action can be determined through the user instruction, and the target is then controlled to vibrate; this gives the user a response, serves as feedback to the user, improves the user experience, and enhances the user's sense of participation.

Drawings

FIG. 1 is a diagram of a first embodiment of a method for human-computer interaction according to an embodiment of the present invention;

FIG. 2 is a schematic view of a power vest model in an embodiment of the invention;

FIG. 3 is a diagram of a second embodiment of a method for human-computer interaction according to an embodiment of the invention;

FIG. 4 is a schematic diagram of a first embodiment of a human-computer interaction device in an embodiment of the invention;

FIG. 5 is a diagram of a human-computer interaction device according to a second embodiment of the present invention;

FIG. 6 is a diagram of an embodiment of a human-computer interaction device in an embodiment of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

The term "and/or" appearing in the present application may describe an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" in this application generally indicates that the former and latter associated objects are in an "or" relationship.

The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Moreover, the terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus.

The invention provides a method, a device, equipment and a storage medium for man-machine interaction, wherein the method comprises the following steps: receiving a user instruction, wherein the user instruction is used for indicating a shooting action of a character in a Virtual Reality (VR) game; determining a ray starting point and a starting direction of a shooting action of a character in the VR game according to the user instruction; determining a hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character; and controlling the hit target to vibrate according to a preset mode. In this method, the hit target of the user's shooting action can be determined through the user instruction, and the target is then controlled to vibrate; this gives the user a response, serves as feedback to the user, improves the user experience, and enhances the user's sense of participation.

For convenience of understanding, a specific flow of the embodiment of the present invention is described below, and referring to fig. 1, a first embodiment of a human-computer interaction method in the embodiment of the present invention includes:

101. a user instruction is received.

And receiving a user instruction, specifically, receiving the user instruction through the terminal equipment.

In this embodiment, the terminal device first needs to build a vest program. When the UE4 (Unreal Engine 4) application starts, it connects to the native vest program. When the connection succeeds, the user can play the VR game through the terminal device.

Specifically, the user can send a User Datagram Protocol (UDP) message to the vest program through the terminal device, and the vest program drives the hardware of the vest, so that vibration of each module on the vest can be realized. The vest may be a somatosensory (haptic) feedback garment, and vibration can be used as hit feedback in VR games.
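
As an illustration only, a minimal sketch of such a UDP send in UE4 C++ is shown below; the vest program's address (127.0.0.1), port (9000) and the assumption that it accepts UTF-8 text commands are hypothetical and are not specified in this document:

```cpp
// Minimal sketch: send one UDP datagram to the local vest program.
// The endpoint 127.0.0.1:9000 and the text-based message format are assumptions.
#include "Sockets.h"
#include "SocketSubsystem.h"
#include "IPAddress.h"

bool SendVestMessage(const FString& Message)
{
    ISocketSubsystem* SocketSub = ISocketSubsystem::Get(PLATFORM_SOCKETSUBSYSTEM);
    FSocket* Socket = SocketSub->CreateSocket(NAME_DGram, TEXT("VestClient"), false);
    if (Socket == nullptr)
    {
        return false;
    }

    // Hypothetical endpoint of the native vest program.
    TSharedRef<FInternetAddr> Addr = SocketSub->CreateInternetAddr();
    bool bIsValid = false;
    Addr->SetIp(TEXT("127.0.0.1"), bIsValid);
    Addr->SetPort(9000);

    // Encode the command as UTF-8 and send it as a single datagram.
    int32 BytesSent = 0;
    FTCHARToUTF8 Utf8(*Message);
    const bool bOk = bIsValid &&
        Socket->SendTo(reinterpret_cast<const uint8*>(Utf8.Get()), Utf8.Length(), BytesSent, *Addr);

    Socket->Close();
    SocketSub->DestroySocket(Socket);
    return bOk;
}
```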

102. And determining the ray starting point and the starting direction of the shooting action of the character in the VR game according to the user instruction.

And determining the ray starting point and the starting direction of the shooting action of the character in the VR game according to the user instruction. For UE4, the imported skeletal model automatically creates a physics asset (physical object). When a character in the VR game performs a shooting action, the ray starting point and the starting direction of the shooting action of the character in the VR game can be determined, and the starting point can be determined according to the position of the character. Further, for example, if the prop used by the character is a gun, the muzzle of the gun may be used as the starting point of the ray. The starting direction may also be determined by the direction in which the character holds the prop, for example, the direction in which the muzzle of a pistol is pointed.
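
For illustration, assuming the weapon is represented by a skeletal mesh component with a socket named "Muzzle" (both the component and the socket name are hypothetical, not specified here), the ray starting point and starting direction could be read as follows:

```cpp
// Sketch: derive the ray starting point and starting direction from the weapon's muzzle.
// "Muzzle" is a hypothetical socket name on the character's weapon mesh.
#include "Components/SkeletalMeshComponent.h"

void GetShotRay(const USkeletalMeshComponent* WeaponMesh, FVector& OutStart, FVector& OutDirection)
{
    // The muzzle position serves as the ray starting point.
    OutStart = WeaponMesh->GetSocketLocation(TEXT("Muzzle"));

    // The direction the muzzle points in serves as the starting direction.
    OutDirection = WeaponMesh->GetSocketRotation(TEXT("Muzzle")).Vector();
}
```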

103. And determining the hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character.

And determining the hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character. For example, the hit target may lie on the extension line of the starting direction from the ray starting point. If the hit target is on the extension line of the starting direction, it can be determined that the hit target will be hit by the ray.
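
A minimal sketch of this determination using a UE4 line trace is given below; the visibility channel and the maximum range of 10000 units are assumptions, since the document does not name a specific trace call:

```cpp
// Sketch: cast a ray from the starting point along the starting direction and treat the
// first blocking hit as the hit target. MaxRange and the trace channel are assumed values.
#include "Engine/World.h"
#include "Engine/EngineTypes.h"
#include "CollisionQueryParams.h"

AActor* FindHitTarget(UWorld* World, const FVector& Start, const FVector& Direction, float MaxRange = 10000.f)
{
    FCollisionQueryParams Params;
    Params.TraceTag = FName(TEXT("ShotTrace"));
    Params.bTraceComplex = true;
    Params.bReturnPhysicalMaterial = true; // needed later to read the surface attribute

    FHitResult Hit;
    const FVector End = Start + Direction * MaxRange;
    const bool bHit = World->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility, Params);

    // The hit target is the actor whose surface the ray is projected onto, if any.
    return bHit ? Hit.GetActor() : nullptr;
}
```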

104. And controlling the hit target to vibrate according to a preset mode.

When it is determined that the hit target is hit by the ray, the hit target may be controlled to vibrate in a preset manner. The vibration mode may be predetermined.

The hit target may be the physical surface of an object of a type specified by its material. Referring to fig. 2, fig. 2 is a schematic view of a power vest model provided by the present invention. In fig. 2, the power vest may be divided into modules, such as arms, head, legs, feet, etc., each of which consists of a separate collision box. When the ray hits a part of the power vest, that part vibrates correspondingly to interact with the user. For example, if the head is hit by a ray, the power unit of the head module can be driven by the vest program to generate vibration.
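
As a sketch of how a hit could be routed to a module (the bone names and module numbers below are assumptions for illustration; the actual mapping depends on the vest program and the skeletal model), the bone reported by the hit result can be mapped to a vest module number:

```cpp
// Sketch: map the bone (body part) reported by the hit result to a vest module number.
// Bone names and numbers are hypothetical; the real mapping is defined by the vest program.
#include "CoreMinimal.h"
#include "Engine/EngineTypes.h"

int32 VestModuleForHit(const FHitResult& Hit)
{
    static const TMap<FName, int32> BoneToModule = {
        { FName(TEXT("head")),       0 },
        { FName(TEXT("upperarm_l")), 1 },
        { FName(TEXT("upperarm_r")), 2 },
        { FName(TEXT("thigh_l")),    3 },
        { FName(TEXT("thigh_r")),    4 },
    };

    if (const int32* Module = BoneToModule.Find(Hit.BoneName))
    {
        return *Module;
    }
    return -1; // no vest module corresponds to this part
}
```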

Specifically, the vibration mode may be set in advance. Each module in the power vest has a corresponding motor that drives its vibration, and each motor has an independent number (num); the motor number corresponding to a module can be controlled in order to determine which module vibrates. The vibration duration of a module can also be preset: the vibration duration of each motor is controlled through the field LimeTime in the vest program. Similarly, the amplitude of the motor vibration may also be preset through the vest program.
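
For illustration only, a command carrying these three values might be assembled and handed to the UDP sender sketched earlier; the JSON-like field names num, LimeTime and amplitude mirror the description above, but the actual wire format of the vest program is not disclosed here:

```cpp
// Sketch: assemble a vibration command for one vest module and send it to the vest program.
// The message format (field names and JSON layout) is hypothetical.
#include "CoreMinimal.h"

bool SendVestMessage(const FString& Message); // from the earlier UDP sketch

bool VibrateVestModule(int32 MotorNum, float DurationSeconds, int32 Amplitude)
{
    // "num" selects the motor, "LimeTime" sets its vibration duration,
    // and "amplitude" sets its vibration strength.
    const FString Command = FString::Printf(
        TEXT("{\"num\":%d,\"LimeTime\":%.2f,\"amplitude\":%d}"),
        MotorNum, DurationSeconds, Amplitude);

    return SendVestMessage(Command);
}
```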

After or during the vibration, prompt information can be displayed through the terminal device to prompt the user that the hit target has been hit successfully. For example, when the head of the power vest is hit by a ray, information that the head has been hit may be displayed.
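
A minimal sketch of displaying such prompt information in UE4 (the message text, colour and two-second display time are arbitrary placeholder choices):

```cpp
// Sketch: show a short on-screen prompt that a body part has been hit successfully.
#include "Engine/Engine.h"

void ShowHitPrompt(const FString& PartName)
{
    if (GEngine)
    {
        // Key -1 always adds a new message; the duration and colour are arbitrary.
        GEngine->AddOnScreenDebugMessage(-1, 2.0f, FColor::Green,
            FString::Printf(TEXT("%s was hit successfully"), *PartName));
    }
}
```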

Referring to fig. 3, a second embodiment of a human-computer interaction method according to the present invention includes:

201. a user instruction is received.

And receiving a user instruction, specifically, receiving the user instruction through the terminal equipment.

In this embodiment, the terminal device first needs to build a vest program. When the UE4 application is launched, it connects to the native vest program. When the connection succeeds, the user can play the VR game through the terminal device.

Specifically, the user can send a UDP message to the vest program through the terminal device, and the vest program drives the hardware of the vest, so that vibration of each module on the vest can be realized. The vest may be a somatosensory (haptic) feedback garment, and vibration can be used as hit feedback in VR games.

202. And determining the ray starting point and the starting direction of the shooting action of the character in the VR game according to the user instruction.

And determining the ray starting point and the starting direction of the shooting action of the character in the VR game according to the user instruction. For UE4, the imported skeletal model automatically creates a physics asset (physical object). When a character in the VR game performs a shooting action, the ray starting point and the starting direction of the shooting action of the character in the VR game can be determined, and the starting point can be determined according to the position of the character; further, for example, if the prop used by the character is a gun, the muzzle of the gun can be used as the starting point of the ray. The starting direction may also be determined by the direction in which the character holds the prop, for example, the direction in which the muzzle of a pistol is pointed.

203. The position of the hit target is determined.

The position of the hit target, which may be a relative position with respect to the character, is determined. For example, the position may be expressed in a two-dimensional coordinate system with the character itself as the origin of the reference frame. Specifically, with the character itself as the reference in the VR game, the position of an enemy, which may be the hit target, can be determined. Further, the position of each module of the enemy may be determined, such as the position of the head and the position of the legs, so that the corresponding part of the enemy can vibrate when it is hit later.
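
For example (a sketch only; the choice of the character's right and forward axes as the two coordinate directions is an assumption), the relative position of a potential hit target could be computed as follows:

```cpp
// Sketch: express a target's position in a two-dimensional coordinate system whose origin
// is the character itself, using the character's right and forward axes as the two axes.
#include "GameFramework/Actor.h"

FVector2D RelativePositionToCharacter(const AActor* Character, const AActor* Target)
{
    const FVector Offset = Target->GetActorLocation() - Character->GetActorLocation();

    // Project the offset onto the character's local right (X) and forward (Y) directions.
    const float X = FVector::DotProduct(Offset, Character->GetActorRightVector());
    const float Y = FVector::DotProduct(Offset, Character->GetActorForwardVector());
    return FVector2D(X, Y);
}
```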

204. And determining whether the ray is projected to the surface of the hit target when the ray moves to the position of the hit target according to the ray starting point and the starting direction of the shooting action of the character.

And determining the hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character. For example, the hit target may lie on the extension line of the starting direction from the ray starting point. If the hit target is on the extension line of the starting direction, it can be determined that the hit target will be hit by the ray.

And determining whether the ray is projected onto the surface of the hit target when the ray moves to the position of the hit target, according to the ray starting point and the starting direction of the shooting action of the character. In one implementation, the movement of the ray may be understood as a movement from the muzzle of the character's weapon in the direction the muzzle points. When the ray moves to the position of the hit target, it is determined whether the ray is projected onto the surface of the hit target. If the ray is projected onto the surface of the hit target, it can be determined that the hit target is hit by the character's ray. On the contrary, if the ray moves to the position of the hit target but is not projected onto its surface, it may be determined that the hit target is not hit by the ray.

205. When the ray moves to the position of the hit target and is projected onto the surface of the hit target, the hit target of the shooting action of the character is determined.

When the ray moves to the position of the hit target and is projected onto the surface of the hit target, the hit target of the shooting action of the character is determined. That is, when the ray moves to the position of the hit target and is projected onto the surface of the hit target, it can be determined that the shooting action of the character has hit the hit target.

206. And determining the surface attribute corresponding to the hit target.

And determining the surface attribute corresponding to the hit target. It should be noted that each hit target has a uniquely defined surface attribute. The surface attribute may comprise the physical material of the hit target. The physical material of the hit target can be preset, and the physical material of the hit target can be bound to a vibration mode. For example, the head of the power vest may be made of steel, and the vibration duration may be set to 2 seconds when the steel is hit. The amplitude of the vibration, etc., may also be controlled, without limitation.

Thus, by determining the surface attribute corresponding to the hit target, the vibration mode corresponding to the hit target can be determined.
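
A sketch of this binding is shown below; mapping the project-defined surface type SurfaceType1 to "steel", and the concrete duration and amplitude values, are assumptions extrapolated from the example above:

```cpp
// Sketch: read the physical material of the hit surface and map it to preset vibration
// parameters. SurfaceType1 standing for "steel" is a project-level assumption.
#include "Engine/EngineTypes.h"
#include "Kismet/GameplayStatics.h"

bool VibrateVestModule(int32 MotorNum, float DurationSeconds, int32 Amplitude); // earlier sketch

void VibrateByHitSurface(const FHitResult& Hit, int32 MotorNum)
{
    const EPhysicalSurface Surface = UGameplayStatics::GetSurfaceType(Hit);

    float DurationSeconds = 1.0f; // default vibration duration (assumed)
    int32 Amplitude = 50;         // default amplitude (assumed)

    if (Surface == SurfaceType1)  // e.g. bound to "steel" in the project settings
    {
        DurationSeconds = 2.0f;   // the "steel vibrates for 2 seconds" example above
        Amplitude = 80;
    }

    VibrateVestModule(MotorNum, DurationSeconds, Amplitude);
}
```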

207. And controlling the hit target to vibrate according to the surface attribute of the hit target.

And determining the vibration mode of the hit target according to the obtained surface attribute of the hit target, and further controlling the hit target to vibrate according to the set vibration mode.

The invention provides a man-machine interaction method, which comprises the following steps: receiving a user instruction, wherein the user instruction is used for indicating a shooting action of a character in a Virtual Reality (VR) game; determining a ray starting point and a starting direction of a shooting action of a character in the VR game according to the user instruction; determining a hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character; and controlling the hit target to vibrate according to a preset mode. In this method, the hit target of the user's shooting action can be determined through the user instruction, and the target is then controlled to vibrate; this gives the user a response, serves as feedback to the user, improves the user experience, and enhances the user's sense of participation.

With reference to fig. 4, the human-computer interaction device 30 in the embodiment of the present invention is described below, and includes:

a receiving module 301, configured to receive a user instruction, where the user instruction is used to instruct a shooting action of a character in a virtual reality VR game.

The first processing module 302 is configured to determine, according to the user instruction, a ray starting point and a starting direction of a shooting action of a character in the VR game.

The second processing module 303 is configured to determine a hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character.

And the control module 304 is used for controlling the hit target to vibrate according to a preset mode.

The invention provides a man-machine interaction device. The man-machine interaction device comprises: a receiving module, used for receiving a user instruction, wherein the user instruction is used for indicating a shooting action of a character in a virtual reality VR game; a first processing module, used for determining a ray starting point and a starting direction of a shooting action of a character in the VR game according to the user instruction; a second processing module, used for determining a hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character; and a control module, used for controlling the hit target to vibrate according to a preset mode. By receiving the user instruction, the man-machine interaction device can determine the target hit by the user's shooting action and then control the target to vibrate; this gives the user a response, serves as feedback to the user, improves the user experience, and enhances the user's sense of participation.

Fig. 5 provides a schematic diagram of another embodiment of a human-computer interaction device, and referring to fig. 5, the human-computer interaction device 40 includes:

a receiving module 401, configured to receive a user instruction, where the user instruction is used to instruct a shooting action of a character in a virtual reality VR game;

a first processing module 402, configured to determine, according to the user instruction, a ray starting point and a starting direction of a shooting action of a character in the VR game;

a second processing module 403, configured to determine a hit target of the shooting action of the character according to a ray starting point and a starting direction of the shooting action of the character;

and the control module 406 is configured to control the hit target to vibrate in a preset manner.

The human-computer interaction device 40 further comprises:

a third processing module 404, configured to determine a position of the hit target, where the position of the hit target is a relative position with respect to the character;

the second processing module 403 is specifically configured to determine, according to the ray starting point and the starting direction of the shooting action of the character, whether the ray is projected onto the surface of the hit target when the ray moves to the position of the hit target;

the second processing module 403 is specifically configured to determine the hit target of the shooting action of the character when the ray moves to the position of the hit target and is projected onto the surface of the hit target.

The human-computer interaction device 40 further comprises:

a fourth processing module 405, configured to determine a surface attribute corresponding to the hit target, where the hit target has the surface attribute;

the control module 406 is specifically configured to control the hit target to vibrate according to the surface attribute of the hit target.

The control module 406 is specifically configured to control the hit target to vibrate according to a preset amplitude and a preset duration.

The human-computer interaction device 40 further comprises: the display module 407 is configured to display prompt information, where the prompt information indicates that the hit target has been successfully hit.

The invention provides a man-machine interaction device, which comprises: a receiving module, used for receiving a user instruction, wherein the user instruction is used for indicating a shooting action of a character in a virtual reality VR game; a first processing module, used for determining a ray starting point and a starting direction of a shooting action of a character in the VR game according to the user instruction; a second processing module, used for determining a hit target of the shooting action of the character according to the ray starting point and the starting direction of the shooting action of the character; and a control module, used for controlling the hit target to vibrate according to a preset mode. Through the user instruction, the man-machine interaction device can determine the target hit by the user's shooting action and then control the target to vibrate; this gives the user a response, serves as feedback to the user, improves the user experience, and enhances the user's sense of participation.

The man-machine interaction device in the embodiment of the present invention is described in detail in the above fig. 4 and fig. 5 from the perspective of the modular functional entity, and the man-machine interaction device in the embodiment of the present invention is described in detail in the following from the perspective of hardware processing.

Fig. 6 is a schematic structural diagram of a human-computer interaction device according to an embodiment of the present invention, where the human-computer interaction device 500 may have a relatively large difference due to different configurations or performances, and may include one or more processors (CPUs) 510 (e.g., one or more processors) and a memory 520, and one or more storage media 530 (e.g., one or more mass storage devices) storing applications 533 or data 532. Memory 520 and storage media 530 may be, among other things, transient or persistent storage. The program stored on the storage medium 530 may include one or more modules (not shown), each of which may include a series of instruction operations for the human-computer interaction device 500. Further, the processor 510 may be configured to communicate with the storage medium 530, and execute a series of instruction operations in the storage medium 530 on the human-computer interaction device 500.

The human-computer interaction device 500 may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input-output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. Those skilled in the art will appreciate that the human-computer interaction device configuration shown in FIG. 6 does not constitute a limitation of the human-computer interaction device, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.

The present invention also provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium, and may also be a volatile computer-readable storage medium, having stored therein instructions, which, when executed on a computer, cause the computer to perform the steps of the human-computer interaction method.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, and an optical disk.

In the embodiments provided herein, it is to be understood that the disclosed methods may be practiced otherwise than as specifically described without departing from the spirit and scope of the present application. The described embodiments are merely exemplary and should not be taken as limiting, and the specific disclosure should not be construed as limiting the scope of the application. For example, some features may be omitted or not performed.

The technical means disclosed in the invention scheme are not limited to the technical means disclosed in the above embodiments, but also include the technical scheme formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the scope of the present invention.

The above detailed description is provided for a human-computer interaction method, apparatus, device and storage medium provided by the embodiments of the present invention, and a specific example is applied in this document to explain the principle and the implementation of the present invention, and the description of the above embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention. Although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
