Virtual interaction method, physical robot, display terminal and system

Document No.: 1409661    Publication date: 2020-03-06

Note: this technology, "Virtual interaction method, physical robot, display terminal and system", was designed and created by 刘利剑 on 2018-12-03. Abstract: A virtual interaction method, a physical robot, a display terminal and a system are provided to improve the experience of human-computer interaction. The virtual interaction method comprises: obtaining data measured by at least one sensor on a first physical robot when measuring a real scene within a current measurement range, the current measurement range changing as the first physical robot moves in the real scene (S11); and drawing a virtual scene corresponding to the real scene in the current measurement range according to the data measured by the at least one sensor, so as to display the virtual scene through a display terminal (S12).

1. A method of virtual interaction, the method comprising:

obtaining data measured by at least one sensor on a first physical robot when measuring a real scene within a current measurement range, wherein the current measurement range changes as the first physical robot moves in the real scene;

and drawing a virtual scene corresponding to the real scene in the current measurement range according to the data measured by the at least one sensor, so as to display the virtual scene through a display terminal.

2. The method of claim 1, wherein the at least one sensor comprises a position sensor; the method further comprises the following steps:

drawing a first virtual robot corresponding to the first physical robot in the virtual scene according to the data measured by the position sensor, so as to display the virtual scene containing the first virtual robot through the display terminal;

wherein the motion of the first virtual robot in the virtual scene is synchronized with the motion of the first physical robot in the real scene.

3. The method of claim 2, further comprising:

drawing a virtual component in the virtual scene according to the data measured by the at least one sensor so as to display the virtual scene containing the virtual component through the display terminal;

obtaining a first control instruction for the first physical robot, the first control instruction being for causing the first virtual robot to interact with the virtual component in the virtual scene by controlling the first physical robot and the first virtual robot to move synchronously;

controlling the first virtual robot to interact with the virtual component in the virtual scene in response to the first control instruction.

4. The method of claim 1, further comprising:

obtaining respective position data of a plurality of other physical robots located in the same real scene as the first physical robot;

drawing, in the virtual scene, other virtual robots corresponding to the other physical robots based on the position data of the other physical robots, the other physical robots being different from the first physical robot, so as to display the virtual scene including the other virtual robots through a display terminal.

5. The method of claim 4, further comprising:

obtaining a second control instruction for the first physical robot, wherein the second control instruction is used for controlling the first physical robot and a first virtual robot corresponding to the first physical robot to move synchronously, so that the first virtual robot interacts with other virtual robots in the virtual scene;

and in response to the second control instruction, controlling the first virtual robot to interact with the other virtual robots in the virtual scene.

6. The method of claim 3, wherein the first physical robot is paired with a remote controller, and obtaining the first control instruction for the first physical robot comprises:

obtaining a first remote control instruction from the remote controller.

7. The method of claim 3, wherein obtaining the first control instruction for the first physical robot comprises:

acquiring a touch operation collected by a touch device;

and processing the touch operation to obtain the first control instruction.

8. The method of claim 3, wherein obtaining the first control instruction for the first physical robot comprises:

acquiring a gesture image collected by an image acquisition device;

and processing the gesture image to obtain the first control instruction.

9. The method of claim 3, wherein obtaining the first control instruction for the first physical robot comprises:

acquiring audio data collected by an audio acquisition device;

and processing the audio data to obtain the first control instruction.

10. A physical robot, comprising:

at least one sensor, configured to measure a real scene within a current measurement range;

a processor, connected to the at least one sensor, configured to obtain the data measured by the at least one sensor when measuring the real scene in the current measurement range, and to perform the method of any one of claims 1-9.

11. A display terminal, comprising:

a communication component, configured to communicate with a first physical robot to obtain data measured by at least one sensor on the first physical robot when measuring a real scene within a current measurement range;

a processor, configured to perform the method of any one of claims 1-9;

and a display component, connected to the processor, configured to display a virtual scene corresponding to the real scene in the current measurement range.

12. The display terminal of claim 11, wherein the display component is a touch screen configured to collect touch operations; or a touch pad is integrated in the display terminal, connected to the processor, and configured to collect touch operations.

13. The display terminal of claim 11, wherein an image acquisition component is integrated in the display terminal, connected to the processor, and configured to capture a gesture image.

14. The display terminal of claim 11, wherein an audio acquisition component is integrated in the display terminal, connected to the processor, and configured to acquire audio data.

15. The display terminal of claim 11, wherein the display terminal is smart glasses, a smart phone, or a tablet.

16. A system for virtual interaction, the system comprising:

a first physical robot, provided with at least one sensor, configured to measure a real scene within a current measurement range;

a data processing server, connected to the first physical robot, configured to perform the method of any one of claims 1-9.

17. The system of claim 16, further comprising:

a display terminal, connected to the data processing server, configured to display the virtual scene corresponding to the real scene in the current measurement range.

18. The system of claim 16, further comprising:

a remote controller, paired with the first physical robot, configured to generate the first remote control instruction.

19. The system of claim 16, further comprising:

a touch device, connected to the data processing server, configured to collect touch operations.

20. The system of claim 16, further comprising:

an image acquisition device, connected to the data processing server, configured to collect gesture images.

21. The system of claim 16, further comprising:

an audio acquisition device, connected to the data processing server, configured to collect audio data.

Technical Field

Embodiments of the present application relate to the field of robots, and in particular to a virtual interaction method, a physical robot, a display terminal and a system.

Background

With the popularization of robots, people increasingly need to interact with robots in daily work and life.

One common scenario is that the user and the physical robot are in the same real scene, the distance between them is short, and the user remotely controls the physical robot with a remote controller. However, this human-computer interaction mode requires that the distance between the user and the physical robot does not exceed the coverage of the remote control signal; once that distance is exceeded, the robot can no longer be controlled.

Another common scenario is simulating the interaction of a user with a virtual robot in a virtual scene. However, in this human-computer interaction mode the virtual scene is designed in advance and is unrelated to any real scene, so the experience it brings to the user is not realistic enough.

Disclosure of Invention

The present application provides a virtual interaction method, a physical robot, a display terminal and a system, for improving the human-computer interaction experience.

A first aspect of an embodiment of the present application provides a method for virtual interaction, where the method includes:

obtaining data measured by at least one sensor on a first physical robot when measuring a real scene within a current measurement range, wherein the current measurement range changes as the first physical robot moves in the real scene;

and drawing a virtual scene corresponding to the real scene in the current measurement range according to the data measured by the at least one sensor, so as to display the virtual scene through a display terminal.

A second aspect of the embodiments of the present application provides a physical robot, including:

at least one sensor, configured to measure a real scene within a current measurement range;

and a processor, connected to the at least one sensor, configured to obtain the data measured by the at least one sensor when measuring the real scene in the current measurement range, and to perform the method of the first aspect of the present application.

A third aspect of the embodiments of the present application provides a display terminal, including:

a communication component, configured to communicate with a first physical robot to obtain data measured by at least one sensor on the first physical robot when measuring a real scene within a current measurement range;

a processor, configured to perform the method of the first aspect of the present application;

and a display component, connected to the processor, configured to display a virtual scene corresponding to the real scene in the current measurement range.

A fourth aspect of the embodiments of the present application provides a virtual interaction system, including:

a first physical robot, provided with at least one sensor, configured to measure a real scene within a current measurement range;

and a data processing server, connected to the first physical robot, configured to perform the method of the first aspect of the present application.

A fifth aspect of embodiments of the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, performs the steps in the method according to the first aspect of the present application.

By adopting the above technical solution, a virtual scene corresponding to the real scene within the current measurement range of the sensor on the physical robot is drawn from the measured data and displayed through the display terminal, so that, by watching the displayed virtual scene, the user can genuinely experience the real scene around the physical robot, as if brought into it. Moreover, as the physical robot moves in the real scene, the data measured by the sensor changes synchronously, so the drawn virtual scene changes synchronously and is displayed through the display terminal, and the user can experience the real scene around the physical robot in real time by watching the continuously updated virtual scene.

Drawings

In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.

FIG. 1 is a flow chart of a method of virtual interaction as set forth in an embodiment of the present application;

FIG. 2 is a flow chart of a method of virtual interaction as set forth in another embodiment of the present application;

FIG. 3 is a flow chart of a method of virtual interaction as set forth in another embodiment of the present application;

FIG. 4 is a flow chart of a method of virtual interaction as set forth in another embodiment of the present application;

FIG. 5 is a flow chart of a method of virtual interaction as set forth in another embodiment of the present application;

FIG. 6 is a schematic diagram of a physical robot provided in an embodiment of the present application;

FIG. 7 is a diagram of a display terminal according to an embodiment of the present application;

fig. 8 is a schematic diagram of a system for virtual interaction provided by an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the various embodiments of the present application without inventive step, are within the scope of the present application.

First, an embodiment of the present application provides a method for virtual interaction, which may be performed by a processor having an information processing function. The processor may be disposed inside a physical robot (e.g., the first physical robot in the embodiments described below, or any physical robot other than the first physical robot), inside a display terminal (e.g., a terminal having both a display function and an information processing function), or inside a data processing server (e.g., a server having a data processing function).

Referring to fig. 1, fig. 1 is a flowchart of a virtual interaction method according to an embodiment of the present application. As shown in fig. 1, the method comprises the following steps:

Step S11: obtaining data measured by at least one sensor on a first physical robot when measuring a real scene within a current measurement range, wherein the current measurement range changes as the first physical robot moves in the real scene;

Step S12: drawing a virtual scene corresponding to the real scene in the current measurement range according to the data measured by the at least one sensor, so as to display the virtual scene through a display terminal.

In this embodiment, at least one sensor is disposed on the first physical robot, and the at least one sensor may be a real-scene measurement sensor, that is, a sensor for measuring the real scene around the first physical robot. Illustratively, the at least one sensor includes, but is not limited to: an image sensor, a camera, an angular velocity sensor, an infrared sensor, a laser radar, and the like. Accordingly, the data obtained from the at least one sensor on the first physical robot includes, but is not limited to: depth-of-field data, orientation data, color data, and the like.

It will be appreciated that as the first physical robot moves in the real scene, the current measurement range of the at least one sensor changes accordingly. For example, assuming that the first physical robot walks in a house in the real world, as it moves from the southeast corner to the northwest corner of the house, the current measurement range of the at least one sensor also shifts from the southeast corner to the northwest corner, and accordingly the data measured by the at least one sensor changes as well. That is, the data measured by the at least one sensor varies in real time, stays synchronized with the real scene currently surrounding the first physical robot, and represents that real scene.

After the data measured by the at least one sensor on the first physical robot is obtained, step S12 is executed to draw a virtual scene corresponding to the real scene within the current measurement range of the at least one sensor. For the specific method of rendering the virtual scene, reference may be made to the related art. It is understood that, as the data measured by the at least one sensor changes in real time in step S11, the virtual scene drawn from it also changes in real time and stays synchronized with the real scene around the first physical robot. The drawn virtual scene is displayed through the display terminal.
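For illustration only, the following Python sketch shows one way steps S11 and S12 could fit together: readings from the robot's sensors (depth, orientation, color) are fetched and converted into simple drawable points for the virtual scene. The names `SensorReading`, `fetch_sensor_data` and `draw_virtual_scene`, and the point-based scene format, are assumptions made for this sketch; the embodiment does not prescribe a particular data format or rendering engine.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SensorReading:
    """One hypothetical measurement from the robot's sensors."""
    depth_m: float                    # depth-of-field data
    bearing_deg: float                # orientation data
    color_rgb: Tuple[int, int, int]   # color data

def fetch_sensor_data(robot_id: str) -> List[SensorReading]:
    """Step S11: obtain the data measured by the at least one sensor on the
    first physical robot for its current measurement range. A real system
    would read this over the robot's data link; here it is a stub."""
    raise NotImplementedError("replace with the robot's actual data channel")

def draw_virtual_scene(readings: List[SensorReading]) -> List[dict]:
    """Step S12: convert raw measurements into drawable primitives
    (colored points in a robot-centric frame) for the display terminal."""
    scene = []
    for r in readings:
        theta = math.radians(r.bearing_deg)
        scene.append({
            "x": r.depth_m * math.cos(theta),
            "y": r.depth_m * math.sin(theta),
            "color": r.color_rgb,
        })
    return scene

# Each time new sensor data arrives (i.e. the measurement range has moved
# with the robot), the scene is redrawn, so the displayed virtual scene
# stays synchronized with the real scene around the robot.
```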

By adopting the above technical solution, a virtual scene corresponding to the real scene within the current measurement range of the sensor on the physical robot is drawn from the measured data and displayed through the display terminal, so that, by watching the displayed virtual scene, the user can genuinely experience the real scene around the physical robot, as if brought into it. Moreover, as the physical robot moves in the real scene, the data measured by the sensor changes synchronously, so the drawn virtual scene changes synchronously and is displayed through the display terminal, and the user can experience the real scene around the physical robot in real time by watching the continuously updated virtual scene.

In combination with the above embodiment, in another embodiment of the present application, the at least one sensor includes a position sensor. Referring to fig. 2, fig. 2 is a flowchart of a virtual interaction method according to another embodiment of the present application. As shown in fig. 2, the method includes the following step in addition to steps S11-S12:

Step S13: drawing, according to the data measured by the position sensor, a first virtual robot corresponding to the first physical robot in the virtual scene, so as to display the virtual scene containing the first virtual robot through the display terminal.

Wherein the motion of the first virtual robot in the virtual scene is synchronized with the motion of the first physical robot in the real scene.

In this embodiment, the at least one sensor further comprises a position sensor. In this way, after step S12 is completed, the first virtual robot corresponding to the first physical robot may be further rendered in the rendered virtual scene according to the data measured by the position sensor. The correspondence between the first physical robot and the first virtual robot is as follows: the motion of the first physical robot in the real scene is synchronous with the motion of the first virtual robot in the drawn virtual scene, that is, the first virtual robot is a mapping obtained by mapping the first physical robot to the drawn virtual scene.

It is understood that as the first physical robot moves in the real scene, the data measured by the position sensor on the first physical robot changes. As this data changes in real time, the first virtual robot drawn in step S13 changes correspondingly in real time and stays synchronized with the motion of the first physical robot.
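As a minimal sketch of step S13 (the pose format and class names are assumptions, not part of the embodiment), each new reading from the position sensor can simply be copied onto the pose of the first virtual robot, so its motion in the virtual scene stays in lockstep with the physical robot:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    heading_deg: float

class FirstVirtualRobot:
    """Hypothetical mirror of the first physical robot in the virtual scene."""

    def __init__(self) -> None:
        self.pose = Pose(0.0, 0.0, 0.0)

    def sync_with_physical(self, measured_pose: Pose) -> None:
        # Step S13: whenever the position sensor on the physical robot
        # reports a new pose, adopt it, so both robots move synchronously.
        self.pose = measured_pose

# Usage sketch: on every position-sensor update, call
# virtual_robot.sync_with_physical(new_pose) and then redraw the scene
# so the displayed first virtual robot tracks the physical robot.
```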

By adopting the above technical solution, the virtual robot corresponding to the physical robot is superimposed in the drawn virtual scene and displayed through the display terminal. By watching the virtual scene containing the virtual robot, the user can genuinely experience the real scene around the physical robot and learn the position of the physical robot within it; including the virtual robot in the virtual scene also improves visual interest.

Moreover, as the physical robot moves in the real scene, the data measured by the position sensor on the physical robot changes synchronously, so the drawn virtual robot moves synchronously and is displayed through the display terminal. By watching the virtual robot that moves in step with the physical robot, the user can visually perceive the motion of the physical robot in the real scene in real time.

In combination with the above embodiments, in another embodiment of the present application, referring to fig. 3, fig. 3 is a flowchart of a method for virtual interaction provided in another embodiment of the present application. As shown in fig. 3, the method includes the following steps in addition to steps S11-S13:

step S14: drawing a virtual component in the virtual scene according to the data measured by the at least one sensor so as to display the virtual scene containing the virtual component through the display terminal;

step S15: obtaining a first control instruction for the first physical robot, the first control instruction being for causing the first virtual robot to interact with the virtual component in the virtual scene by controlling the first physical robot and the first virtual robot to move synchronously;

step S16: controlling the first virtual robot to interact with the virtual component in the virtual scene in response to the first control instruction.

In this embodiment, the virtual component is a component with an interactive function, drawn according to the data measured by the at least one sensor on the first physical robot.

In one embodiment, after step S12 is completed, the virtual component may further be drawn in the rendered virtual scene, so that the virtual component is superimposed on the virtual scene and displayed through the display terminal. By watching the virtual scene containing the virtual component, the user can genuinely experience the real scene around the physical robot, and including the virtual component in the virtual scene improves visual interest.

In another embodiment, after step S12 is completed, the virtual component may instead be drawn into the real scene currently surrounding the user. In this way, the user can genuinely experience the real scene around the physical robot by watching the virtual scene displayed on the display terminal, and can also see the virtual component in his or her own surroundings, conveniently combining the displayed virtual scene with the virtual component, which improves visual richness and interest.

In one embodiment, the virtual scene containing the virtual component is displayed through the display terminal. If the user wants to experience the interactive function of the virtual component, the user may, while viewing the displayed virtual scene, perform a control operation on the first physical robot, so that the processor executes step S15 to obtain the first control instruction.

In another embodiment, other physical robots exist in the real scene where the first physical robot is located; that is, multiple physical robots are located in the same real scene as the first physical robot. If the user wants to experience, in the drawn virtual scene, the interaction of these physical robots located in the same real scene, a control operation may also be performed on the first physical robot, so that the processor performs a step similar to step S15 to obtain a second control instruction.

Specifically, the processor may obtain the first control instruction in, but not limited to, the following manners (a combined code sketch of these manners follows the list):

First manner: obtaining a first remote control instruction from a remote controller with which the first physical robot is paired.

Second manner: acquiring a touch operation collected by a touch device, and processing the touch operation to obtain the first control instruction.

Third manner: acquiring a gesture image collected by an image acquisition device, and processing the gesture image to obtain the first control instruction.

Fourth manner: acquiring audio data collected by an audio acquisition device, and processing the audio data to obtain the first control instruction.
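Purely as a sketch, the four manners above could sit behind one dispatcher that turns whatever the input device delivers into a first control instruction. The `ControlInstruction` values, the key/gesture/keyword mappings, and the assumption that gesture recognition and speech transcription happen upstream are hypothetical choices made for this example:

```python
from enum import Enum, auto

class ControlInstruction(Enum):
    MOVE_FORWARD = auto()
    MOVE_BACKWARD = auto()
    TURN_LEFT = auto()
    TURN_RIGHT = auto()

def from_remote(key_code: int) -> ControlInstruction:
    # First manner: a key pressed on the paired remote controller maps
    # directly to a remote control instruction.
    return {0: ControlInstruction.MOVE_FORWARD,
            1: ControlInstruction.MOVE_BACKWARD,
            2: ControlInstruction.TURN_LEFT,
            3: ControlInstruction.TURN_RIGHT}[key_code]

def from_touch(swipe: str) -> ControlInstruction:
    # Second manner: a touch operation collected by the touch device
    # (here a swipe direction) is processed into the instruction.
    return {"up": ControlInstruction.MOVE_FORWARD,
            "down": ControlInstruction.MOVE_BACKWARD,
            "left": ControlInstruction.TURN_LEFT,
            "right": ControlInstruction.TURN_RIGHT}[swipe]

def from_gesture(gesture_label: str) -> ControlInstruction:
    # Third manner: a gesture image has already been classified upstream;
    # the resulting label is processed into the instruction.
    return {"palm_forward": ControlInstruction.MOVE_FORWARD,
            "palm_back": ControlInstruction.MOVE_BACKWARD}[gesture_label]

def from_audio(transcript: str) -> ControlInstruction:
    # Fourth manner: audio data has been transcribed; simple keyword
    # matching turns the transcript into the instruction.
    if "forward" in transcript:
        return ControlInstruction.MOVE_FORWARD
    if "back" in transcript:
        return ControlInstruction.MOVE_BACKWARD
    raise ValueError("unrecognized voice command")
```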

The following describes how, for each of the above four manners, the processor controls the first virtual robot to interact with the virtual component (a shared control-flow sketch follows these cases).

(1) When the user holds the remote controller paired with the first physical robot and the distance to the first physical robot is within the coverage of the remote control signal:

The user can press a key on the remote controller, so that the remote controller generates a first remote control instruction and transmits it to the processor. After receiving the first remote control instruction, the processor controls the first physical robot to move, thereby indirectly controlling the first virtual robot to move synchronously, and further controls the first virtual robot to interact with the virtual component.

(2) When the user does not have a remote controller paired with the first physical robot, or the distance between the user and the first physical robot exceeds the coverage of the remote control signal:

a) If the processor is connected to a touch device, the user can perform a touch operation; the touch device collects the touch operation and transmits it to the processor. The processor processes the touch operation to determine the first control instruction, then controls the first physical robot to move, indirectly controls the first virtual robot to move synchronously, and further controls the first virtual robot to interact with the virtual component.

b) If the processor is connected with the image acquisition equipment, a user can make a gesture, the image acquisition equipment acquires a gesture image of the user and transmits the gesture image to the processor, the processor processes the gesture image and then determines a first control instruction, then controls the first entity robot to move, indirectly controls the first virtual robot to move synchronously, and further controls the first virtual robot to interact with the virtual assembly.

c) If the processor is connected with the audio acquisition equipment, the user can speak the audio corresponding to the first control command, the audio acquisition equipment transmits the acquired audio data to the processor, the processor determines a first control instruction after processing the audio data, then controls the first entity robot to move, indirectly controls the first virtual robot to move synchronously, and further controls the first virtual robot to interact with the virtual assembly.

By adopting the above technical solution, the user controls the physical robot to move in the real scene by pressing the remote controller, performing a touch operation, making a gesture, speaking, or the like, so that the virtual robot corresponding to the physical robot moves synchronously in the drawn virtual scene. In this way, by controlling the physical robot, the user makes the corresponding virtual robot interact with the virtual component in the drawn virtual scene, which increases the fun of human-computer interaction.

In combination with the above embodiments, in another embodiment of the present application, referring to fig. 4, fig. 4 is a flowchart of a method for virtual interaction provided in another embodiment of the present application. As shown in fig. 4, the method includes the following steps in addition to steps S11-S12:

s13': obtaining respective position data of a plurality of other physical robots located in the same real scene as the first physical robot;

s14': drawing, in the virtual scene, other virtual robots corresponding to the other physical robots based on the position data of the other physical robots, the other physical robots being different from the first physical robot, so as to display the virtual scene including the other virtual robots through a display terminal.

In this embodiment, a plurality of other physical robots also exist in the real scene where the first physical robot is located; that is, there are multiple physical robots located in the same real scene as the first physical robot. To enable the user to see the respective positions of these other physical robots in that real scene, the processor may obtain the respective position data of the other physical robots. Specifically, each of the other physical robots located in the same real scene as the first physical robot has a position sensor and is connected to the processor, and these position sensors transmit the measured position data to the processor.

After the processor obtains the position data of each of the plurality of other physical robots and performs step S12, the processor may continue to draw other virtual robots corresponding to each of the plurality of other physical robots in the drawn virtual scene. Drawing other virtual robots corresponding to other physical robots is similar to drawing the first virtual robot corresponding to the first physical robot, and is not repeated here.
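As one possible illustration of steps S13' and S14' (the registry class and its field names are assumptions made for the sketch), the processor can keep the latest position reported by every other physical robot in the same real scene and emit one virtual-robot marker per entry whenever the scene is redrawn:

```python
from typing import Dict, List, Tuple

class OtherRobotRegistry:
    """Hypothetical registry of the other physical robots that share the
    real scene with the first physical robot."""

    def __init__(self) -> None:
        self._positions: Dict[str, Tuple[float, float]] = {}

    def update_position(self, robot_id: str, x: float, y: float) -> None:
        # Step S13': the position sensor of each other physical robot
        # reports its latest position to the processor.
        self._positions[robot_id] = (x, y)

    def draw_other_virtual_robots(self, first_robot_id: str) -> List[dict]:
        # Step S14': draw one virtual robot per other physical robot at the
        # position its sensor last reported; the first physical robot is
        # excluded because it is drawn separately in step S13.
        return [{"robot_id": rid, "x": x, "y": y}
                for rid, (x, y) in self._positions.items()
                if rid != first_robot_id]
```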

By adopting the above technical solution, the other virtual robots corresponding to the other physical robots located in the same real scene as the physical robot are superimposed in the drawn virtual scene and displayed through the display terminal. By watching the displayed virtual scene containing these other virtual robots, the user can learn the positions of the other physical robots in the real scene, which also improves visual interest.

In another embodiment, steps S13'-S14' and step S13 may all be performed, so that virtual robots corresponding to all the physical robots are drawn in the virtual scene and displayed through the display terminal. By watching the displayed virtual scene containing the virtual robots corresponding to all the physical robots, the user can learn the relative positions of all the physical robots in the real scene, which improves visual interest.

In combination with the above embodiments, in another embodiment of the present application, referring to fig. 5, fig. 5 is a flowchart of a method for virtual interaction provided in another embodiment of the present application. As shown in fig. 5, the method includes the following steps in addition to steps S11-S12 and S13'-S14':

Step S15': obtaining a second control instruction for the first physical robot, wherein the second control instruction is used for controlling the first physical robot and a first virtual robot corresponding to the first physical robot to move synchronously, so that the first virtual robot interacts with other virtual robots in the virtual scene;

Step S16': in response to the second control instruction, controlling the first virtual robot to interact with the other virtual robots in the virtual scene.

In this embodiment, the virtual robots corresponding to all the physical robots have been drawn in the virtual scene and displayed through the display terminal, so the user knows the relative positions of all the physical robots in the real scene. If the user wants to experience, in the drawn virtual scene, the interaction of multiple physical robots located in the same real scene, a control operation can be performed on the first physical robot, so that the processor executes a step similar to step S15 to obtain the second control instruction.

The following describes how the processor controls the first virtual robot to interact with other virtual robots.

(1) When the user holds the remote controller paired with the first physical robot and the distance to the first physical robot is within the coverage of the remote control signal:

The user can press a key on the remote controller, so that the remote controller generates a first remote control instruction and transmits it to the processor. After receiving the first remote control instruction, the processor controls the first physical robot to move, thereby indirectly controlling the first virtual robot to move synchronously, and further controls the first virtual robot to interact with the other virtual robots.

(2) When the user does not have a remote controller paired with the first physical robot, or the distance between the user and the first physical robot exceeds the coverage of the remote control signal:

a) If the processor is connected to a touch device, the user can perform a touch operation; the touch device collects the touch operation and transmits it to the processor. The processor processes the touch operation to determine the second control instruction, then controls the first physical robot to move, indirectly controls the first virtual robot to move synchronously, and further controls the first virtual robot to interact with the other virtual robots.

b) If the processor is connected with the image acquisition equipment, a user can make a gesture, the image acquisition equipment acquires a gesture image of the user and transmits the gesture image to the processor, the processor processes the gesture image and determines a first control instruction, then the processor controls the first entity robot to move, indirectly controls the first virtual robot to move synchronously, and further controls the first virtual robot to interact with other virtual robots.

c) If the processor is connected with the audio acquisition equipment, the user can speak the audio corresponding to the first control command, the audio acquisition equipment transmits the acquired audio data to the processor, the processor processes the audio data and then determines a first control instruction, then the first entity robot is controlled to move, the first virtual robot is indirectly controlled to move synchronously, and then the first virtual robot is controlled to interact with other virtual robots.

By adopting the above technical solution, the user controls the physical robot to move in the real scene by pressing the remote controller, performing a touch operation, making a gesture, speaking, or the like, so that the virtual robot corresponding to the physical robot moves synchronously in the drawn virtual scene. In this way, by controlling the physical robot, the user makes the corresponding virtual robot interact with the other virtual robots in the drawn virtual scene, which increases the fun of human-computer interaction.

Based on the same inventive concept, an embodiment of the present application provides a physical robot, which may be the first physical robot or any physical robot except the first physical robot in the foregoing embodiments. Referring to fig. 6, fig. 6 is a schematic diagram of a physical robot provided in an embodiment of the present application. As shown in fig. 6, the physical robot includes:

at least one sensor 601, configured to measure a real scene in a current measurement range;

and a processor 602, connected to the at least one sensor, configured to obtain the data measured by the at least one sensor when measuring the real scene in the current measurement range, and to execute the virtual interaction method according to the above embodiments of the present application.
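As a hedged sketch of how the physical robot of fig. 6 might behave at run time (the transport, packet format and update rate are assumptions, not requirements of the embodiment), the processor 602 can periodically collect the measurements of the sensors 601 and ship them to whichever device draws the virtual scene:

```python
import json
import time

def robot_main_loop(sensors, send) -> None:
    """Hypothetical main loop for the physical robot of fig. 6.
    `sensors` is an iterable of objects exposing read(), and `send`
    transmits one packet to the device that renders the virtual scene
    (a display terminal or a data processing server)."""
    while True:
        packet = {
            "timestamp": time.time(),
            # Only the current measurement range is captured; it shifts
            # automatically as the robot moves through the real scene.
            "measurements": [sensor.read() for sensor in sensors],
        }
        send(json.dumps(packet))
        time.sleep(0.05)  # roughly 20 updates per second
```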

Based on the same inventive concept, an embodiment of the present application provides a display terminal, and referring to fig. 7, fig. 7 is a schematic diagram of the display terminal provided in the embodiment of the present application. As shown in fig. 7, the display terminal includes:

a communication component 701, configured to communicate with a first physical robot to obtain data measured by at least one sensor on the first physical robot measuring a real scene in a current measurement range;

a processor 702 for implementing the virtual interaction method according to the above embodiments of the present application;

a display component 703, connected to the processor, for displaying a virtual scene corresponding to the real scene in the current measurement range.

Optionally, the display component is a touch screen configured to collect touch operations; or a touch pad is integrated in the display terminal, connected to the processor, and configured to collect touch operations.

Optionally, an image acquisition component is integrated in the display terminal, and is connected to the processor, and is used for acquiring the gesture image.

Optionally, an audio acquisition component is integrated in the display terminal, and is connected to the processor, and is used for acquiring audio data.

Optionally, the display terminal is smart glasses, a smart phone, or a tablet computer.

Based on the same inventive concept, an embodiment of the present application provides a virtual interaction system. Referring to fig. 8, fig. 8 is a schematic diagram of a system for virtual interaction according to an embodiment of the present application. As shown in fig. 8, the system of virtual interaction includes:

a first physical robot 801 having at least one sensor for measuring a real scene within a current measurement range;

a data processing server 802, connected to the first physical robot, configured to perform the virtual interaction method according to the above embodiments of the present application.

Optionally, as shown in fig. 8, the system further includes:

a display terminal 803, connected to the data processing server, configured to display the virtual scene corresponding to the real scene in the current measurement range.
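To make the division of labor in fig. 8 concrete, the following sketch shows one iteration of the data processing server 802 sitting between the first physical robot 801 and the display terminal 803. The callables passed in are placeholders for whatever transport and drawing routines a deployment actually uses; they are assumptions of this example only:

```python
def data_processing_server_step(receive_from_robot, draw_virtual_scene,
                                draw_virtual_robots, send_to_terminal) -> None:
    """One iteration of the server 802 in fig. 8 (all callables are
    hypothetical stand-ins for the real transport and renderer)."""
    sensor_packet = receive_from_robot()           # data from robot 801 (step S11)
    scene = draw_virtual_scene(sensor_packet)      # step S12
    scene += draw_virtual_robots(sensor_packet)    # steps S13 / S14'
    send_to_terminal(scene)                        # displayed by terminal 803
```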

Optionally, the system further comprises:

a remote controller 804, paired with the first physical robot, configured to generate the first remote control instruction.

Optionally, the system further comprises:

a touch device 805, connected to the data processing server, configured to collect touch operations.

Optionally, the system further comprises:

an image acquisition device 806, connected to the data processing server, configured to collect gesture images.

Optionally, the system further comprises:

an audio acquisition device 807, connected to the data processing server, configured to collect audio data.

Based on the same inventive concept, another embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the method according to any of the above-mentioned embodiments of the present application.

Based on the same inventive concept, another embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and running on the processor, and when the processor executes the computer program, the electronic device implements the steps of the method according to any of the above embodiments of the present application.

For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.

The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.

As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.

Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.

The virtual interaction method, the physical robot, the display terminal, the system, the storage medium and the electronic device provided by the invention are described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the invention. In summary, the content of this specification should not be construed as a limitation to the invention.
