Control method and device of virtual robot, vehicle, equipment and storage medium

Document No.: 1411593  Publication date: 2020-03-10

Note: this technology, "Control method and device of virtual robot, vehicle, equipment and storage medium", was created by Li Na, Zhou Huan, and Xu Xiaodong on 2018-08-31. Its main content is as follows: The invention provides a control method, apparatus, vehicle, device, and storage medium for a virtual robot, where the method includes: acquiring source data; extracting a trigger event from the source data; and controlling the virtual robot to execute a target behavior matched with the trigger event, the target behavior including at least one of an action, an expression, and speech. By controlling the virtual robot to perform actions and expressions in different states according to the acquired source data, the method greatly enriches the actions and expressions the virtual robot can display; combined with voice output, the virtual robot becomes more vivid, three-dimensional, and emotionally expressive, making interaction between the user and the virtual robot more enjoyable.

1. A control method of a virtual robot is characterized by comprising the following steps:

acquiring source data;

extracting a trigger event from the source data;

controlling the virtual robot to execute a target behavior matched with the trigger event; wherein the target behavior comprises at least one of an action, an expression, and a voice.

2. The method of claim 1, further comprising:

identifying a current state of the virtual robot, wherein the current state is one of a sleep state, a play state, a learning state, a working state and an event state;

the controlling the virtual robot to execute the target behavior matched with the trigger event comprises the following steps:

judging whether the state of the virtual robot needs to be switched or not according to the trigger event;

if the state of the virtual robot does not need to be switched, matching the target behavior from the behaviors included in the current state according to the trigger event;

if the state of the virtual robot needs to be switched, switching from the current state to a target state, and matching the target behavior from the behaviors included in the target state according to the trigger event;

and controlling the virtual robot to execute the target behavior.

3. The method of claim 2, wherein after identifying the current state of the virtual robot, further comprising:

and monitoring the state data of the virtual robot and/or the state data of the vehicle in real time, determining the next state of the virtual robot according to the state data of the virtual robot and/or the vehicle, and controlling the virtual robot to switch from the current state to the next state.

4. The method of claim 2, wherein each state of the virtual robot includes at least one state behavior matching a state;

after the controlling the virtual robot to execute the target behavior, the method further includes:

and controlling the virtual robot to randomly execute at least one state behavior in the current state.

5. The method according to any one of claims 1-4, wherein said extracting trigger events from said source data comprises:

judging whether the source data comprises a voice interaction request, if so, extracting keywords from the voice interaction request, and determining a target interaction scene corresponding to the voice interaction request according to the keywords;

and determining the trigger event according to the target interaction scene.

6. The method according to any one of claims 1-4, wherein said extracting trigger events from said source data comprises:

extracting a voice control request from the source data, and extracting vehicle components and control instructions to be controlled from the voice control request;

determining the triggering event according to the vehicle component and the control instruction.

7. The method according to any one of claims 1-4, wherein said extracting trigger events from said source data comprises:

extracting vehicle driving data and driver's operation data from the source data;

determining the current driving state of the vehicle according to the vehicle driving data and the operation data of the driver;

and determining the trigger event according to the current driving state.

8. The method according to any one of claims 1-4, wherein said extracting trigger events from said source data comprises:

extracting state data of the vehicle from the source data;

judging whether the vehicle has a fault according to the state data of the vehicle;

and if the fault exists, identifying the fault type of the vehicle, and determining the trigger event according to the fault type.

9. The method according to any one of claims 1-4, wherein said extracting trigger events from said source data comprises:

extracting environment data of the environment where the vehicle is located from the source data;

identifying the environmental state of the environment according to the environmental data;

determining the trigger event according to the environment state.

10. The method of claim 2, further comprising, after the virtual robot has performed the target behavior:

and judging whether the virtual robot carries out state switching before executing the target behavior, and if the virtual robot carries out the state switching, controlling the virtual robot to return to the state before switching.

11. A control device for a virtual robot, comprising:

the data acquisition module is used for acquiring source data;

the extraction module is used for extracting the trigger event from the source data;

the control module is used for controlling the virtual robot to execute the target behavior matched with the trigger event; wherein the target behavior comprises at least one of an action, an expression, and a voice.

12. A vehicle characterized by comprising the control device of the virtual robot according to claim 11.

13. An electronic device comprising a memory, a processor;

wherein the processor, by reading executable program code stored in the memory, runs a program corresponding to the executable program code to implement the control method of the virtual robot according to any one of claims 1 to 10.

14. A non-transitory computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing a method of controlling a virtual robot according to any one of claims 1-10.

Technical Field

The present invention relates to the field of vehicle control technologies, and in particular, to a method and an apparatus for controlling a virtual robot, a vehicle, a device, and a storage medium.

Background

As vehicles have become more widespread, they have become an indispensable part of everyday travel. Consequently, the intelligence and enjoyment of vehicle control are drawing more and more attention, and users increasingly want emotional companionship while driving.

Disclosure of Invention

The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.

To this end, a first object of the present invention is to provide a control method for a virtual robot. The method determines, from source data collected over the vehicle-wide CAN network, a trigger event that changes the form of the virtual robot, and then controls the virtual robot to execute a target behavior matched with the trigger event, the target behavior consisting of preset actions, expressions, and speech. The virtual robot thus performs different actions and expressions according to the specific scene the user is in, which greatly enriches the actions and expressions the virtual robot can display and frees its actions from fixed conditions. Combined with the robot's expression and voice interaction, this makes the virtual robot more vivid, three-dimensional, and emotionally expressive, and makes interaction between the user and the virtual robot more enjoyable.

A second object of the present invention is to provide a control device for a virtual robot.

A third object of the invention is to propose a vehicle.

A fourth object of the invention is to propose an electronic device.

A fifth object of the invention is to propose a non-transitory computer-readable storage medium.

In order to achieve the above object, an embodiment of a first aspect of the present invention provides a method for controlling a virtual robot, including:

acquiring source data;

extracting a trigger event from the source data;

controlling the virtual robot to execute a target behavior matched with the trigger event; wherein the target behavior comprises at least one of an action, an expression and a voice.

In addition, the control method of the virtual robot according to the above embodiment of the present invention may further include the following additional technical features:

in one embodiment of the present invention, controlling the virtual robot to execute the target behavior matched with the trigger event includes: identifying a current state of the virtual robot, wherein the current state is one of a sleep state, a play state, a learning state, a working state and an event state; judging whether the state of the virtual robot needs to be switched or not according to the trigger event; if the state of the virtual robot does not need to be switched, matching the target behavior from the behaviors included in the current state according to the trigger event; if the state of the virtual robot needs to be switched, switching from the current state to a target state, and matching the target behavior from the behaviors included in the target state according to the trigger event; and controlling the virtual robot to execute the target behavior.
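As a rough illustration of this state-switching logic, the Python sketch below matches a target behavior by first deciding whether the trigger forces a state switch. The five state names come from the embodiment above; the trigger names and the behavior table are hypothetical, since the patent does not enumerate them.

```python
from enum import Enum

class State(Enum):
    SLEEP = "sleep"
    PLAY = "play"
    LEARNING = "learning"
    WORKING = "working"
    EVENT = "event"

# Hypothetical (state, trigger) -> behavior table; entries are illustrative only.
BEHAVIORS = {
    (State.WORKING, "navigation_started"): {"action": "point_ahead", "voice": "Route is set."},
    (State.PLAY, "music_playing"): {"action": "dance", "expression": "happy"},
    (State.EVENT, "vehicle_fault"): {"expression": "worried", "voice": "Please check the vehicle."},
}

# Triggers that require switching into a specific target state first.
STATE_SWITCHES = {"vehicle_fault": State.EVENT, "music_playing": State.PLAY}

def match_target_behavior(current_state, trigger):
    """Switch states if the trigger requires it, then match the target behavior."""
    state = STATE_SWITCHES.get(trigger, current_state)
    return state, BEHAVIORS.get((state, trigger))
```

If the trigger is not in `STATE_SWITCHES`, the lookup stays within the current state's behaviors, mirroring the "no switch needed" branch above.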

In an embodiment of the present invention, after identifying the current state of the virtual robot, the method further includes: and monitoring the state data of the virtual robot and/or the state data of the vehicle in real time, determining the next state of the virtual robot according to the state data of the virtual robot and/or the vehicle, and controlling the virtual robot to switch from the current state to the next state.

In one embodiment of the present invention, the control method of the virtual robot further includes: each state of the virtual robot comprises at least one state behavior matched with the state; after controlling the virtual robot to execute the target behavior, the method further includes: and controlling the virtual robot to randomly execute at least one state behavior in the current state.
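The random execution of a state behavior might be sketched as below; the per-state behavior names are illustrative assumptions, not taken from the patent.

```python
import random

# Hypothetical idle behaviors attached to each state.
STATE_BEHAVIORS = {
    "play": ["hum_a_tune", "bounce", "look_around"],
    "sleep": ["snore_softly", "turn_over"],
}

def idle_behavior(state, rng=random):
    """Randomly pick one of the behaviors matching the current state, if any."""
    options = STATE_BEHAVIORS.get(state, [])
    return rng.choice(options) if options else None
```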

In one embodiment of the present invention, extracting the trigger event from the source data includes: judging whether the source data comprises a voice interaction request, if so, extracting keywords from the voice interaction request, and determining a target interaction scene corresponding to the voice interaction event according to the keywords; and determining the trigger event according to the target interaction scene.
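A minimal sketch of the keyword-to-scene step, assuming a hypothetical keyword table (the patent specifies neither the vocabulary nor the matching algorithm):

```python
# Illustrative keyword -> interaction-scene table.
SCENE_KEYWORDS = {
    "weather": "weather_query",
    "song": "music",
    "joke": "chat",
    "navigate": "navigation",
}

def trigger_from_voice_request(utterance):
    """Extract the first known keyword and map it to a target interaction scene."""
    for word in utterance.lower().split():
        scene = SCENE_KEYWORDS.get(word.strip(".,?!"))
        if scene:
            return {"event": "voice_interaction", "scene": scene}
    return None  # no voice interaction request recognized
```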

In one embodiment of the present invention, extracting the trigger event from the source data includes: extracting a voice control request from the source data, and extracting vehicle components and control instructions to be controlled from the voice control request; determining the triggering event according to the vehicle component and the control instruction.

In one embodiment of the present invention, extracting the trigger event from the source data includes: extracting vehicle driving data and driver's operation data from the source data; determining the current driving state of the vehicle according to the vehicle driving data and the operation data of the driver; and determining the trigger event according to the current driving state.
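One way the driving-state step might look in code; the signals, thresholds, and state names are illustrative assumptions, not values from the patent:

```python
def driving_state(speed_kmh, throttle_pct, brake_pressed):
    """Classify the current driving state from a few driving/operation signals."""
    if speed_kmh == 0:
        return "parked"
    if brake_pressed:
        return "braking"
    if speed_kmh > 120 or throttle_pct > 80:
        return "aggressive"
    return "cruising"

def trigger_from_driving(speed_kmh, throttle_pct, brake_pressed):
    """Only some driving states warrant a robot reaction (hypothetical rule)."""
    state = driving_state(speed_kmh, throttle_pct, brake_pressed)
    if state == "aggressive":
        return {"event": "remind_slow_down", "driving_state": state}
    return None
```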

In one embodiment of the present invention, extracting the trigger event from the source data includes: extracting state data of the vehicle from the source data; judging whether the vehicle has a fault according to the state data of the vehicle; and if the fault exists, identifying the fault type of the vehicle, and determining the trigger event according to the fault type.
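The fault-detection step could be sketched as a rule table over CAN-style state signals; the signal names, thresholds, and fault types below are all hypothetical:

```python
# Hypothetical fault rules: (signal name, predicate, fault type).
FAULT_RULES = [
    ("coolant_temp_c", lambda v: v > 110, "engine_overheat"),
    ("tire_pressure_kpa", lambda v: v < 180, "low_tire_pressure"),
    ("battery_voltage", lambda v: v < 11.5, "weak_battery"),
]

def trigger_from_vehicle_state(signals):
    """Return a fault trigger for the first rule that fires, else None."""
    for name, is_faulty, fault_type in FAULT_RULES:
        value = signals.get(name)
        if value is not None and is_faulty(value):
            return {"event": "vehicle_fault", "fault_type": fault_type}
    return None
```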

In one embodiment of the present invention, extracting the trigger event from the source data includes: extracting environment data of the environment where the vehicle is located from the source data; identifying the environmental state of the environment according to the environmental data; determining the trigger event according to the environment state.

In an embodiment of the present invention, after the virtual robot executes the target behavior, the method further includes: and judging whether the virtual robot carries out state switching before executing the target behavior, and if the virtual robot carries out the state switching, controlling the virtual robot to return to the state before switching.
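Returning to the pre-switch state amounts to remembering the prior state across a triggered switch. A minimal sketch, with plain strings standing in for whatever state representation an implementation would use:

```python
class RobotStateTracker:
    """Track the state active before a triggered switch so the robot can
    return to it once the target behavior finishes."""

    def __init__(self, state):
        self.state = state
        self._previous = None  # set only when a switch occurs

    def switch_for_event(self, target_state):
        self._previous = self.state
        self.state = target_state

    def finish_behavior(self):
        # If a switch happened for this behavior, fall back to the prior state;
        # otherwise the current state is kept unchanged.
        if self._previous is not None:
            self.state = self._previous
            self._previous = None
```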

The control method of the virtual robot in the embodiments of the present invention first acquires source data, then extracts a trigger event from the source data, and finally controls the virtual robot to execute a target behavior matched with the trigger event, where the target behavior includes at least one of an action, an expression, and speech. The method determines, from source data collected over the vehicle-wide CAN network, a trigger event that changes the form of the virtual robot, and then controls the virtual robot to execute the matched target behavior, which consists of preset actions, expressions, and speech. The virtual robot therefore performs different actions and expressions according to the specific scene the user is in, which greatly enriches the actions and expressions the virtual robot can display and frees its actions from fixed conditions. Combined with the robot's expression and voice interaction, this makes the virtual robot more vivid, three-dimensional, and emotionally expressive, and makes interaction between the user and the virtual robot more enjoyable.

In order to achieve the above object, a second embodiment of the present invention provides a control apparatus for a virtual robot, including:

the data acquisition module is used for acquiring source data;

the extraction module is used for extracting the trigger event from the source data;

and the control module is used for controlling the virtual robot to execute a target behavior matched with the trigger event, wherein the target behavior comprises at least one of action, expression and voice.

The control apparatus of the virtual robot in the embodiments of the present invention first acquires source data, then extracts a trigger event from the source data, and finally controls the virtual robot to execute a target behavior matched with the trigger event, where the target behavior includes at least one of an action, an expression, and speech. The apparatus determines, from source data collected over the vehicle-wide CAN network, a trigger event that changes the form of the virtual robot, and then controls the virtual robot to execute the matched target behavior, which consists of preset actions, expressions, and speech. The virtual robot therefore performs different actions and expressions according to the specific scene the user is in, which greatly enriches the actions and expressions the virtual robot can display and frees its actions from fixed conditions. Combined with the robot's expression and voice interaction, this makes the virtual robot more vivid, three-dimensional, and emotionally expressive, and makes interaction between the user and the virtual robot more enjoyable.

In order to achieve the above object, an embodiment of a third aspect of the present invention proposes a vehicle including the control device of a virtual robot as described in the above embodiments.

In order to achieve the above object, a fourth aspect of the present invention provides an electronic device, including a processor and a memory, wherein the processor runs a program corresponding to an executable program code by reading the executable program code stored in the memory, so as to implement the control method of the virtual robot as described in the above embodiments.

In order to achieve the above object, a fifth aspect embodiment of the present invention proposes a non-transitory computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the control method of a virtual robot as described in the above embodiments.

Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.

Drawings

The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:

fig. 1 is a schematic structural diagram of a virtual robot architecture according to an embodiment of the present invention;

fig. 2 is a schematic flowchart of a control method for a virtual robot according to an embodiment of the present invention;

fig. 3 is a schematic diagram illustrating state transition of a virtual robot according to an embodiment of the present invention;

fig. 4 is a schematic diagram illustrating music listening actions of a virtual robot according to an embodiment of the present invention;

fig. 5 is a schematic diagram illustrating a walking action of a virtual robot according to an embodiment of the present invention;

fig. 6 is a schematic diagram illustrating a defogging operation of a virtual robot according to an embodiment of the present invention;

fig. 7 is a schematic diagram of a fan operation of a virtual robot according to an embodiment of the present invention;

fig. 8 is a schematic structural diagram of a control apparatus of a virtual robot according to an embodiment of the present invention; and

fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.

Detailed Description

Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or to elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and are intended to explain the invention; they should not be construed as limiting the invention.

The following describes a control method, apparatus, and device of a virtual robot according to an embodiment of the present invention with reference to the drawings.

The virtual robot in the embodiments of the present invention runs on the vehicle-mounted multimedia system and can display different images, actions, and expressions on the system's display screen. The control method of the virtual robot in the embodiments of the present invention may be executed by the virtual robot architecture provided in the embodiments of the present invention.

Fig. 1 is a schematic diagram illustrating the connection between the virtual robot and external devices according to an embodiment of the present invention. As shown in fig. 1, the virtual robot runs on the vehicle-mounted multimedia system, which serves as its main carrier and can connect to an artificial intelligence platform through a mobile 4G/5G network or a wireless access network, so that the virtual robot can acquire data from the network and interact with the artificial intelligence platform. The vehicle's electronic devices access the vehicle-wide CAN network through a CAN gateway and are in turn connected to the vehicle-mounted multimedia system, so that the virtual robot can acquire local data such as vehicle state data, driving data, and environment data.

Fig. 2 is a flowchart illustrating a control method of a virtual robot according to an embodiment of the present invention. As shown in fig. 2, the method includes:

step 101, source data is acquired.

The source data includes vehicle-mounted data acquired while the user uses the vehicle, operation data of the driver, environment data, a voice request transmitted by the user, and the like.

The vehicle-mounted data may include vehicle driving data, vehicle warning information, and state data of the vehicle itself. For example, the vehicle driving data may include driving speed, driving time, fuel consumption, and the like; the vehicle warning information may include whether the user has fastened the seat belt, alarm information from the instrument detection module, and the like. The environment data may include weather data, temperature data inside and outside the vehicle, environment quality data, road data, and traffic data, such as the ambient temperature outside the vehicle, the air quality inside the vehicle, or the traffic conditions on the road section ahead. The operation data of the driver may include behavior data of the user controlling vehicle devices or operating the vehicle-mounted multimedia system, such as the air-conditioner temperature set by the user, the gear engaged by the user, or the position clicked on the multimedia display screen. The voice request may be voice interaction information sent by the user through the audio equipment, such as chat content, a query request, or a control instruction.
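For illustration, one snapshot of such source data might be modeled as a simple record. Every field name below is an assumption, since the patent only lists the data categories:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SourceData:
    """One snapshot of the inputs listed above (field names are illustrative)."""
    speed_kmh: float = 0.0                    # vehicle driving data
    fuel_pct: float = 100.0                   # vehicle driving data
    seatbelt_fastened: bool = True            # warning information
    outside_temp_c: Optional[float] = None    # environment data
    voice_request: Optional[str] = None       # user's voice request, if any
    driver_ops: dict = field(default_factory=dict)  # driver operation data
```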
