Unmanned aerial vehicle device as intelligent assistant

Document No.: 1195486    Publication date: 2020-09-01

Reading note: This technical solution, "Unmanned aerial vehicle device as intelligent assistant," was designed and created by Yu Zhendong on 2019-02-22. Main content: the embodiments of this application disclose an unmanned aerial vehicle device serving as an intelligent assistant, relating to the technical field of unmanned aerial vehicles. One implementation of the system includes: a flight control unit configured to follow a target object and/or to perform patrol flights related to the target object based on a preset route and/or preset rules and/or a preset model; and a multimedia interaction unit configured to perform multimedia interaction with the target object. The drone in flight thus serves as an intelligent assistant for the target object or user, enabling the user to acquire information and interact with multimedia efficiently while keeping both hands free.

1. An unmanned aerial vehicle device as an intelligent assistant, comprising:

a flight control unit configured to follow a target object and/or to perform a patrol flight in relation to the target object based on a preset route and/or preset rules and/or a preset model;

and a multimedia interaction unit configured to perform multimedia interaction with the target object.

2. The apparatus of claim 1, further comprising a networking communication unit configured to implement one or any combination of the following features:

networking with at least one third party device;

communicating with at least one third party device, sending data to and/or receiving data from the third party device.

3. The apparatus of claim 1, wherein the multimedia interaction with the target object comprises one or any combination of the following features:

performing multimedia interaction directly with the target object;

performing multimedia interaction with the target object through at least one third-party device.

4. The apparatus of claim 1 or 3, wherein the multimedia interaction with the target object comprises:

pushing or playing information to the target object based on a preset rule and/or a preset model.

5. The apparatus according to claim 1 or 3, wherein the multimedia interaction comprises the steps of:

acquiring first data, the first data comprising at least one of sound, images, 3D spatial data, sensor data, and data from a third party device;

recognizing the acquired first data to obtain a recognition result;

sending the recognition result to at least one third-party device, and/or, when the recognition result meets a preset condition, sending a notification to the target object in a preset mode based on the recognition result;

wherein the preset mode comprises one or any combination of the following: sound, light, naked-eye 3D visualization, image projection, projection screen.

6. The apparatus according to claim 1 or 3, wherein the multimedia interaction comprises the steps of:

acquiring first information corresponding to the target object;

acquiring second information corresponding to the first information;

acquiring third information corresponding to the second information;

displaying the third information by means of image projection, projection screen, VR, AR, or naked-eye 3D visualization, and/or playing the third information as sound, and/or providing a light indication based on the third information;

wherein the first information comprises at least one or any combination of the following: sound, gestures, facial expressions, postures, images, 3D spatial information, and information obtained by communicating with a third party.

7. The apparatus according to claim 1 or 3, wherein the multimedia interaction comprises the steps of:

acquiring first information corresponding to the target object;

acquiring an instruction corresponding to the first information;

acquiring instruction parameters corresponding to the instructions;

configuring the corresponding device or software according to the instruction and/or the instruction parameters, and/or communicating with the corresponding device according to the instruction and sending the instruction and/or the instruction parameters to the device, and then pushing the acquired response information of the device to the target object.

8. The apparatus of claim 1 or 3, wherein the multimedia interaction comprises:

providing navigation and/or reminders to the target object through multimedia interaction.

9. The apparatus of claim 1, wherein the multimedia interaction with the target object comprises one or any combination of the following features:

performing multi-channel multimedia interaction directly with the target object;

performing multi-channel multimedia interaction with the target object through at least one third-party device.

10. A drone comprising the apparatus of any one of claims 1-9.

Technical Field

Embodiments of the present application relate to the technical field of unmanned aerial vehicles, and in particular to an unmanned aerial vehicle device serving as an intelligent assistant.

Background

With the development of science and technology, unmanned aerial vehicles are widely used in entertainment, military, agricultural, educational, and other fields, performing various tasks such as aerial performances, target reconnaissance, agricultural plant protection, animal tracking, and firefighting and disaster relief.

In some scenarios, drones are also expected to provide more convenient and intelligent services: a drone flying alongside a target object can serve as an intelligent assistant for us or for the target, offering efficient information acquisition and multimedia interaction while freeing our hands. The development of 5G and artificial intelligence technology, in particular the communication capability brought by 5G and deep-learning and reinforcement-learning applications such as object recognition, image recognition, and speech recognition, has made more intelligent electronic devices possible; examples include the application of reinforcement learning in autonomous vehicles and the success of Google's deep-reinforcement-learning-based AlphaGo at weiqi (Go). This also provides powerful support for intelligent, efficient information acquisition and multimedia interaction by drones.

Disclosure of Invention

The embodiment of the application provides an unmanned aerial vehicle device serving as an intelligent assistant.

In a first aspect, an embodiment of the present application provides an unmanned aerial vehicle device as an intelligent assistant, including:

a flight control unit configured to follow the target object and/or to perform patrol flight related to the target object based on a preset route and/or preset rules and/or a preset model;

and a multimedia interaction unit configured to perform multimedia interaction with the target object.

In some embodiments, the apparatus further comprises a networked communication unit configured to implement one or any combination of the following features:

networking with at least one third party device;

the method includes communicating with at least one third party device, transmitting data to and/or receiving data from the third party device.

In some embodiments, the multimedia interaction with the target object includes one or any combination of the following features:

performing multimedia interaction directly with the target object;

performing multimedia interaction with the target object through at least one third-party device.

In some embodiments, multimedia interaction with the target object includes:

pushing or playing information to the target object based on the preset rule and/or the preset model.

In some embodiments, the multimedia interaction comprises the steps of:

acquiring first data, the first data comprising at least one of sound, images, 3D spatial data, sensor data, and data from a third party device;

recognizing the acquired first data to obtain a recognition result;

sending the recognition result to at least one third-party device, and/or, when the recognition result meets a preset condition, sending a notification to the target object in a preset mode based on the recognition result;

the preset mode includes one or any combination of the following: sound, light, naked-eye 3D visualization, image projection, projection screen.

In some embodiments, the multimedia interaction comprises the steps of:

acquiring first information corresponding to a target object;

acquiring second information corresponding to the first information;

acquiring third information corresponding to the second information;

displaying the third information by means of image projection, projection screen, VR, AR, or naked-eye 3D visualization, and/or playing the third information as sound, and/or providing a light indication based on the third information;

the first information comprises at least one or any combination of the following: sound, gestures, facial expressions, postures, images, 3D spatial information, and information obtained by communicating with a third party.

In some embodiments, the multimedia interaction comprises the steps of:

acquiring first information corresponding to a target object;

acquiring an instruction corresponding to the first information;

acquiring instruction parameters corresponding to the instructions;

configuring the corresponding device or software according to the instruction and/or the instruction parameters, and/or communicating with the corresponding device according to the instruction and sending the instruction and/or the instruction parameters to the device, and then pushing the acquired response information of the device to the target object.

In some embodiments, the multimedia interaction comprises:

providing navigation and/or reminders to the target object through multimedia interaction.

In some embodiments, the multimedia interaction with the target object includes one or any combination of the following features:

performing multi-channel multimedia interaction directly with the target object;

performing multi-channel multimedia interaction with the target object through at least one third-party device.

In a second aspect, an embodiment of the present application provides an unmanned aerial vehicle including the apparatus of any one of the above embodiments.

The unmanned aerial vehicle device serving as an intelligent assistant provided by the embodiments of the present application follows the target object in flight, and/or performs patrol flights related to the target object based on a preset route, preset rules, and/or a preset model, while performing multimedia interaction with the target object. This realizes a device that enables a drone in flight to serve as an intelligent assistant for a target object or user, allowing the user to acquire information and interact with multimedia efficiently while keeping both hands free.

Drawings

Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:

FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;

FIG. 2 is a flow diagram of one embodiment of an unmanned aerial vehicle device as an intelligent assistant according to the present application;

FIG. 3 is a schematic diagram of an application scenario of an unmanned aerial vehicle device as an intelligent assistant according to the present application;

FIG. 4 is a flow diagram of yet another embodiment of an unmanned aerial vehicle device as an intelligent assistant according to the present application;

FIG. 5 is a flow diagram of yet another embodiment of an unmanned aerial vehicle device as an intelligent assistant according to the present application;

FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing a server according to embodiments of the present application;

FIG. 7 is a flow diagram of yet another embodiment of an unmanned aerial vehicle device as an intelligent assistant according to the present application.

Detailed Description

The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.

It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.

It should be noted that the term "and/or" is only one kind of association relationship describing the associated object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.

It should be noted that the term "preset" is used herein to mean both preset and pre-trained. Generally, the preset model refers to a pre-trained model, the preset route refers to a preset route, and the preset rule refers to a preset rule.

FIG. 1 illustrates an exemplary system architecture 100 to which an embodiment of an unmanned aerial device as an intelligent assistant of the present application may be applied.

As shown in fig. 1, system architecture 100 may include a terminal device 101, a network 102, and a server 103. Network 102 is the medium used to provide a communication link between terminal device 101 and server 103, and may include various types of wireless communication links, such as laser, microwave, or RF links. The terminal device 101 may be any of various aircraft or flying devices, such as a drone, a controllable airship, or a balloon, or any of various controllable levitation devices, such as a magnetic levitation device.

The terminal device 101 may be equipped with radar (e.g., infrared laser radar), audio devices (e.g., microphone, speaker), imaging devices (e.g., display screen, camera, projector, projection screen device, AR/VR device, or naked-eye 3D visualization device such as laser imaging), a text input application, a spatial object recognition application, an image object recognition application, a speech recognition application, and the like. A user may use terminal device 101 to interact with server 103 over network 102 to receive or transmit information.

The terminal device 101 may be hardware or software. When the terminal device 101 is hardware, it may be any of various devices with a flight or hover function, including but not limited to a drone. When the terminal device 101 is software, it can be installed in the devices described above, and may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module; this is not specifically limited herein.

The server 103 may be a server providing various services, for example, a spatial object recognition server that analyzes and recognizes three-dimensional spatial data transmitted from the terminal device 101 and generates an identifier, feature tag, state, or the like corresponding to a target object or its features. The spatial object recognition server may analyze the acquired three-dimensional spatial data and determine the identifier or state corresponding to the target object. The server 103 may also be an information search server that handles information query requests sent by the terminal device 101; the information search server can analyze and process an information query request and determine the corresponding query result.

It should be noted that an unmanned aerial vehicle device as an intelligent assistant provided in the embodiments of the present application is generally executed by the terminal device 101.

It is noted that the terminal device 101 generally acquires the corresponding three-dimensional spatial data by radar scanning, for example using an infrared laser radar based on structured-light 3D imaging technology or a radar based on TOF technology.

It should be noted that the three-dimensional spatial data or interaction information corresponding to the terminal device 101 or the target object may also be stored locally on the terminal device 101; the terminal device 101 may directly extract locally stored three-dimensional spatial data or interaction information, or may obtain it through communication with a third party.

It should be noted that the unmanned aerial vehicle device as an intelligent assistant provided in the embodiment of the present application may also be executed by the server 103, or a part of the unmanned aerial vehicle device may be installed in the server 103, and another part of the unmanned aerial vehicle device may be installed in the terminal device 101.

It should be noted that the server 103 or the terminal device 101 may also locally store interaction information or preset models; the server 103 or the terminal device 101 may directly extract locally stored interaction information or preset models, or may obtain them through communication with a third party.

The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, or may be implemented as a single software or software module. And is not particularly limited herein.

It should be noted that the model or rule related to the unmanned aerial vehicle device provided by the embodiment of the present application as an intelligent assistant may be stored or run on the server 103, or may be stored or run on the terminal device 101.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.

With continued reference to fig. 2, a flow 200 of one embodiment of a drone apparatus as an intelligent assistant according to the present application is shown. The drone apparatus as an intelligent assistant comprises:

a flight control unit 201 configured to follow the target object and/or to perform patrol flights in relation to the target object based on a preset route and/or preset rules and/or a preset model.

In this embodiment, the execution body performing drone-based multimedia interaction (for example, the terminal device 101 in fig. 1) may fly along with the target object, or perform patrol flights related to the target object based on a preset route, preset rules, and/or a preset model, or receive the relevant flight parameters from the server over a wireless connection and control the flight attitude, speed, and/or acceleration of the unmanned aerial vehicle based on the received parameters.
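
As an illustration of the follow behavior above, the following is a minimal Python sketch of one control step, assuming a simple proportional control law and hypothetical position inputs; the embodiments do not prescribe a concrete controller.

    import math

    def follow_step(own_pos, target_pos, follow_distance=3.0, gain=0.5, max_speed=5.0):
        """Compute one velocity command that keeps the drone near the target.

        own_pos / target_pos are (x, y, z) tuples in meters; the return value
        is a (vx, vy, vz) velocity command in m/s.
        """
        dx = target_pos[0] - own_pos[0]
        dy = target_pos[1] - own_pos[1]
        dz = target_pos[2] - own_pos[2]
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        if dist < 1e-6:
            return (0.0, 0.0, 0.0)
        # Only close the gap beyond the desired follow distance.
        error = max(dist - follow_distance, 0.0)
        scale = min(gain * error, max_speed) / dist
        return (dx * scale, dy * scale, dz * scale)

    # Drone at the origin, target 10 m ahead: move toward it at a capped speed.
    print(follow_step((0.0, 0.0, 0.0), (10.0, 0.0, 0.0)))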

In this embodiment, the target object may be a human or other animal, or may be other objects such as an electronic device.

A multimedia interaction unit 202 configured to perform multimedia interaction with the target object.

In this embodiment, the multimedia interaction may be a voice interaction, such as a voice conversation or question-and-answer, between the execution body of the drone-based multimedia interaction and the target object; it may be an action taken by the execution body, based on preset rules, after analyzing sound acquired from the target object, or after receiving information such as the target object's voice or gestures; and/or it may be corresponding information sent to the target object as an image. Various input and output means may be combined as needed, for example: a voice question from the target object and a voice answer from the drone; a voice question and an image answer; a voice question and a combined voice-and-image answer; a gesture question and a voice and/or image answer; or a voice and/or gesture command and a program executed by the drone in response. Here, the sound means of multimedia interaction includes voice.
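
As an illustration of combining input and output channels, the following minimal Python sketch maps an (input modality, intent) pair to output actions via preset rules; the handler names (answer_by_voice, answer_by_image, run_program) are hypothetical placeholders rather than interfaces defined by this application.

    def answer_by_voice(text):
        print(f"[speaker] {text}")

    def answer_by_image(text):
        print(f"[projector] {text}")

    def run_program(command):
        print(f"[executing] {command}")

    # Preset rules: which output actions serve each (modality, intent) pair.
    INTERACTION_RULES = {
        ("voice", "question"): [answer_by_voice, answer_by_image],
        ("gesture", "question"): [answer_by_voice],
        ("voice", "command"): [run_program],
        ("gesture", "command"): [run_program],
    }

    def interact(modality, intent, payload):
        for action in INTERACTION_RULES.get((modality, intent), []):
            action(payload)

    interact("voice", "question", "Tomorrow will be sunny, 22 degrees.")
    interact("gesture", "command", "start following")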

It is noted that the names of these units do not in some cases constitute a limitation of the unit itself; for example, a multimedia interaction unit may also be described as a "unit configured for multimedia interaction with a target object".

In some embodiments, a networked communication unit is also included, configured to implement one or any combination of the following features:

networking with at least one third party device;

the method includes communicating with at least one third party device, transmitting data to and/or receiving data from the third party device.

With continued reference to fig. 3, a schematic diagram of an application scenario of a drone device as an intelligent assistant according to the present embodiment is shown. In the application scenario of fig. 3, drone 301 receives the request "how to make steamed weever" from target object 304 and forwards the request to server 302. After receiving the request, server 302 may perform information retrieval through an information source or through the preset model and/or preset rules 303 to obtain a retrieval result. Server 302 then sends the retrieval result to drone 301, and drone 301 presents it by image projection and sound playback so that target object 304 can readily receive it.

The unmanned aerial vehicle device serving as an intelligent assistant provided by the above embodiment performs multimedia interaction with the target object while following it in flight and/or while patrolling in relation to it based on a preset route, preset rules, and/or a preset model. It can thereby meet the information needs of the target object or user, complete tasks arranged by them, help safeguard their safety, and assist in their daily life, improving the target object's efficiency of information acquisition and quality of life while reducing the time the target object spends operating a handheld device with both hands.

With further reference to fig. 4, a flow 400 of yet another embodiment of an unmanned aerial vehicle device as an intelligent assistant according to the present application is shown. The process 400 includes the following steps:

step 401, collecting first data, the first data comprising at least one of sound, image, 3D spatial data, sensor data, and data from a third party device.

In this embodiment, the execution body of the method for drone-based multimedia interaction (for example, the terminal device shown in fig. 1) may collect environmental sound or sound/voice from the target object through a microphone, collect images of the target object or the environment through a camera, measure the surrounding 3D spatial data through a 3D imaging device such as a lidar, measure environmental parameters or parameters of the drone itself through sensors, and may also communicate with a third-party device to obtain parameters of the target object, for example measurement or sensor data from a wearable device worn by the target object. Here, sound includes voice.

In the present embodiment, the first data may be data on temperature, on a disaster such as a fire, or on the posture, gestures, or physiological characteristics of the target object.
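
As an illustration of step 401, the following minimal Python sketch assembles the first data from several sources; the read_* functions are hypothetical stand-ins for the microphone, camera, lidar, onboard sensors, and a wearable third-party device.

    import time

    def read_microphone():
        return b""  # raw audio frame (placeholder)

    def read_camera():
        return b""  # raw image frame (placeholder)

    def read_lidar():
        return []  # 3D point cloud (placeholder)

    def read_onboard_sensors():
        return {"temperature_c": 24.5, "altitude_m": 2.1}

    def read_wearable():
        return {"heart_rate_bpm": 72}  # data from a third-party device

    def collect_first_data():
        return {
            "timestamp": time.time(),
            "sound": read_microphone(),
            "image": read_camera(),
            "spatial_3d": read_lidar(),
            "sensors": read_onboard_sensors(),
            "third_party": read_wearable(),
        }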

Step 402, identifying the collected first data to obtain an identification result.

In this embodiment, the execution body of the method for drone-based multimedia interaction (for example, the terminal device shown in fig. 1) recognizes the collected first data to obtain a recognition result. For example, an acquired picture may be analyzed to determine the body posture of the target object, such as standing or fallen; an acquired picture may be analyzed to conclude that a suspicious person has entered the room; or the acquired physiological state of the target object may be analyzed to yield a high probability of life-threatening risk.

Step 403: sending the recognition result to at least one third-party device, and/or, when the recognition result meets a preset condition, sending a notification to the target object in a preset mode based on the recognition result.

The preset mode includes one or any combination of the following: sound, light, naked-eye 3D visualization, image projection, projection screen.

In this embodiment, the execution body of the method for drone-based multimedia interaction (for example, the terminal device shown in fig. 1) may send the obtained recognition result to a third-party device over a wireless connection, or, when the recognition result meets a preset condition, send a notification to the target object in a preset mode, for example by sound, light, naked-eye 3D visualization, image projection, and/or a projection screen.
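
As an illustration of steps 402-403, the following minimal Python sketch recognizes the collected data, forwards the result, and notifies the target object when a preset condition is met; the recognizer and notification channels are hypothetical placeholders.

    def recognize(first_data):
        """Stand-in recognizer returning a label and a confidence score."""
        # A pre-trained pose or image model would be invoked here.
        return {"label": "person_fallen", "confidence": 0.93}

    def send_to_third_party(result):
        print(f"[network] forwarding {result}")

    def notify_target(result, modes=("sound", "light")):
        for mode in modes:
            print(f"[{mode}] alert: {result['label']}")

    def preset_condition(result):
        return result["label"] == "person_fallen" and result["confidence"] > 0.8

    result = recognize({"image": b""})
    send_to_third_party(result)
    if preset_condition(result):
        notify_target(result)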

As can be seen from fig. 4, unlike the embodiment shown in fig. 2, the present embodiment highlights the drone-based multimedia interaction steps, thereby making the multimedia interaction process more accurate.

With further reference to fig. 5, a flow 500 of yet another embodiment of an unmanned aerial vehicle device as an intelligent assistant according to the present application is shown. The process 500 includes the following steps:

step 501, acquiring first information corresponding to a target object.

The first information comprises at least one or any combination of the following: sound, gestures, facial expressions, postures, images, 3D spatial information, and information obtained by communicating with a third party.

In this embodiment, the execution body of the method for drone-based multimedia interaction (for example, the terminal device shown in fig. 1) may collect environmental sound or sound/voice from the target object through a microphone, collect images of the target object or the environment through a camera, measure the surrounding 3D spatial data through a 3D imaging device such as a lidar, capture gestures, facial expressions, or postures through the camera or the 3D imaging device, and may also communicate with a third-party device to obtain parameters of the target object, for example measurement or sensor data from a wearable device worn by the target object.

In this embodiment, the first information may be a voice question from the target object, or a gesture command, facial expression, or posture from the target object.

Step 502, second information corresponding to the first information is obtained.

In this embodiment, the execution body of the method for drone-based multimedia interaction (for example, the terminal device shown in fig. 1) acquires second information corresponding to the acquired first information. For example, speech recognition is performed on a voice question from the target object to obtain the corresponding second information, such as text.

Step 503, third information corresponding to the second information is acquired.

In this embodiment, the execution body of the method for drone-based multimedia interaction (for example, the terminal device shown in fig. 1) acquires third information corresponding to the acquired second information. For example, an information source or database is searched using the text obtained by speech recognition of the target object's voice, yielding answer information related to the question; the answer information may be text, voice, images, and the like.

Step 504: displaying the third information by means of image projection, VR, AR, or naked-eye 3D visualization, and/or playing the third information as sound, and/or providing a light indication based on the third information.

In this embodiment, the execution body of the method for drone-based multimedia interaction (for example, the terminal device shown in fig. 1) displays the third information by means of image projection, VR, AR, or naked-eye 3D visualization, and/or plays the third information as sound, and/or provides a light indication based on the third information. For example, the obtained answer is projected onto a wall by image projection, or presented by naked-eye 3D visualization such as laser imaging.
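
As an illustration of the three-stage flow of fig. 5, the following minimal Python sketch goes from speech (first information) to recognized text (second information) to a retrieved answer (third information) and then presents it; speech_to_text, retrieve_answer, and present are hypothetical placeholders for speech recognition, information retrieval, and the output hardware.

    def speech_to_text(audio):
        return "how to make steamed weever"  # second information

    def retrieve_answer(query):
        return "Steam the fish for about 8 minutes ..."  # third information

    def present(answer, channels=("projection", "sound")):
        for channel in channels:
            print(f"[{channel}] {answer}")

    first_information = b"<raw audio>"
    second_information = speech_to_text(first_information)
    third_information = retrieve_answer(second_information)
    present(third_information)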

As can be seen from fig. 5, unlike the embodiment shown in fig. 2, the present embodiment highlights the drone-based multimedia interaction steps, thereby making the multimedia interaction process more accurate.

Referring now to FIG. 6, a block diagram of a computer system 600 suitable for use in implementing an electronic device (e.g., the server shown in FIG. 1) of an embodiment of the present application is shown. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.

As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. The RAM 603 also stores the various programs and data necessary for the operation of the system 600. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.

The following components are connected to the I/O interface 605: an input portion 606 including a microphone, a touch device, buttons, and the like; an output portion 607 including a display such as a Liquid Crystal Display (LCD) and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a WIFI card, a modem, or the like. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read from it can be installed into the storage section 608 as necessary.

In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.

Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor including a flight control unit 201 and a multimedia interaction unit 202. The names of these units do not in some cases constitute a limitation of the unit itself; for example, a multimedia interaction unit may also be described as a "unit configured for multimedia interaction with a target object".

With further reference to fig. 7, a flow 700 of yet another embodiment of an unmanned aerial vehicle device as an intelligent assistant according to the present application is shown. The process 700 includes the following steps:

step 701, acquiring first information corresponding to a target object.

In this embodiment, the execution body of the method for drone-based multimedia interaction (for example, the terminal device shown in fig. 1) may collect environmental sound or sound/voice from the target object through a microphone, collect images of the target object or the environment through a camera, measure the surrounding 3D spatial data through a 3D imaging device such as a lidar, capture gestures, facial expressions, or postures through the camera or the 3D imaging device, and may also communicate with a third-party device to obtain parameters of the target object, for example measurement or sensor data from a wearable device worn by the target object, as the first information.

In this embodiment, the first information may be a voice command from the target object for controlling a home appliance, or a gesture command from the target object for summoning the drone.

Step 702, acquiring an instruction corresponding to the first information.

In this embodiment, the execution body of the method for drone-based multimedia interaction (for example, the terminal device shown in fig. 1) obtains the corresponding instruction from the first information, for example, adjusting the temperature of the bedroom air conditioner.

Step 703, obtaining instruction parameters corresponding to the instruction.

In this embodiment, the execution body of the method for drone-based multimedia interaction (for example, the terminal device shown in fig. 1) acquires the corresponding instruction parameters from the first information, for example that the bedroom air conditioner should be set to 25 degrees Celsius, or obtains the instruction parameters by querying a parameter library according to the acquired instruction; the instruction parameters may be empty.

Step 704: configuring the corresponding device or software according to the instruction and/or the instruction parameters, and/or communicating with the corresponding device according to the instruction and sending the instruction and/or the instruction parameters to the device, and then pushing the acquired response information of the device to the target object.

In this embodiment, the execution body of the method for drone-based multimedia interaction (for example, the terminal device shown in fig. 1) configures the corresponding device or software according to the instruction and/or the instruction parameters, and/or communicates with the corresponding device according to the instruction, sends the instruction and/or the instruction parameters to the device, and then pushes the acquired response information of the device to the target object. For example, a command to set the air conditioner temperature to 25 degrees Celsius is sent to the corresponding air conditioner. In some embodiments, the parameters of an alarm clock may be set instead.
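
As an illustration of steps 701-704, the following minimal Python sketch parses a voice command into an instruction and parameters, forwards them to the matching device, and pushes the device's response back to the target object; the parser and device interface are hypothetical placeholders.

    def parse_instruction(first_information):
        # e.g. "set the bedroom air conditioner to 25 degrees"
        return {"device": "bedroom_air_conditioner", "action": "set_temperature"}

    def parse_parameters(first_information, instruction):
        return {"temperature_c": 25}  # may be empty if the command carries none

    def send_to_device(instruction, parameters):
        print(f"[network] {instruction['device']}: {instruction['action']} {parameters}")
        return {"status": "ok", "temperature_c": 25}

    def push_to_target(response):
        print(f"[speaker] Done: {response}")

    command = "set the bedroom air conditioner to 25 degrees"
    instruction = parse_instruction(command)
    parameters = parse_parameters(command, instruction)
    response = send_to_device(instruction, parameters)
    push_to_target(response)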

As can be seen from fig. 7, unlike the embodiment shown in fig. 2, the present embodiment highlights the drone-based multimedia interaction steps, thereby making the multimedia interaction process more accurate.

The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
