Interaction method, device and medium

Document No.: 1215831  Publication date: 2020-09-04  Views: 22  Language: Chinese

Reading note: This technology, "Interaction method, device and medium", was designed and created by Li Xiang (李响) on 2020-06-08. Its main content includes: The present disclosure relates to an interaction method, apparatus, and medium. The method comprises the following steps: receiving a bilingual interaction instruction, and entering a bilingual interaction mode; receiving a first voice message from a user, and determining a first language based on the first voice message; generating a second language reply message based on the first voice message, the second language being different from the first language; and sending the second language reply message to the user. With this method, the user can interact in daily life with a mobile phone, smart speaker, or other device carrying a voice assistant; such a bilingual voice assistant creates a realistic bilingual communication scenario for the user and can thus assist the user in foreign language learning.

1. An interactive method, characterized in that the method comprises:

receiving a bilingual interaction instruction, and entering a bilingual interaction mode;

receiving a first voice message from a user, and determining a first language based on the first voice message;

generating a second language reply message based on the first voice message, the second language being different from the first language;

and sending the reply message in the second language to the user.

2. The method of claim 1, wherein the method further comprises:

acquiring the setting information of the user about the second language;

determining the second language based on the setting information.

3. The method of claim 2, wherein said obtaining the user's setting information about the second language comprises:

and when the voice setting instruction of the user for the second language is determined to be received, acquiring the setting information of the second language based on the voice setting instruction.

4. The method of claim 1, wherein said generating a second language reply message based on said first voice message comprises:

generating a first language reply message based on the first voice message;

generating the second language reply message based on the first language reply message.

5. The method of claim 1, wherein said generating a second language reply message based on said first voice message comprises:

translating the first voice message to generate a second language message;

generating the second language reply message based on the second language message.

6. The method of claim 1, wherein said sending said second language reply message to said user comprises:

sending the reply message in the second language to the user in a voice form; and/or

sending the reply message in the second language to the user in a text form.

7. The method of claim 6, wherein said sending said second language reply message to said user in voice form comprises:

generating the second language reply message in a voice form based on the second language reply message in a text form;

and sending the second language reply message in a voice form to the user.

8. An interactive apparatus, characterized in that the apparatus comprises:

the mode setting module is set to enter a bilingual interaction mode after receiving a bilingual interaction instruction;

a receiving module configured to receive a first voice message from a user, determine a first language based on the first voice message;

a generating module configured to generate a reply message in a second language based on the first voice message, the second language being different from the first language;

a sending module configured to send the reply message in the second language to the user.

9. The apparatus of claim 8, wherein the apparatus further comprises a second language determination module configured to:

acquiring the setting information of the user about the second language;

determining the second language based on the setting information.

10. The apparatus of claim 9, wherein the second language determination module is further configured to:

and when the voice setting instruction of the user for the second language is determined to be received, acquiring the setting information of the second language based on the voice setting instruction.

11. The apparatus of claim 8, wherein the generation module is further configured to:

generating a first language reply message based on the first voice message;

generating the second language reply message based on the first language reply message.

12. The apparatus of claim 8, wherein the generation module is further configured to:

translating the first voice message to generate a second language message;

generating the second language reply message based on the second language message.

13. The apparatus of claim 8, wherein the sending module is further configured to:

sending the reply message in the second language to the user in a voice form; and/or

sending the reply message in the second language to the user in a text form.

14. The apparatus of claim 13, wherein the sending module is further configured to:

generating the second language reply message in a voice form based on the second language reply message in a text form;

and sending the second language reply message in a voice form to the user.

15. An interactive apparatus, comprising:

a processor;

a memory for storing processor-executable instructions;

wherein the processor is configured to implement the following steps when executing the executable instructions:

receiving a bilingual interaction instruction, and entering a bilingual interaction mode;

receiving a first voice message from a user, and determining a first language based on the first voice message;

generating a second language reply message based on the first voice message, the second language being different from the first language;

and sending the reply message in the second language to the user.

16. A non-transitory computer readable storage medium in which instructions, when executed by a processor of an apparatus, enable the apparatus to perform a method of interaction, the method comprising:

receiving a bilingual interaction instruction, and entering a bilingual interaction mode;

receiving a first voice message from a user, and determining a first language based on the first voice message;

generating a second language reply message based on the first voice message, the second language being different from the first language;

and sending the reply message in the second language to the user.

Technical Field

The present disclosure relates to the field of electronic devices, and in particular, to an interaction method, apparatus, and medium.

Background

With the increasingly wide application of intelligent terminals, users place more and more demands on their interactive functions. To facilitate interaction with the user, most intelligent terminals are provided with an intelligent voice assistant. The intelligent voice assistant has become an important function of smart phones and smart speakers, and almost all major internet companies have developed voice assistants oriented to different scenarios, helping users acquire information through more convenient interaction and perform operations on home devices.

Currently, voice assistants operate in a monolingual mode: for example, a user invokes the voice assistant in Chinese and makes a request, and the voice assistant returns a Chinese answer in text or voice form through speech recognition and natural language understanding.

Considering that there are a large number of foreign language learners worldwide, listening comprehension is an indispensable skill in foreign language learning. However, it is difficult for learners to exercise their listening ability effectively in daily life, and a good bilingual communication environment is often lacking, especially in resource-poor regions, which prevents learners from effectively improving their English proficiency.

Disclosure of Invention

To overcome the problems in the related art, the present disclosure provides an interaction method, apparatus, and medium.

According to a first aspect of embodiments of the present disclosure, there is provided an interaction method, the method including:

receiving a bilingual interaction instruction, and entering a bilingual interaction mode;

receiving a first voice message from a user, and determining a first language based on the first voice message;

generating a second language reply message based on the first voice message, the second language being different from the first language;

and sending the reply message in the second language to the user.

Wherein the method further comprises:

acquiring the setting information of the user about the second language;

determining the second language based on the setting information.

Wherein the acquiring of the setting information of the user about the second language includes:

and when the voice setting instruction of the user for the second language is determined to be received, acquiring the setting information of the second language based on the voice setting instruction.

Wherein the generating a second language reply message based on the first voice message comprises:

generating a first language reply message based on the first voice message;

generating the second language reply message based on the first language reply message.

Wherein the generating a second language reply message based on the first voice message comprises:

translating the first voice message to generate a second language message;

generating the second language reply message based on the second language message.

Wherein said sending said reply message in said second language to said user comprises:

sending the reply message in the second language to the user in a voice form; and/or

sending the reply message in the second language to the user in a text form.

Wherein said sending said reply message in said second language to said user in voice form comprises:

generating the second language reply message in a voice form based on the second language reply message in a text form;

and sending the second language reply message in a voice form to the user.

According to a second aspect of embodiments of the present disclosure, there is provided an interaction apparatus, the apparatus comprising:

the mode setting module is set to enter a bilingual interaction mode after receiving a bilingual interaction instruction;

a receiving module configured to receive a first voice message from a user, determine a first language based on the first voice message;

a generating module configured to generate a reply message in a second language based on the first voice message, the second language being different from the first language;

a sending module configured to send the reply message in the second language to the user.

Wherein the apparatus further comprises a second language determination module arranged to:

acquiring the setting information of the user about the second language;

determining the second language based on the setting information.

Wherein the second language determination module is further configured to:

and when the voice setting instruction of the user for the second language is determined to be received, acquiring the setting information of the second language based on the voice setting instruction.

Wherein the generation module is further configured to:

generating a first language reply message based on the first voice message;

generating the second language reply message based on the first language reply message.

Wherein the generation module is further configured to:

translating the first voice message to generate a second language message;

generating the second language reply message based on the second language message.

Wherein the sending module is further configured to:

sending the reply message in the second language to the user in a voice form; and/or

sending the reply message in the second language to the user in a text form.

Wherein the sending module is further configured to:

generating the second language reply message in a voice form based on the second language reply message in a text form;

and sending the second language reply message in a voice form to the user.

According to a third aspect of the embodiments of the present disclosure, there is provided an interaction apparatus, including:

a processor;

a memory for storing processor-executable instructions;

wherein the processor is configured to implement the following steps when executing the executable instructions:

receiving a bilingual interaction instruction, and entering a bilingual interaction mode;

receiving a first voice message from a user, and determining a first language based on the first voice message;

generating a second language reply message based on the first voice message, the second language being different from the first language;

and sending the reply message in the second language to the user.

According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having instructions which, when executed by a processor of an apparatus, enable the apparatus to perform a method of interaction, the method comprising:

receiving a bilingual interaction instruction, and entering a bilingual interaction mode;

receiving a first voice message from a user, and determining a first language based on the first voice message;

generating a second language reply message based on the first voice message, the second language being different from the first language;

and sending the reply message in the second language to the user.

With this method, a bilingual interaction instruction is received and a bilingual interaction mode is entered; a first voice message is then received from the user and a first language is determined; a second language reply message is generated based on the first voice message and sent to the user. In this method, the second language is different from the first language, i.e., the language of the message the voice assistant receives differs from the language of the reply it sends. This allows a user who has not yet mastered the second language, and therefore cannot ask questions in it, to learn the second language by listening to or reading reply messages in that language. A user who wants listening practice can set the reply language of the bilingual intelligent voice assistant to the target language to be learned. That is, by interacting in daily life with a mobile phone, smart speaker, or other device carrying the voice assistant, the bilingual voice assistant creates a realistic bilingual communication scenario for the user and thus assists the user in learning a foreign language.

With this method, a user can learn a foreign language conveniently, quickly, and at low cost using an electronic device.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.

FIG. 1 is a flow chart illustrating an interaction method according to an exemplary embodiment.

FIG. 2 is a flow chart illustrating an interaction method according to an example embodiment.

FIG. 3 is a block diagram illustrating an interaction device, according to an example embodiment.

FIG. 4 is a block diagram illustrating an apparatus in accordance with an example embodiment.

FIG. 5 is a block diagram illustrating an apparatus in accordance with an example embodiment.

Detailed Description

Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.

Currently, users increasingly rely on voice assistants on intelligent terminals. The voice assistant typically operates in a monolingual mode: for example, the user invokes the voice assistant in Chinese and makes a request, and the voice assistant responds in Chinese.

However, there are a large number of foreign language learners worldwide. Because their foreign language proficiency is limited, such users can only ask questions in a familiar language, yet they would like the voice assistant to answer in the foreign language so as to improve their proficiency. For example, since listening comprehension is an indispensable skill in foreign language learning, users want to build a bilingual communication environment in daily life to exercise their listening ability. This need is particularly acute for users in resource-poor regions.

The present disclosure provides an interaction method in which a bilingual interaction instruction is received and a bilingual interaction mode is entered; a first voice message from a user is then received and a first language is determined; a second language reply message is generated based on the first voice message and sent to the user. In this method, the second language is different from the first language, i.e., the language of the message the voice assistant receives differs from the language of the reply it sends. This allows a user who has not yet mastered the second language, and therefore cannot ask questions in it, to learn the second language by listening to or reading reply messages in that language. A user who wants listening practice can set the reply language of the bilingual intelligent voice assistant to the target language to be learned. That is, by interacting in daily life with a mobile phone, smart speaker, or other device carrying the voice assistant, the bilingual voice assistant creates a realistic bilingual communication scenario for the user and thus assists the user in learning a foreign language.

It should be noted that the above interaction method can be applied to various intelligent terminals, such as mobile phones, tablets, and wearable devices, as well as to various multimedia devices, such as smart speakers and interactive learning devices. These intelligent terminals and multimedia devices can interact with users through text or voice.

FIG. 1 is a flow chart illustrating an interaction method according to an exemplary embodiment. As shown in FIG. 1, the method includes the following steps:

step 101, receiving a bilingual interaction instruction, and entering a bilingual interaction mode;

step 102, receiving a first voice message from a user, and determining a first language based on the first voice message;

step 103, generating a reply message in a second language based on the first voice message, wherein the second language is different from the first language;

step 104, sending the reply message in the second language to the user.

For example, the method is implemented on a mobile phone, and the user interacts with the phone by sending it messages in a first language.

In step 101, a bilingual interaction instruction from a user is received, and a bilingual interaction mode is entered. It should be noted that if no bilingual interaction instruction is received, the system operates in the ordinary monolingual interaction mode.

In step 102, the user sends out the first voice message, for example, the user asks "What's the weather like today?", and the mobile phone receives the first voice message. A first language is determined based on the language used in the first voice message.

In step 103, after the first voice message is received, a second language reply message, i.e. an answer to the question, for example "It is cloudy", is generated based on the first voice message. Here the first language is different from the second language, because the user may need to learn the second language with the help of the mobile phone: the user's proficiency in the second language is insufficient to interact with the phone in that language. Thus the user can learn the second language only by first speaking or writing in the familiar first language and then listening to or reading a reply message in the second language. The reply message is a dialogue-style reply to the first voice message.

In step 104, after the mobile phone generates the reply message in the second language, the message is sent to the user in voice or text form.

With this method, the user can interact in daily life with an intelligent terminal or multimedia device carrying the voice assistant, and can learn a foreign language conveniently, quickly, and at low cost.
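The flow of steps 101 to 104 can be sketched as a minimal interaction loop. This is an illustrative sketch only: `detect_language` and `generate_reply` are hypothetical stubs standing in for real speech-recognition, language-detection, and dialogue components, none of which the patent names.

```python
# Minimal sketch of the bilingual interaction flow (steps 101-104).
# All component functions are hypothetical stubs standing in for real
# language-detection and reply-generation services.

def detect_language(text):
    # Stub: treat any CJK character as Chinese, otherwise English.
    return "zh" if any("\u4e00" <= ch <= "\u9fff" for ch in text) else "en"

def generate_reply(text, reply_language):
    # Stub: a canned answer standing in for a real dialogue engine.
    canned = {("zh", "en"): "It is cloudy."}
    return canned.get((detect_language(text), reply_language), "OK.")

def bilingual_interaction(first_voice_message, second_language="en"):
    """Steps 102-104: detect the first language, generate a reply in the
    second (different) language, and return it for sending to the user."""
    first_language = detect_language(first_voice_message)
    assert first_language != second_language, "second language must differ"
    return generate_reply(first_voice_message, second_language)

print(bilingual_interaction("今天天气怎么样"))  # a Chinese question, English reply
```

A real implementation would replace both stubs with an ASR front end and a natural language understanding back end; the control flow, however, mirrors the four claimed steps.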

In an alternative embodiment, the method further comprises:

acquiring the setting information of the user about the second language;

determining the second language based on the setting information.

Here, the user's setting information may be acquired in various ways. For example, it may be read from settings the user has stored on the mobile phone; it may be extracted from information the user enters by voice or text; it may be obtained by using a camera to capture visual content set by the user; or it may be obtained from a connected intelligent terminal (when the method is implemented on a smart speaker or the like). The setting information determines the second language set by the user.

In an optional embodiment, the obtaining the setting information of the user about the second language includes:

and when the voice setting instruction of the user for the second language is determined to be received, acquiring the setting information of the second language based on the voice setting instruction.

It should be noted that when a voice setting instruction for the second language is received from the user, the setting information stored on the mobile phone is not used; instead, the voice setting instruction issued by the user this time takes priority.
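The priority rule above can be sketched as follows. The stored-settings dictionary and the phrase-matching parser are illustrative assumptions, not part of the disclosure; the point is only that a voice instruction from the current session overrides the stored setting.

```python
# Sketch of determining the second language (claims 2-3): a voice setting
# instruction issued in the current session takes priority over the stored
# setting. Both sources and the parsing rule are illustrative assumptions.

STORED_SETTINGS = {"second_language": "en"}  # e.g. read from phone settings

def parse_voice_setting(instruction):
    # Stub parser: map a few illustrative phrases to language codes.
    if instruction and "Japanese" in instruction:
        return "ja"
    if instruction and "English" in instruction:
        return "en"
    return None

def determine_second_language(voice_instruction=None):
    from_voice = parse_voice_setting(voice_instruction)
    if from_voice is not None:
        return from_voice                          # this session's instruction wins
    return STORED_SETTINGS["second_language"]      # fall back to stored setting

print(determine_second_language())                           # stored setting: en
print(determine_second_language("Reply to me in Japanese"))  # voice setting: ja
```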

In an alternative embodiment, said generating a second language reply message based on said first voice message comprises:

generating a first language reply message based on the first voice message;

generating the second language reply message based on the first language reply message.

When generating the second language reply message based on the first voice message, a first language reply message is first generated for the first voice message. For example, the first voice message is "What's the weather like today?" and the first language reply message is "It will rain today". The process of generating the first language reply message based on the first voice message may use techniques known to those skilled in the art and is not described again here.

The first language reply message is then translated into the second language reply message using machine translation techniques. For example, the first language reply message is translated as "It will rain today". The machine translation techniques here are known to those skilled in the art and are not described in detail.
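The reply-then-translate path of claim 4 can be sketched as below. The two lookup tables are hypothetical stand-ins for a real dialogue engine and a real machine translation system; the patent does not specify either.

```python
# Sketch of the "reply first, then translate" path (claim 4): generate a
# reply in the first language, then machine-translate it into the second.
# The tiny lookup tables stand in for real dialogue and MT systems.

FIRST_LANG_REPLIES = {"今天天气怎么样": "今天下雨"}   # zh question -> zh reply
TRANSLATIONS = {"今天下雨": "It will rain today"}     # zh reply  -> en reply

def reply_then_translate(first_voice_message):
    first_language_reply = FIRST_LANG_REPLIES[first_voice_message]  # step 1
    return TRANSLATIONS[first_language_reply]                       # step 2

print(reply_then_translate("今天天气怎么样"))
```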

In an alternative embodiment, said generating a second language reply message based on said first voice message comprises:

translating the first voice message to generate a second language message;

generating the second language reply message based on the second language message.

Upon receiving the first voice message, the first voice message is translated into a second language message using machine translation techniques. For example, the first voice message asks, in the first language, what the weather is like today, and the second language message is "What's the weather like today". The machine translation techniques here are known to those skilled in the art and are not described in detail.

A second language reply message is then generated based on the second language message. For example, the second language reply message is "It will rain". The process of generating the second language reply message based on the second language message may use techniques known to those skilled in the art and is not described again here.
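The alternative translate-then-reply path of claim 5 can be sketched the same way. Again the lookup tables are hypothetical stand-ins for real MT and dialogue systems; only the ordering of the two stages differs from the previous path.

```python
# Sketch of the "translate first, then reply" path (claim 5): translate the
# incoming message into the second language, then answer directly in that
# language. Lookup tables again stand in for real MT and dialogue systems.

TRANSLATIONS = {"今天天气怎么样": "What's the weather like today"}   # zh -> en
SECOND_LANG_REPLIES = {"What's the weather like today": "It will rain today"}

def translate_then_reply(first_voice_message):
    second_language_message = TRANSLATIONS[first_voice_message]     # step 1
    return SECOND_LANG_REPLIES[second_language_message]             # step 2

print(translate_then_reply("今天天气怎么样"))
```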

As can be seen from the above two embodiments, when generating the second language reply message based on the first voice message, the reply may first be generated in the first language and then translated, or the incoming message may first be translated into the second language and the reply then generated directly in that language. Either way, upon receiving a message in one language, a reply message in another language can be generated, which serves the user's goal of learning a foreign language.

In an alternative embodiment, said sending said second language reply message to said user comprises:

sending the reply message in the second language to the user in a voice form; and/or

sending the reply message in the second language to the user in a text form.

After the mobile phone generates the reply message in the second language, it can send the reply in text form or in voice form. Of course, the corresponding text reply message may be displayed on the screen while the voice reply message is being played. When the method is implemented on a device without a display screen, such as a smart speaker, the second language reply message is sent only in voice form.

In an alternative embodiment, said sending said second language reply message to said user in the form of speech comprises:

generating the second language reply message in a voice form based on the second language reply message in a text form;

and sending the second language reply message in a voice form to the user.

When the second language reply message is sent to the user in voice form, the voice-form reply message needs to be generated from the text-form reply message using speech synthesis. The speech synthesis techniques here are known to those skilled in the art and are not described in detail.
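The sending step of claims 6 and 7 can be sketched as below. The `synthesize` function is a hypothetical stand-in for a real text-to-speech engine (it merely tags the text); the display check mirrors the smart-speaker case described above.

```python
# Sketch of the sending step (claims 6-7): synthesize speech from the text
# reply and, if the device has a display, also show the text. `synthesize`
# is a hypothetical stand-in for a real TTS engine.

def synthesize(text):
    return f"<audio:{text}>"   # stub: tag the text instead of real audio

def send_reply(text_reply, has_display):
    outputs = {"voice": synthesize(text_reply)}   # voice form is always sent
    if has_display:
        outputs["text"] = text_reply              # also show text on screen
    return outputs

print(send_reply("It will rain today", has_display=True))    # phone
print(send_reply("It will rain today", has_display=False))   # smart speaker
```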

Specific embodiments according to the present disclosure are described below in conjunction with a specific application scenario. In this example the method is implemented on a mobile phone, the first language is Chinese, and the second language is English. As shown in fig. 2, the method in this embodiment comprises the following steps:

step 201, receiving a bilingual interaction instruction from a user, and entering a bilingual interaction mode.

Step 202, acquiring the setting information of the user about the second language, and determining that the second language is english.

Step 203, receiving the user's first voice message and, through speech recognition technology, recognizing the user's speech as the Chinese text meaning "What's the weather like today".

Step 204, generating a text reply message in Chinese, meaning "it will rain today", for the user's question.

Step 205, translating the Chinese reply into the English text "It will rain today".

Step 206, generating the corresponding speech for the text "It will rain today" through speech synthesis technology.

Step 207, playing the generated voice reply message and displaying "It will rain today" on the screen of the mobile phone.

It should be noted that, if the above method is implemented on a smart speaker without a display screen, the smart speaker outputs the response message only in the form of voice.
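The end-to-end scenario of steps 201 to 207 can be sketched as a single pipeline. Every stage function here is an illustrative stub: a real system would use ASR, a dialogue engine, machine translation, and TTS at the marked steps, none of which the patent specifies.

```python
# End-to-end sketch of the scenario in steps 201-207 (Chinese in, English
# out). Every stage function is a stub: real systems would use ASR, a
# dialogue engine, machine translation, and TTS here.

def recognize(audio):                 # step 203: speech recognition
    return {"audio:q1": "今天天气怎么样"}[audio]

def reply_in_first_language(text):    # step 204: reply in Chinese
    return {"今天天气怎么样": "今天下雨"}[text]

def translate_to_english(text):       # step 205: machine translation
    return {"今天下雨": "It will rain today"}[text]

def synthesize(text):                 # step 206: speech synthesis
    return f"<audio:{text}>"

def bilingual_pipeline(audio):
    text = recognize(audio)
    english = translate_to_english(reply_in_first_language(text))
    return synthesize(english), english   # step 207: play voice, show text

voice, screen_text = bilingual_pipeline("audio:q1")
print(screen_text)
```

On a device without a display, only the first element of the returned pair would be used, matching the smart-speaker note above.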

The present disclosure also provides an interaction apparatus, as shown in fig. 3, the apparatus including:

the mode setting module 301 is configured to enter a bilingual interaction mode after receiving a bilingual interaction instruction;

a receiving module 302 configured to receive a first voice message from a user, determine a first language based on the first voice message;

a generating module 303 configured to generate a reply message in a second language based on the first voice message, the second language being different from the first language;

a sending module 304 configured to send the reply message in the second language to the user.

In an alternative embodiment, the apparatus further comprises a second language determination module configured to:

acquiring the setting information of the user about the second language;

determining the second language based on the setting information.

In an alternative embodiment, the second language determination module is further configured to:

and when the voice setting instruction of the user for the second language is determined to be received, acquiring the setting information of the second language based on the voice setting instruction.

In an alternative embodiment, the generating module 303 is further configured to:

generating a first language reply message based on the first voice message;

generating the second language reply message based on the first language reply message.

In an alternative embodiment, the generating module 303 is further configured to:

translating the first voice message to generate a second language message;

generating the second language reply message based on the second language message.

In an alternative embodiment, the sending module 304 is further configured to:

sending the reply message in the second language to the user in a voice form; and/or

sending the reply message in the second language to the user in a text form.

In an alternative embodiment, the sending module 304 is further configured to:

generating the second language reply message in a voice form based on the second language reply message in a text form;

and sending the second language reply message in a voice form to the user.

With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

With this method, a bilingual interaction instruction is received and a bilingual interaction mode is entered; a first voice message is then received from the user and a first language is determined; a second language reply message is generated based on the first voice message and sent to the user. In this method, the second language is different from the first language, i.e., the language of the message the voice assistant receives differs from the language of the reply it sends. This allows a user who has not yet mastered the second language, and therefore cannot ask questions in it, to learn the second language by listening to or reading reply messages in that language. A user who wants listening practice can set the reply language of the bilingual intelligent voice assistant to the target language to be learned. That is, by interacting in daily life with a mobile phone, smart speaker, or other device carrying the voice assistant, the bilingual voice assistant creates a realistic bilingual communication scenario for the user and thus assists the user in learning a foreign language.

Fig. 4 is a block diagram illustrating an interaction device 400, according to an example embodiment.

Referring to fig. 4, the apparatus 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.

The processing component 402 generally controls overall operation of the apparatus 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.

The memory 404 is configured to store various types of data to support operations at the device 400. Examples of such data include instructions for any application or method operating on the device 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.

The power component 406 provides power to the various components of the apparatus 400. The power component 406 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 400.

The multimedia component 408 includes a screen that provides an output interface between the apparatus 400 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the apparatus 400 is in an operational mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.

The audio component 410 is configured to output and/or input audio signals. For example, audio component 410 includes a Microphone (MIC) configured to receive external audio signals when apparatus 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.

The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.

The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the apparatus 400. For example, the sensor component 414 can detect the open/closed state of the apparatus 400 and the relative positioning of components, such as the display and keypad of the apparatus 400; the sensor component 414 can also detect a change in position of the apparatus 400 or of a component of the apparatus 400, the presence or absence of user contact with the apparatus 400, the orientation or acceleration/deceleration of the apparatus 400, and a change in the temperature of the apparatus 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.

The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.

In an exemplary embodiment, the apparatus 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.

In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the apparatus 400 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.

A non-transitory computer readable storage medium having instructions therein, which when executed by a processor of a mobile terminal, enable the mobile terminal to perform a method of interaction, the method comprising: receiving a bilingual interaction instruction, and entering a bilingual interaction mode; receiving a first voice message from a user, and determining a first language based on the first voice message; generating a second language reply message based on the first voice message, the second language being different from the first language; and sending the reply message in the second language to the user.

Fig. 5 is a block diagram illustrating an interaction device 500, according to an example embodiment. For example, the apparatus 500 may be provided as a server. Referring to fig. 5, the apparatus 500 includes a processing component 522 that further includes one or more processors and memory resources, represented by memory 532, for storing instructions, such as applications, that are executable by the processing component 522. The application programs stored in memory 532 may include one or more modules that each correspond to a set of instructions. Further, the processing component 522 is configured to execute instructions to perform the above-described method: receiving a bilingual interaction instruction, and entering a bilingual interaction mode; receiving a first voice message from a user, and determining a first language based on the first voice message; generating a second language reply message based on the first voice message, the second language being different from the first language; and sending the reply message in the second language to the user.

The apparatus 500 may also include a power component 526 configured to perform power management of the apparatus 500, a wired or wireless network interface 550 configured to connect the apparatus 500 to a network, and an input/output (I/O) interface 558. The apparatus 500 may operate based on an operating system stored in the memory 532, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.

Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
