Voice interaction method and device and robot

Document No.: 1393442 · Publication date: 2020-02-28

Abstract: This technology, "Voice interaction method, device and robot", was devised by Xiong Youjun and Zhang Junjian on 2018-08-02. The invention is applicable to the technical field of intelligent robots and provides a voice interaction method, a voice interaction device, and a robot. The method comprises: enhancing the corresponding microphone in a microphone array according to the user's position; receiving the user's voice instruction via that microphone and determining a voice signal noise value of the voice instruction; determining a distribution mode for the voice instruction according to the voice signal noise value; and sending the voice instruction to the corresponding application program for processing according to the distribution mode. Because embodiments of the invention allow voice interaction with the robot directly, without a wake-up word, the interaction is simple and convenient and the user experience is good.

1. A method of voice interaction, comprising:

enhancing a corresponding microphone in the microphone array according to the user position;

receiving a voice instruction of the user via the microphone, and determining a voice signal noise value of the voice instruction;

and determining a distribution mode of the voice instruction according to the noise value of the voice signal, and distributing the voice instruction to a corresponding application program for processing according to the distribution mode.

2. The voice interaction method according to claim 1, wherein the determining a distribution mode of the voice instruction according to the voice signal noise value and distributing the voice instruction to a corresponding application program for processing according to the distribution mode comprises:

if the noise value of the voice signal is larger than a preset threshold value, determining that the distribution mode of the voice instruction is a noisy mode;

identifying the instruction semantics of the voice instruction in the noisy mode;

if it is determined that voice broadcasting is not currently being performed, distributing the voice instruction to the application program corresponding to the instruction semantics for processing;

and if it is determined that voice broadcasting is currently being performed, ending the voice instruction distribution process.

3. The voice interaction method according to claim 1 or 2, wherein the determining a distribution mode of the voice instruction according to the voice signal noise value and distributing the voice instruction to a corresponding application program according to the distribution mode comprises:

if the noise value of the voice signal is smaller than or equal to the preset threshold value, determining that the distribution mode of the voice instruction is a quiet mode;

in the quiet mode, recognizing instruction semantics of the voice instruction;

if it is determined that voice broadcasting is not currently being performed, distributing the voice instruction to the application program corresponding to the instruction semantics for processing;

and if it is determined that voice broadcasting is currently being performed, ending the voice broadcasting and distributing the voice instruction to the application program corresponding to the instruction semantics for processing.

4. The method of voice interaction of claim 1, wherein the enhancing corresponding microphones of the array of microphones based on user location comprises:

acquiring position parameters of the user, wherein the position parameters comprise azimuth angles and distance values;

determining a first preset number of microphones in the microphone array within a preset angle range of the azimuth according to the azimuth;

selecting a second preset number of microphones from the first preset number of microphones according to the distance value;

enhancing the second preset number of microphones.

5. The voice interaction method of claim 2, wherein the distributing the voice instruction to the application program corresponding to the instruction semantics for processing comprises:

converting the voice instruction into text information;

extracting feature keywords from the text information, and querying a pre-stored database to obtain an application program interface corresponding to the feature keywords;

and sending the text information to the application program corresponding to the application program interface for processing.

6. A voice interaction apparatus, comprising:

the microphone enhancement module is used for enhancing the corresponding microphone in the microphone array according to the position of the user;

the voice signal noise value determining module is used for receiving the voice instruction of the user via the microphone and determining the voice signal noise value of the voice instruction;

and the voice instruction distribution module is used for determining a distribution mode of the voice instruction according to the voice signal noise value and distributing the voice instruction to a corresponding application program for processing according to the distribution mode.

7. The voice interaction device of claim 6, wherein the voice instruction distribution module comprises:

the distribution mode determining unit is used for determining that the distribution mode of the voice instruction is a noisy mode if the noise value of the voice signal is greater than a preset threshold value;

the instruction semantic recognition unit is used for recognizing the instruction semantics of the voice instruction in the noisy mode;

and the voice instruction distribution unit is used for distributing the voice instruction to an application program corresponding to the instruction semantic for processing if the voice broadcasting is not currently performed, and ending the voice instruction distribution process if the voice broadcasting is currently performed.

8. The voice interaction apparatus of claim 6 or 7,

the distribution mode determining unit is further configured to determine that the distribution mode of the voice instruction is a quiet mode if the noise value of the voice signal is less than or equal to the preset threshold;

the instruction semantic recognition unit is further used for recognizing the instruction semantics of the voice instruction in the quiet mode;

the voice instruction distribution unit is further configured to distribute the voice instruction to the application program corresponding to the instruction semantics for processing if it is determined that voice broadcasting is not currently performed, and end voice broadcasting if it is determined that voice broadcasting is currently performed, and distribute the voice instruction to the application program corresponding to the instruction semantics for processing.

9. A robot comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the voice interaction method according to any one of claims 1 to 5 when executing the computer program.

10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method for voice interaction according to any one of claims 1 to 5.

Technical Field

The invention belongs to the technical field of intelligent robots, and particularly relates to a voice interaction method, a voice interaction device and a robot.

Background

With the increasing maturity of artificial intelligence technologies, daily life is becoming intelligent, and various smart devices, such as intelligent robots, are gradually entering people's everyday lives. Voice, as the mainstream mode of interaction between humans and intelligent robots, has clear advantages in convenience and speed.

At present, the traditional voice interaction mode for a robot mainly relies on waking the robot with a wake-up word: the user speaks a specific wake-up word, and when the robot receives and recognizes it, the robot wakes up and enters a human-computer interaction working mode. However, with this approach the user must remember the specific wake-up word in advance; otherwise the robot cannot be woken, which makes the wake-up process cumbersome and the user experience poor.

Disclosure of Invention

In view of this, embodiments of the present invention provide a voice interaction method, a voice interaction device, and a robot, which solve the problem in the prior art that voice interaction must be triggered by a wake-up word: the user has to remember a specific wake-up word in advance, and otherwise cannot wake the robot, making the wake-up process cumbersome and the user experience poor.

In a first aspect of embodiments of the present invention, a method for voice interaction is provided,

enhancing a corresponding microphone in the microphone array according to the user position;

receiving a voice instruction of the user via the microphone, and determining a voice signal noise value of the voice instruction;

and determining a distribution mode of the voice instruction according to the noise value of the voice signal, and distributing the voice instruction to a corresponding application program for processing according to the distribution mode.

In a second aspect of the embodiments of the present invention, a voice interaction apparatus is provided, including:

the microphone enhancement module is used for enhancing the corresponding microphone in the microphone array according to the position of the user;

the voice signal noise value determining module is used for receiving the voice instruction of the user via the microphone and determining the voice signal noise value of the voice instruction;

and the voice instruction distribution module is used for determining a distribution mode of the voice instruction according to the voice signal noise value and distributing the voice instruction to a corresponding application program for processing according to the distribution mode.

In a third aspect of the embodiments of the present invention, there is provided a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above-mentioned voice interaction method when executing the computer program.

In a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is provided, where a computer program is stored, and the computer program, when executed by a processor, implements the steps of the above-mentioned voice interaction method.

Compared with the prior art, embodiments of the present invention have the following beneficial effects. In the voice interaction method, device, and robot provided, the corresponding microphone in the microphone array is enhanced according to the user's position; the user's voice instruction is received via that microphone and its voice signal noise value is determined; the distribution mode of the voice instruction is determined according to that noise value; and the voice instruction is sent to the corresponding application program for processing according to the distribution mode. According to the embodiments of the invention, voice interaction with the robot can be carried out directly, without a wake-up word, so the interaction is simple and convenient and the user experience is good.

Drawings

To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.

Fig. 1 is a schematic flow chart of a voice interaction method according to an embodiment of the present invention;

fig. 2 is a schematic flowchart of a voice interaction method according to another embodiment of the present invention;

FIG. 3 is a flowchart illustrating a voice interaction method according to yet another embodiment of the present invention;

fig. 4 is a flowchart illustrating a voice interaction method according to another embodiment of the present invention;

FIG. 5 is a schematic diagram of orientation parameters of a user and a robot according to an embodiment of the present invention;

fig. 6 is a block diagram of a voice interaction apparatus according to an embodiment of the present invention;

fig. 7 is a schematic block diagram of a robot according to an embodiment of the present invention.

Detailed Description

In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.

It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.

As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".

In order to explain the technical means of the present invention, the following description will be given by way of specific examples.

Referring to fig. 1, fig. 1 is a flowchart illustrating a voice interaction method according to an embodiment of the present invention. The method can be applied to robots, and is specifically detailed as follows:

s101: corresponding microphones in the microphone array are enhanced according to the user position.

In this embodiment, the robot may be provided with a 360° microphone array comprising a plurality of microphones. The robot may also be equipped with a 360° omnidirectional lidar that monitors the surrounding environment in real time for users. When a user enters the monitored area, the lidar generates a sensing signal for that user, and the user's position is determined from the sensing signal.

In particular, one or more microphones of a microphone array directly in front of the user location may be enhanced.

S102: receiving a voice instruction of the user via the microphone, and determining a voice signal noise value of the voice instruction.

In the present embodiment, the voice instruction is an instruction with which the user interacts with the robot; for example, the voice instruction may be "How is the weather today?".

Specifically, a plurality of noise signals are extracted from the voice signal of the voice instruction, and the root mean square of these noise signals is computed to obtain the voice signal noise value.
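As a minimal sketch of that step, the root mean square of the extracted noise samples can be computed as follows. The helper name and the idea of representing the noise signals as a list of amplitude samples are illustrative assumptions, not details from the patent:

```python
import math

def voice_signal_noise_value(noise_samples):
    """Root mean square (RMS) of the noise samples extracted from the
    voice signal. `noise_samples` is a hypothetical list of amplitudes."""
    if not noise_samples:
        return 0.0  # no noise extracted: treat as silence
    return math.sqrt(sum(s * s for s in noise_samples) / len(noise_samples))

# e.g. samples 3 and 4 give sqrt((9 + 16) / 2) = sqrt(12.5)
```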

S103: and determining a distribution mode of the voice instruction according to the noise value of the voice signal, and distributing the voice instruction to a corresponding application program for processing according to the distribution mode.

In this embodiment, the distribution mode is divided into a noisy mode and a quiet mode. In the noisy mode, the voice instruction is not distributed while the robot is performing voice broadcasting and is distributed only when the robot is not broadcasting. In the quiet mode, if the robot is performing voice broadcasting, the broadcasting is ended and the voice instruction is distributed; if the robot is not broadcasting, the instruction is distributed directly.
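The two modes can be sketched as a small decision routine. This is a hedged illustration: the threshold value (taken from the 20 dB example later in the description) and all function names are assumptions:

```python
PRESET_THRESHOLD_DB = 20.0  # example threshold from the description

def distribution_mode(noise_value_db):
    """Noisy mode when the noise value exceeds the preset threshold,
    quiet mode otherwise."""
    return "noisy" if noise_value_db > PRESET_THRESHOLD_DB else "quiet"

def distribute(mode, is_broadcasting, dispatch, end_broadcast):
    """Apply the mode rules: in noisy mode an instruction that arrives
    during a broadcast is discarded; in quiet mode the broadcast is ended
    first and the instruction is distributed anyway.
    Returns True when the instruction was dispatched."""
    if is_broadcasting:
        if mode == "noisy":
            return False        # likely a mis-captured instruction: drop it
        end_broadcast()         # quiet mode: stop the broadcast first
    dispatch()
    return True
```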

Specifically, the application program corresponding to the voice instruction may be determined according to the feature keywords in the voice instruction.

As can be seen from the above description, the corresponding microphone in the microphone array is first enhanced according to the user's position; the user's voice instruction is received via that microphone and its voice signal noise value is determined; the distribution mode of the instruction is determined according to that noise value; and the instruction is sent to the corresponding application program for processing according to the distribution mode. According to the embodiments of the invention, voice interaction with the robot can be carried out directly, without a wake-up word, so the interaction is simple and convenient and the user experience is good.

Referring to fig. 2, fig. 2 is a flowchart illustrating a voice interaction method according to another embodiment of the present invention. On the basis of the above embodiment, the above step S103 is detailed as follows:

s201: and if the noise value of the voice signal is greater than the preset threshold value, determining that the distribution mode of the voice command is a noisy mode.

In this embodiment, the preset threshold may be set according to the requirement, for example, 20 dB.

S202: and under the noisy mode, recognizing the instruction semantics of the voice instruction.

Specifically, in the noisy mode, the voice signal in the voice instruction is first denoised, and the denoised voice signal is then recognized as text information, from which the instruction semantics of the voice instruction are obtained.

S203: if it is determined that voice broadcasting is not currently being performed, distributing the voice instruction to the application program corresponding to the instruction semantics for processing.

S204: if it is determined that voice broadcasting is currently being performed, ending the voice instruction distribution process.

In this embodiment, because the noise value in the noisy mode is large, the voice instruction may have been erroneously captured by the robot when the user did not actually intend to interact with it by voice. Therefore, the voice instruction is not distributed while the robot is performing voice broadcasting; it is distributed only when the robot is not broadcasting (that is, when the robot is idle).

As can be seen from the above description, when the noise value of the voice instruction is large and the noisy mode is determined, the instruction is not distributed while the robot is performing voice broadcasting and is distributed only when the robot is not broadcasting (i.e., is idle). This prevents the robot from distributing an erroneously captured voice instruction to the corresponding application program for processing.

Referring to fig. 3, fig. 3 is a flowchart illustrating a voice interaction method according to still another embodiment of the present invention. On the basis of the foregoing embodiment, the foregoing step S103 may further include:

s301: and if the noise value of the voice signal is less than or equal to the preset threshold value, determining that the distribution mode of the voice command is a quiet mode.

In this embodiment, the preset threshold may be set according to the requirement, for example, 20 dB.

S302: in the quiet mode, the instruction semantics of the voice instructions are recognized.

Specifically, in the quiet mode, the voice signal in the voice instruction is directly recognized as text information, from which the instruction semantics of the voice instruction are obtained.

S303: if it is determined that voice broadcasting is not currently being performed, distributing the voice instruction to the application program corresponding to the instruction semantics for processing.

S304: if it is determined that voice broadcasting is currently being performed, ending the voice broadcasting and distributing the voice instruction to the application program corresponding to the instruction semantics for processing.

In this embodiment, because the noise value in the quiet mode is small, the voice instruction is taken to have been captured when the user genuinely intended to interact with the robot by voice. Therefore, if the robot is performing voice broadcasting, the broadcasting is ended and the voice instruction is distributed; if the robot is not broadcasting, the instruction is distributed directly.

As can be seen from the above description, when the noise value of the voice instruction is small and the quiet mode is determined, the robot ends any ongoing voice broadcasting and distributes the instruction, or distributes it directly when no broadcasting is in progress. This prevents the robot from missing a voice instruction and failing to interact with the user.

Referring to fig. 4, fig. 4 is a flowchart illustrating a voice interaction method according to another embodiment of the present invention. On the basis of the above embodiment, the above step S101 is detailed as follows:

s401: position parameters of a user are acquired, wherein the position parameters comprise azimuth angles and distance values.

In this embodiment, the azimuth angle is the horizontal angle, measured clockwise, from the north direction at a given point to the target direction line. Here it refers to the azimuth of the user relative to the robot, and the distance value refers to the distance between the user and the robot. The preset angle range can be set as required.

S402: and determining a first preset number of microphones in the microphone array within a preset angle range of the azimuth according to the azimuth.

Referring to fig. 5, fig. 5 is a schematic diagram of the position parameters of the user and the robot according to an embodiment of the present invention, in which the azimuth angle is 45°, the preset angle range is [-30°, 30°], and the first preset number is 5.

S403: and selecting a second preset number of microphones from the first preset number of microphones according to the distance value.

In this embodiment, the second preset number is in direct proportion to the distance value: when the distance value is small, fewer microphones are selected, which still guarantees voice interaction between the user and the robot while reducing energy consumption. For example, referring to fig. 5, the second preset number may be 3 when the distance value is 10 meters and 1 when the distance value is 1 meter.

S404: enhancing a second preset number of microphones.

As can be seen from the above description, in this embodiment the number of microphones to be enhanced is determined according to the azimuth angle and distance value in the position parameters. This ensures that enough microphones are enhanced to meet the requirements of recognizing the user's voice, while avoiding enhancing so many microphones that power consumption rises unnecessarily.
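Steps S401 to S404 might be sketched as follows. The microphone layout, the proportional mapping from distance to microphone count, and all names are illustrative assumptions; the description's example (3 mics at 10 m, 1 mic at 1 m) is only approximated:

```python
def select_microphones(mics, azimuth_deg, distance_m,
                       angle_range_deg=30.0, first_preset_number=5):
    """mics: list of (mic_id, mounting_angle_deg) around the robot.
    Keep mics whose mounting angle lies within ±angle_range_deg of the
    user's azimuth (the first preset number), then keep a count roughly
    proportional to the distance value (the second preset number:
    fewer mics when the user is close)."""
    def ang_diff(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    # S402: first preset number of mics inside the azimuth angle range
    candidates = sorted(
        (m for m in mics if ang_diff(m[1], azimuth_deg) <= angle_range_deg),
        key=lambda m: ang_diff(m[1], azimuth_deg),
    )[:first_preset_number]

    # S403: second preset number, proportional to the distance value
    second_preset_number = max(1, min(
        first_preset_number,
        round(distance_m / 10.0 * first_preset_number)))
    return candidates[:second_preset_number]  # S404: these mics get enhanced
```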

In an embodiment of the present invention, the process in step S203 of distributing the voice instruction to the application program corresponding to the instruction semantics includes:

converting the voice instruction into text information;

extracting feature keywords from the text information, and querying a pre-stored database to obtain the application program interface corresponding to the feature keywords;

and sending the text information to the application program corresponding to the application program interface for processing.

In this embodiment, the pre-stored database of the robot stores, in advance, the correspondence between feature keywords and application program interfaces.
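That lookup can be sketched as a simple keyword-to-interface table. The keywords and interface names below are invented for illustration and are not from the patent:

```python
# Hypothetical pre-stored database: feature keyword -> application program interface
PRESTORED_DATABASE = {
    "weather": "weather_app.query",
    "music": "music_app.play",
}

def route_instruction(text_information, database=PRESTORED_DATABASE):
    """Return the application program interface for the first feature
    keyword found in the recognized text, or None when no keyword in
    the database matches."""
    for keyword, interface in database.items():
        if keyword in text_information:
            return interface
    return None
```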

Fig. 6 is a block diagram of a voice interaction apparatus according to an embodiment of the present invention, which corresponds to the voice interaction method of the foregoing embodiment. For convenience of explanation, only portions related to the embodiments of the present invention are shown. Referring to fig. 6, the apparatus is applied to a robot, and includes: a microphone enhancement module 501, a voice signal noise value determination module 502 and a voice instruction distribution module 503.

The microphone enhancement module 501 is configured to enhance a corresponding microphone in the microphone array according to the user position;

a voice signal noise value determining module 502, configured to receive the voice instruction of the user via the microphone and determine the voice signal noise value of the voice instruction;

the voice instruction distribution module 503 is configured to determine a distribution mode of the voice instruction according to the voice signal noise value, and distribute the voice instruction to a corresponding application program according to the distribution mode for processing.

Referring to fig. 6, in an embodiment of the present invention, the voice instruction distribution module 503 includes:

a distribution mode determining unit 5031, configured to determine, if the noise value of the voice signal is greater than a preset threshold, that a distribution mode of the voice instruction is a noisy mode;

an instruction semantic recognition unit 5032, configured to recognize an instruction semantic of the voice instruction in the noisy mode;

a voice instruction distributing unit 5033, configured to distribute the voice instruction to an application program corresponding to the instruction semantic for processing if it is determined that voice broadcast is not currently performed, and end the voice instruction distributing process if it is determined that voice broadcast is currently performed.

Referring to fig. 6, in an embodiment of the present invention, the distribution mode determining unit 5031 is further configured to determine the distribution mode of the voice instruction as a quiet mode if the noise value of the voice signal is less than or equal to the preset threshold;

the instruction semantic recognition unit 5032 is further configured to recognize the instruction semantics of the voice instruction in the quiet mode;

the voice instruction distributing unit 5033 is further configured to distribute the voice instruction to the application program corresponding to the instruction semantic for processing if it is determined that voice broadcasting is not currently performed, and end voice broadcasting if it is determined that voice broadcasting is currently performed, and distribute the voice instruction to the application program corresponding to the instruction semantic for processing.

Referring to fig. 6, in an embodiment of the present invention, the microphone enhancement module 501 is specifically configured to acquire location parameters of the user, where the location parameters include an azimuth angle and a distance value;

determining a first preset number of microphones in the microphone array within the azimuth preset angle range according to the azimuth;

selecting a second preset number of microphones from the first preset number of microphones according to the distance value;

enhancing the second preset number of microphones.

Referring to fig. 6, in an embodiment of the present invention, the voice instruction distribution unit 5033 is further configured to:

convert the voice instruction into text information;

extract feature keywords from the text information, and query a pre-stored database to obtain the application program interface corresponding to the feature keywords;

and send the text information to the application program corresponding to the application program interface for processing.

Referring to fig. 7, fig. 7 is a schematic block diagram of a robot according to an embodiment of the present invention. The terminal 600 shown in fig. 7 may include one or more processors 601, one or more input devices 602, one or more output devices 603, and one or more memories 604, all connected to each other via a communication bus 605. The memory 604 stores a computer program comprising program instructions, and the processor 601 executes the program instructions stored in the memory 604. The processor 601 is configured to call the program instructions to perform the functions of the modules/units in the device embodiments described above, for example, the functions of the modules 501 to 503 shown in fig. 6.

It should be understood that, in the embodiment of the present invention, the Processor 601 may be a Central Processing Unit (CPU), and the Processor may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.

The input device 602 may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device 603 may include a display (LCD, etc.), a speaker, etc.

The memory 604 may include both read-only memory and random access memory, and provides instructions and data to the processor 601. A portion of the memory 604 may also include non-volatile random access memory. For example, the memory 604 may also store device type information.

In a specific implementation, the processor 601, the input device 602, and the output device 603 described in this embodiment of the present invention may execute the implementations described in the embodiments of the voice interaction method provided herein, and may also execute the implementation of the terminal described in this embodiment, which is not repeated here.

In another embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program comprising program instructions; when executed by a processor, the program instructions implement all or part of the procedures of the above method embodiments. Those procedures may also be carried out by a computer program instructing associated hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately added or removed according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.

The computer-readable storage medium may be an internal storage unit of the terminal according to any of the foregoing embodiments, for example, a hard disk or a memory of the terminal. The computer-readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the terminal. The computer-readable storage medium is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.

Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functions. Whether such functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.

It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed terminal and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. The division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may also be electrical, mechanical, or other forms of connection.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.

In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

While the invention has been described with reference to specific embodiments, the invention is not limited thereto; any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and such modifications or substitutions shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
