Multi-device voice wake-up implementation method and device, electronic device and medium

Document No.: 1002367  Publication date: 2020-10-23

Note: This technique, "Multi-device voice wake-up implementation method and device, electronic device and medium", was designed and created by 常先堂, 蒋习旺, and 寇鹏 on 2020-07-20. Abstract: The present disclosure provides a multi-device voice wake-up implementation method and device, an electronic device, and a medium, and relates to the field of voice interaction. The method comprises: acquiring audio signals containing a corresponding wake-up word, the audio signals being emitted from an audio signal source and received by a plurality of different microphones in a first device; determining distance differences between the different microphones and the audio signal source based on phase differences of the audio signals received by the different microphones; locating a position of the audio signal source based on the distance differences; determining the geometric center of the plurality of different microphones in the first device as the position of the first device; and controlling the first device to respond according to the position of the audio signal source and the position of the first device.

1. A multi-device voice wake-up implementation method, comprising the following steps:

acquiring audio signals containing a corresponding wake-up word, the audio signals being emitted from an audio signal source and received by a plurality of different microphones in a first device;

determining distance differences between the different microphones and the audio signal source based on phase differences of the audio signals received by the different microphones;

locating a position of the audio signal source based on the distance differences;

locating a position of the first device according to positions of the plurality of different microphones in the first device; and

controlling the first device to respond according to the position of the audio signal source and the position of the first device.

2. The method of claim 1, wherein determining the distance differences between the different microphones and the audio signal source based on the phase differences of the audio signals received by the different microphones comprises:

acquiring phase differences of audio signals received by microphones in at least two different groups, wherein each group comprises two microphones; and

determining a distance difference between the two microphones in each group and the audio signal source according to the phase difference and a sampling frequency at which the first device acquires the audio signals.

3. The method of claim 2, wherein locating the position of the audio signal source based on the distance differences comprises:

drawing a hyperbola according to the distance difference corresponding to each group; and

locating the position of the audio signal source based on the intersection of the hyperbolas.

4. The method of claim 1, wherein controlling the first device to respond according to the position of the audio signal source and the position of the first device comprises:

determining a distance between the first device and the audio signal source based on the position of the audio signal source and the position of the first device;

receiving a distance between a second device and the audio signal source, wherein the second device is a device in the local area network other than the first device;

comparing the distance between the first device and the audio signal source with the received distance between the second device and the audio signal source; and

controlling the first device to respond in response to the device corresponding to the minimum distance being the first device.

5. The method of claim 3, further comprising performing the following operations in a loop:

causing a plurality of different microphones of the first device to receive audio signals emitted from a second device, wherein the second device is a device within the local area network other than the first device;

acquiring phase differences of audio signals received by microphones in at least two different groups, wherein each group comprises two microphones;

determining a distance difference between the two microphones in each group and the second device according to the phase difference and a sampling frequency at which the first device acquires the audio signals; and

drawing hyperbolas according to the distance difference corresponding to each group so as to locate and store the position of the second device based on the intersection of the hyperbolas, until the first device has stored the positions of all second devices.

6. The method of claim 5, wherein controlling the first device to respond according to the position of the audio signal source and the position of the first device comprises:

sequentially determining, for each device pair, a first included angle between the device pair and the audio signal source with the first device as the vertex, according to the position of the first device, the position of the second device, and the position of the audio signal source, wherein a device pair is formed by the first device and each currently online second device;

sequentially determining, for each device pair, a second included angle between the device pair and the audio signal source with the second device as the vertex, according to the position of the first device, the position of the second device, and the position of the audio signal source; and

controlling the first device to respond according to the first included angle and the second included angle.

7. The method of claim 6, wherein controlling the first device to respond according to the first included angle and the second included angle comprises:

in response to the first included angle in each device pair being greater than the second included angle and the angle difference being greater than a threshold, determining that the first device is the device closest to the audio signal source and causing the first device to respond; and

in response to the first included angle in each device pair being greater than the second included angle and there being one or more device pairs for which the angle difference is not greater than the threshold, performing the following operations:

obtaining a volume value of the second device in the one or more device pairs, wherein the volume value is the average amplitude of the audio signals containing the corresponding wake-up word received by all microphones in the device;

comparing the volume value received by the first device with the volume value received by the second device in each of the one or more device pairs; and

controlling the first device to respond in response to the volume value received by the first device being greater than the volume values received by the second devices in the one or more device pairs.

8. The method of claim 4 or 7, further comprising:

sending an IP address and a port number of the first device to the second device by UDP multicast so as to communicate with the second device based on the IP address and the port number; and

receiving heartbeat packets sent periodically by the second device by UDP multicast so as to determine that the second device is currently online.

9. The method of claim 1, wherein the first device has at least 3 microphones.

10. A multi-device voice wake-up implementing device, comprising:

an audio receiving unit configured to acquire audio signals containing a corresponding wake-up word, the audio signals being emitted from an audio signal source and received by a plurality of different microphones in a first device;

a calculation unit configured to determine distance differences between the different microphones and the audio signal source based on phase differences of the audio signals received by the different microphones;

a first positioning unit configured to locate a position of the audio signal source based on the distance differences;

a second positioning unit configured to locate a position of the first device according to positions of the plurality of different microphones in the first device; and

a determining unit configured to control the first device to respond according to the position of the audio signal source and the position of the first device.

11. The apparatus of claim 10, wherein the calculation unit is configured to:

acquire phase differences of audio signals received by microphones in at least two different groups, wherein each group comprises two microphones; and

determine a distance difference between the two microphones in each group and the audio signal source according to the phase difference and a sampling frequency at which the first device acquires the audio signals.

12. The apparatus of claim 11, wherein the first positioning unit is configured to:

draw a hyperbola according to the distance difference corresponding to each group; and

locate the position of the audio signal source based on the intersection of the hyperbolas.

13. The apparatus of claim 10, wherein the determining unit is configured to:

determine a distance between the first device and the audio signal source based on the position of the audio signal source and the position of the first device;

receive a distance between a second device and the audio signal source, wherein the second device is a device in the local area network other than the first device;

compare the distance between the first device and the audio signal source with the received distance between the second device and the audio signal source; and

control the first device to respond in response to the device corresponding to the minimum distance being the first device.

14. The apparatus of claim 12, further comprising a third positioning unit configured to cyclically:

cause a plurality of different microphones of the first device to receive audio signals emitted from a second device, wherein the second device is a device in the local area network other than the first device;

acquire phase differences of audio signals received by microphones in at least two different groups, wherein each group comprises two microphones;

determine a distance difference between the two microphones in each group and the second device according to the phase difference and a sampling frequency at which the first device acquires the audio signals; and

draw hyperbolas according to the distance difference corresponding to each group so as to locate and store the position of the second device based on the intersection of the hyperbolas, until the first device has stored the positions of all second devices.

15. The apparatus of claim 14, wherein the determining unit is configured to:

sequentially determine, for each device pair, a first included angle between the device pair and the audio signal source with the first device as the vertex, according to the position of the first device, the position of the second device, and the position of the audio signal source, wherein a device pair is formed by the first device and each currently online second device;

sequentially determine, for each device pair, a second included angle between the device pair and the audio signal source with the second device as the vertex, according to the position of the first device, the position of the second device, and the position of the audio signal source; and

control the first device to respond according to the first included angle and the second included angle.

16. The apparatus of claim 15, wherein the determining unit is further configured to:

in response to the first included angle in each device pair being greater than the second included angle and the angle difference being greater than a threshold, determine that the first device is the device closest to the audio signal source and cause the first device to respond; and

in response to the first included angle in each device pair being greater than the second included angle and there being one or more device pairs for which the angle difference is not greater than the threshold, perform the following operations:

obtain a volume value of the second device in the one or more device pairs, wherein the volume value is the average amplitude of the audio signals containing the corresponding wake-up word received by all microphones in the device;

compare the volume value received by the first device with the volume value received by the second device in each of the one or more device pairs; and

control the first device to respond in response to the volume value received by the first device being greater than the volume values received by the second devices in the one or more device pairs.

17. The apparatus of claim 13 or 16, further comprising a communication unit configured to:

send an IP address and a port number of the first device to the second device by UDP multicast so as to communicate with the second device based on the IP address and the port number; and

receive heartbeat packets sent periodically by the second device by UDP multicast so as to determine that the second device is currently online.

18. The apparatus of claim 10, wherein the first device has at least 3 microphones.

19. An electronic device, comprising:

a processor; and

a memory storing a program comprising instructions that, when executed by the processor, cause the processor to perform the method of any of claims 1-9.

20. A computer readable storage medium storing a program, the program comprising instructions that when executed by a processor of an electronic device cause the electronic device to perform the method of any of claims 1-9.

Technical Field

The present disclosure relates to the field of voice interaction, and in particular to a multi-device voice wake-up implementation method and device, an electronic device, and a medium.

Background

Smart devices typically support voice wake-up and can be woken when a user speaks the corresponding wake-up word. However, in a home local area network (LAN) environment with multiple smart devices from the same manufacturer, all of these devices are conventionally woken up by voice at the same time, which is often not what the user wants.

The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.

Disclosure of Invention

According to one aspect of the present disclosure, a multi-device voice wake-up implementation method is provided, including: acquiring audio signals containing a corresponding wake-up word, the audio signals being emitted from an audio signal source and received by a plurality of different microphones in a first device; determining distance differences between the different microphones and the audio signal source based on phase differences of the audio signals received by the different microphones; locating a position of the audio signal source based on the distance differences; locating a position of the first device based on positions of the plurality of different microphones in the first device; and controlling the first device to respond according to the position of the audio signal source and the position of the first device.

According to another aspect of the present disclosure, there is provided a multi-device voice wake-up implementing device, including: an audio receiving unit configured to acquire audio signals containing a corresponding wake-up word, the audio signals being emitted from an audio signal source and received by a plurality of different microphones in a first device; a calculation unit configured to determine distance differences between the different microphones and the audio signal source based on phase differences of the audio signals received by the different microphones; a first positioning unit configured to locate a position of the audio signal source based on the distance differences; a second positioning unit configured to locate a position of the first device according to positions of the plurality of different microphones in the first device; and a determining unit configured to control the first device to respond according to the position of the audio signal source and the position of the first device.

According to another aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory storing a program comprising instructions that, when executed by the processor, cause the processor to perform the multi-device voice wake-up implementation method described in the present disclosure.

According to another aspect of the present disclosure, there is provided a computer readable storage medium storing a program, the program comprising instructions that, when executed by a processor of an electronic device, cause the electronic device to perform a multi-device voice wake-up implementation method described in the present disclosure.

According to one aspect of the present disclosure, the appropriate device in a multi-device environment can be preferentially woken up, preventing all devices in the environment from being woken up simultaneously.

According to another aspect of the present disclosure, errors arising from individual differences between devices, such as differences in component aging and consistency, can be effectively avoided.

These and other aspects of the disclosure will be apparent from and elucidated with reference to the embodiments described hereinafter.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the embodiments and, together with the description, serve to explain the exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.

FIG. 1 is a flowchart illustrating a multi-device voice wake-up implementation method according to an exemplary embodiment;

FIG. 2 is a flowchart illustrating controlling a first device to respond according to an exemplary embodiment;

FIG. 3 is a schematic diagram illustrating a first included angle and a second included angle according to an exemplary embodiment;

FIG. 4 is a schematic diagram illustrating a multi-device voice wake-up implementing device according to an exemplary embodiment; and

FIG. 5 is a block diagram showing an exemplary computing device to which the exemplary embodiments can be applied.

Detailed Description

In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.

The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.

Existing smart devices (also simply referred to as devices below, e.g., smart speakers) generally support voice wake-up. For example, if the wake-up word is set to "Xiaodu" (小度), the smart speaker is woken by voice when the user says "Xiaodu". However, in a home LAN environment with multiple smart devices, all of these devices support voice wake-up and respond simultaneously when the user speaks the wake-up word, which often does not match the user's expectation. The user typically expects exactly one smart device (e.g., the one closest to the user) to wake up while the other smart devices remain silent.

FIG. 1 is a flowchart illustrating a multi-device voice wake-up implementation method according to an exemplary embodiment. As shown in FIG. 1, the method includes: acquiring audio signals containing a corresponding wake-up word, the audio signals being emitted from an audio signal source and received by a plurality of different microphones in a first device (step 110); determining distance differences between the different microphones and the audio signal source based on phase differences of the audio signals received by the different microphones (step 120); locating a position of the audio signal source based on the distance differences (step 130); locating a position of the first device based on positions of the plurality of different microphones in the first device (step 140); and controlling the first device to respond according to the position of the audio signal source and the position of the first device (step 150).

In step 110, audio signals containing a corresponding wake-up word, emitted from an audio signal source and received by a plurality of different microphones in the first device, are acquired.

The audio signal source is the user: the user speaks the corresponding wake-up word, emitting an audio signal that can be received by the microphones of the device.

In step 120, distance differences between the different microphones and the audio signal source are determined based on phase differences of the audio signals received by the different microphones.

According to some embodiments, determining the distance differences between the different microphones and the audio signal source based on the phase differences of the audio signals received by the different microphones comprises: acquiring phase differences of audio signals received by microphones in at least two different groups, wherein each group comprises two microphones; and determining a distance difference between the two microphones in each group and the audio signal source according to the phase difference and the sampling frequency at which the first device acquires the audio signals.

According to some embodiments, a correlation function is used to compute the correlation between the audio signals received by the two microphones in each group. The lag at which the correlation coefficient is largest gives the time difference between the two received signals, and the distance difference between the two microphones and the audio signal source is then obtained from this time difference and the speed of sound.
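The cross-correlation step described above can be sketched as follows. This is a minimal illustration, not the disclosure's actual implementation: the function name, the synthetic pulse signal, and the 16 kHz sampling rate are all assumptions for the example, and the speed of sound is taken as 343 m/s.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def tdoa_distance_difference(sig_a, sig_b, fs):
    """Estimate the distance difference between two microphones and a
    source from the lag that maximizes the cross-correlation of their
    signals. fs is the sampling frequency in Hz."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    # Lag (in samples) at which the correlation is largest.
    lag = np.argmax(corr) - (len(sig_b) - 1)
    time_diff = lag / fs                  # seconds
    return time_diff * SPEED_OF_SOUND     # meters

# Synthetic check: one microphone hears the same pulse 10 samples later.
fs = 16000
pulse = np.zeros(256)
pulse[40:48] = 1.0
delayed = np.roll(pulse, 10)
dd = tdoa_distance_difference(delayed, pulse, fs)
```

A positive result means `sig_a` lags `sig_b`, i.e., the first microphone is farther from the source by that many meters.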

According to some embodiments, the first device has at least 3 microphones.

In some examples, if the device includes three microphones M1, M2, and M3, the at least two different groups are at least two of the groups (M1, M2), (M1, M3), and (M2, M3).

In step 130, the position of the audio signal source is located based on the distance differences.

According to some embodiments, locating the position of the audio signal source based on the distance differences comprises: drawing a hyperbola according to the distance difference corresponding to each group; and locating the position of the audio signal source based on the intersection of the hyperbolas.

A hyperbola is the locus of points whose difference in distance to two fixed points (called the foci) is constant. Therefore, taking the two microphones of each group as the foci, the locus of points whose distance difference to the two microphones equals the calculated distance difference (i.e., the set of possible audio signal source positions) can be drawn as a hyperbola. The intersection of two different hyperbolas is the located position of the audio signal source.
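Numerically, "intersecting the hyperbolas" can be done by searching for the point whose distance differences best match the measured values. The grid search below is a simple stand-in for an analytic intersection; the microphone layout, search extent, and grid step are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def locate_source(mic_pairs, distance_diffs, extent=5.0, step=0.01):
    """Find the point whose distance differences to each microphone pair
    best match the measured values -- a numerical stand-in for
    intersecting the hyperbolas.
    mic_pairs: list of ((x1, y1), (x2, y2)) microphone coordinates.
    distance_diffs: measured d(p, m1) - d(p, m2) for each pair, meters."""
    xs = np.arange(-extent, extent, step)
    ys = np.arange(-extent, extent, step)
    gx, gy = np.meshgrid(xs, ys)
    residual = np.zeros_like(gx)
    for (m1, m2), dd in zip(mic_pairs, distance_diffs):
        d1 = np.hypot(gx - m1[0], gy - m1[1])
        d2 = np.hypot(gx - m2[0], gy - m2[1])
        residual += (d1 - d2 - dd) ** 2
    i, j = np.unravel_index(np.argmin(residual), residual.shape)
    return gx[i, j], gy[i, j]

# Synthetic check: a source at (1.0, 2.0) and three closely spaced mics.
mics = [(0.0, 0.0), (0.5, 0.0), (0.25, 0.4)]
src = (1.0, 2.0)

def dist(m):
    return float(np.hypot(src[0] - m[0], src[1] - m[1]))

pairs = [(mics[0], mics[1]), (mics[0], mics[2])]
diffs = [dist(p[0]) - dist(p[1]) for p in pairs]
x, y = locate_source(pairs, diffs)
```

Note that two hyperbolas can intersect in more than one point; in practice a third group, or prior knowledge of the room geometry, disambiguates the solution.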

In step 140, the position of the first device is located based on the positions of the plurality of different microphones in the first device.

According to some embodiments, the geometric center of the polygon formed by the plurality of different microphones may be taken as the position of the first device. Alternatively, in some examples, the centroid of the polygon formed by the plurality of different microphones may be taken as the position of the first device. Alternatively, in some examples, the position of the first device may be located as the average of the coordinates of the plurality of different microphones. Alternatively, in some examples, the position of one of the microphones may be taken as the position of the first device. It should be understood that other methods of locating the position of the first device are also possible.
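The coordinate-average option above is one line of arithmetic; a minimal sketch (the triangular microphone layout is an assumed example):

```python
def device_position(mic_positions):
    """Take the average of the microphone coordinates as the device's
    position, one of the options described above."""
    n = len(mic_positions)
    return (sum(p[0] for p in mic_positions) / n,
            sum(p[1] for p in mic_positions) / n)

# Three microphones arranged in a triangle (illustrative coordinates).
center = device_position([(0.0, 0.0), (0.4, 0.0), (0.2, 0.3)])
```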

In step 150, the first device is controlled to respond based on the location of the source of the audio signal and the location of the first device.

According to some embodiments, controlling the first device to respond according to the position of the audio signal source and the position of the first device comprises: determining a distance between the first device and the audio signal source based on the position of the audio signal source and the position of the first device; receiving a distance between a second device and the audio signal source, wherein the second device is a device in the local area network other than the first device; comparing the distance between the first device and the audio signal source with the received distance between the second device and the audio signal source; and controlling the first device to respond in response to the device corresponding to the minimum distance being the first device.
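The minimum-distance arbitration just described reduces to a comparison each device runs locally. A minimal sketch (device identifiers and distances are made up for the example):

```python
def should_respond(own_id, own_distance, peer_distances):
    """Respond only if this device has the smallest distance to the
    audio signal source among all devices in the LAN.
    peer_distances: {device_id: distance} values received from peers."""
    all_distances = dict(peer_distances)
    all_distances[own_id] = own_distance
    closest = min(all_distances, key=all_distances.get)
    return closest == own_id

# The device 1.2 m from the source answers; the others stay silent.
result = should_respond("speaker-a", 1.2, {"speaker-b": 2.5, "speaker-c": 3.1})
```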

According to some embodiments, the method further comprises: sending the IP address and port number of the first device to the second device by UDP multicast so as to communicate with the second device based on the IP address and the port number; and receiving heartbeat packets sent periodically by the second device by UDP multicast so as to determine that the second device is currently online.

In some examples, the LAN IP address and port number of the first device are packaged in advance and sent out by UDP multicast; a second device in the LAN receives and unpacks the UDP multicast packet, acquiring and storing the IP address and port number of the first device. That is, each smart device in the LAN sends out a preset UDP multicast packet, and every other smart device that receives the packet unpacks it to obtain the sender's LAN IP address and PORT number. In this way, all smart devices in the LAN can discover one another. Having discovered each other (i.e., knowing each other's LAN IP address and PORT number), the smart devices in the LAN can then transmit any needed information to one another as data packets.
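The discovery exchange above can be sketched as follows. The multicast group, port, and payload field names are all invented for illustration; the disclosure does not specify a packet format, so a JSON payload is used here as one simple choice.

```python
import json
import socket
import struct

# Hypothetical multicast group and port for the example.
MCAST_GROUP, MCAST_PORT = "239.255.42.42", 4242

def make_announcement(device_id, ip, port):
    """Build the discovery payload a device multicasts so that peers
    can learn its IP address and port (field names are illustrative)."""
    return json.dumps({"id": device_id, "ip": ip, "port": port}).encode()

def send_announcement(payload):
    """Multicast the payload on the LAN (sketch; needs a real network)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", 1))  # keep packets on the local LAN
    sock.sendto(payload, (MCAST_GROUP, MCAST_PORT))
    sock.close()

def parse_announcement(payload):
    """Unpack a received multicast packet into (device_id, ip, port)."""
    msg = json.loads(payload.decode())
    return msg["id"], msg["ip"], msg["port"]

packet = make_announcement("speaker-a", "192.168.1.20", 50000)
info = parse_announcement(packet)
```

After parsing, a peer stores the `(ip, port)` pair and later sends its distance value to that address by UDP unicast.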

In some examples, each smart device computes its own distance to the audio signal source through the above process and sends the computed distance value to the other smart devices in the LAN by UDP unicast, using each peer's IP address and port number. As a result, every smart device in the LAN holds its own distance to the audio signal source as well as the distances of all other smart devices. Each smart device then compares all stored distance values and finds the minimum; the device corresponding to the minimum distance responds. That device is truly woken up, can receive the user's voice interaction request, and can answer it. In some examples, each smart device periodically (e.g., every 10 seconds) sends a heartbeat packet to the LAN by UDP multicast to indicate that it is online. Other online devices in the LAN receive the heartbeat packet and thereby learn that its sender is online. Each smart device sends its distance value only to online devices; that is, if no heartbeat packet has been received from a device for a long time (e.g., 1 or 2 minutes), the distance value is not sent to that device.
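The heartbeat-based liveness tracking described above amounts to remembering when each peer was last heard from. A minimal sketch; the class name, the 60-second timeout, and the device identifiers are assumptions for the example (the text itself only suggests 10-second heartbeats and a 1-2 minute timeout):

```python
import time

HEARTBEAT_INTERVAL = 10.0  # seconds between heartbeats (from the text)
OFFLINE_TIMEOUT = 60.0     # drop a peer after this much silence

class PeerTable:
    """Track which peers are online from the timestamps of their most
    recent heartbeat packets."""

    def __init__(self):
        self._last_seen = {}

    def heartbeat(self, device_id, now=None):
        """Record that a heartbeat packet arrived from device_id."""
        self._last_seen[device_id] = time.monotonic() if now is None else now

    def online_peers(self, now=None):
        """Peers heard from within OFFLINE_TIMEOUT seconds."""
        now = time.monotonic() if now is None else now
        return [d for d, t in self._last_seen.items()
                if now - t <= OFFLINE_TIMEOUT]

table = PeerTable()
table.heartbeat("speaker-b", now=0.0)
table.heartbeat("speaker-c", now=0.0)
table.heartbeat("speaker-b", now=50.0)  # speaker-c misses its heartbeats
online = table.online_peers(now=70.0)
```

Distance values would then be unicast only to the devices returned by `online_peers`.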

According to some embodiments, the method further comprises performing the following operations in a loop: causing a plurality of different microphones of the first device to receive audio signals emitted from a second device, wherein the second device is a device in the local area network other than the first device; acquiring phase differences of audio signals received by microphones in at least two different groups, wherein each group comprises two microphones; determining a distance difference between the two microphones in each group and the second device according to the phase difference and the sampling frequency at which the first device acquires the audio signals; and drawing hyperbolas according to the distance difference corresponding to each group so as to locate and store the position of the second device based on the intersection of the hyperbolas, until the first device has stored the positions of all second devices.

In some examples, all smart devices in the LAN are made to emit audio signals in turn, for example by playing a song one after another; the other smart devices, in the wake-up state, receive the audio signals and perform the calculation to locate the position of the emitting device, until every device has stored the position information of all other devices in the LAN. Thereafter, the multi-device voice wake-up implementation method of the present disclosure can be carried out using the stored position information.

According to some embodiments, as shown in FIG. 2, controlling the first device to respond according to the position of the audio signal source and the position of the first device comprises: sequentially determining, for each device pair, a first included angle between the device pair and the audio signal source with the first device as the vertex, according to the position of the first device, the position of the second device, and the position of the audio signal source, wherein a device pair is formed by the first device and each currently online second device (step 210); sequentially determining, for each device pair, a second included angle between the device pair and the audio signal source with the second device as the vertex, according to the position of the first device, the position of the second device, and the position of the audio signal source (step 220); and controlling the first device to respond according to the first included angle and the second included angle.

According to some embodiments, controlling the first device to respond according to the first included angle and the second included angle comprises: if, in every device pair, the first included angle is greater than the second included angle and the angle difference is greater than a threshold, determining that the first device is the device closest to the audio signal source and causing the first device to respond (step 240, yes); and if, in every device pair, the first included angle is greater than the second included angle but there are one or more device pairs in which the angle difference is not greater than the threshold (step 250, yes), performing the following operations: obtaining the volume value of the second device in each of the one or more device pairs, wherein the volume value is the average amplitude of the audio signals containing the corresponding wake-up word received by all microphones of the device (step 260); comparing the volume value received by the first device with the volume value received by the second device in each of the one or more device pairs; and controlling the first device to respond if the volume value received by the first device is greater than each of those volume values (step 270, yes).

When only volume values are compared, individual differences between devices, such as differences in aging degree and hardware consistency, may cause large errors. Compared with a multi-device voice wake-up method implemented solely by comparing the volume values (i.e., wake-up energy values) received by a plurality of devices, the method of the present disclosure is more accurate, so that the appropriate device is woken up.

In addition, in the embodiments of the present disclosure, an included-angle difference threshold is set, and volume values are compared only when the difference is not greater than the threshold, which further improves accuracy.
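Putting the two criteria together, the response decision for the first device might look like the following sketch, where the angle pairs, volume values, and threshold are all assumed inputs rather than values taken from the disclosure.

```python
def should_respond(angle_pairs, my_volume, peer_volumes, threshold):
    """Decide whether the first device answers the wake-up word.

    angle_pairs   -- (first_angle, second_angle) per online second device
    my_volume     -- average wake-word amplitude over the first device's mics
    peer_volumes  -- the same value reported by each second device (same order)
    threshold     -- angle-difference threshold below which volume decides
    """
    if any(a1 <= a2 for a1, a2 in angle_pairs):
        return False  # some second device is at least as close to the speaker
    close_calls = [i for i, (a1, a2) in enumerate(angle_pairs)
                   if a1 - a2 <= threshold]
    if not close_calls:
        return True  # clearly the nearest device: respond
    # Ambiguous pairs: fall back to comparing wake-word volume values.
    return all(my_volume > peer_volumes[i] for i in close_calls)
```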

According to some embodiments, before determining the included angles for each device pair, the second devices currently online are also determined (step 210). In some examples, each second device periodically (e.g., every 10 seconds) sends a heartbeat packet to the local area network by UDP multicast to indicate that it is online. The first device receives the heartbeat packet and thereby obtains the online information of the second device that sent it. If the first device has not received a heartbeat packet from a certain second device for a long time (for example, 1 minute, 2 minutes, etc.), that second device is considered offline; in that case the first device does not form a device pair with it to calculate the first and second included angles, i.e., offline second devices do not participate in subsequent calculations. This saves computational resources and improves operating efficiency.

According to some embodiments, the method further comprises: sending the IP address and port number of the first device to the second device by UDP multicast, so as to communicate with the second device based on the IP address and port number; and receiving heartbeat packets periodically sent by the second device by UDP multicast, so as to determine that the second device is currently online. In some examples, the volume value may be transmitted by UDP unicast based on the IP address and port number.
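A minimal sketch of this heartbeat mechanism is shown below. The multicast group, port, JSON payload format, and the `PeerTable` helper are all assumptions for illustration; the 60-second timeout mirrors the "1 minute" example mentioned above.

```python
import json
import socket
import time

# Assumed multicast group and port; not specified in the disclosure.
MCAST_GROUP = ("239.255.0.1", 50000)

def send_heartbeat(device_id, ip, port):
    """Multicast this device's identity and unicast endpoint to the LAN."""
    payload = json.dumps({"id": device_id, "ip": ip, "port": port}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        sock.sendto(payload, MCAST_GROUP)

class PeerTable:
    """Tracks which peers are online based on heartbeat arrival times."""
    TIMEOUT = 60.0  # seconds without a heartbeat before a peer is dropped

    def __init__(self):
        self.last_seen = {}

    def heard_from(self, device_id, now=None):
        self.last_seen[device_id] = time.monotonic() if now is None else now

    def online(self, now=None):
        now = time.monotonic() if now is None else now
        return [d for d, t in self.last_seen.items()
                if now - t < self.TIMEOUT]
```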

In some examples, as shown in fig. 3, taking device 2 as an example (i.e., device 2 is the first device), when device 1 and device 3 are both online, device 2 forms one device pair with device 1 and another with device 3. In the device pair consisting of device 2 and device 1, the first included angle is angle A1 and the second included angle is angle B1; in the device pair consisting of device 2 and device 3, the first included angle is angle A2 and the second included angle is angle B2.

According to another aspect of the present disclosure, as shown in fig. 4, there is also provided a multi-device voice wake-up implementing device, including:

an audio receiving unit 410 configured to acquire audio signals, received by a plurality of different microphones in the first device, that are emitted from an audio signal source and contain the corresponding wake-up word; a calculation unit 420 configured to determine the distance differences between the different microphones and the audio signal source based on the phase differences of the audio signals received by the different microphones; a first positioning unit 430 configured to locate the position of the audio signal source based on the distance differences; a second positioning unit 440 configured to locate the first device according to the positions of the plurality of different microphones in the first device; and a determining unit 450 configured to control the first device to respond according to the position of the audio signal source and the position of the first device.

According to some embodiments, the calculation unit is configured to perform the following operations: acquiring phase differences of the audio signals received by microphones in at least two different groups, wherein each group comprises two microphones; and determining the distance difference between the two microphones in each group and the audio signal source according to the phase differences and the sampling frequency at which the first device collects the audio signals.

According to some embodiments, the first positioning unit is configured to perform the following operations: drawing a hyperbola according to the distance difference corresponding to each group; and locating the position of the audio signal source based on the intersection point of the hyperbolas.

According to some embodiments, the second positioning unit is configured to take the geometric center of a polygon formed by the plurality of different microphones as the location point of the first device. Alternatively, in some examples, it may be configured to take the centroid of the polygon formed by the plurality of different microphones as the location point of the first device. Alternatively, in some examples, it may be configured to locate the first device at the average of the coordinates of the plurality of different microphones. Alternatively, in some examples, it may be configured to take the position of one of the microphones as the position of the first device. It will be appreciated that other configurations for locating the position of the first device are also possible.
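For the coordinate-averaging variant, the second positioning unit's computation reduces to a one-liner; the sketch below assumes 2-D microphone coordinates (the function name is illustrative).

```python
def device_position(mic_coords):
    """Location point of a device as the average of its microphone coordinates."""
    n = len(mic_coords)
    return (sum(x for x, _ in mic_coords) / n,
            sum(y for _, y in mic_coords) / n)
```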

According to some embodiments, the determining unit is configured to perform the following operations: determining the distance between the first device and the audio signal source based on the position of the audio signal source and the position of the first device; receiving the distances between the second devices and the audio signal source, wherein the second devices are the devices other than the first device in the local area network; comparing the distance between the first device and the audio signal source with the received distances between the second devices and the audio signal source; and, in response to the device corresponding to the minimum distance being the first device, controlling the first device to respond.
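This distance-comparison variant of the determining unit reduces to a single predicate. The sketch below assumes 2-D positions and that each second device's distance has already been received; the function name is illustrative.

```python
import math

def is_nearest(first_pos, source_pos, peer_distances):
    """True if the first device is strictly closer to the audio signal
    source than every second device whose reported distance appears in
    `peer_distances`."""
    my_distance = math.hypot(first_pos[0] - source_pos[0],
                             first_pos[1] - source_pos[1])
    return all(my_distance < d for d in peer_distances)
```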

According to some embodiments, the device further comprises a third positioning unit configured to cyclically perform the following operations: enabling the plurality of different microphones of the first device to receive audio signals emitted from a second device, wherein the second device is a device other than the first device in the local area network; acquiring phase differences of the audio signals received by microphones in at least two different groups, wherein each group comprises two microphones; determining the distance difference between the two microphones in each group and the second device according to the phase differences and the sampling frequency at which the first device collects the audio signals; and drawing hyperbolas according to the distance difference corresponding to each group so as to locate and store the position of the second device based on the intersection point of the hyperbolas, until the first device has stored the positions of all second devices.

According to some embodiments, the determining unit is configured to perform the following operations: according to the position of the first device, the position of the second device and the position of the audio signal source, sequentially determining, for each device pair, a first included angle, with the first device as the vertex, between the line to the second device and the line to the audio signal source, wherein a device pair is formed by the first device and each currently online second device; according to the position of the first device, the position of the second device and the position of the audio signal source, sequentially determining, for each device pair, a second included angle, with the second device as the vertex, between the line to the first device and the line to the audio signal source; and controlling the first device to respond according to the first included angle and the second included angle.

According to some embodiments, the determining unit is further configured to perform the following operations: in response to the first included angle in every device pair being greater than the second included angle and the angle difference being greater than a threshold, determining that the first device is the device closest to the audio signal source and causing the first device to respond; and in response to the first included angle in every device pair being greater than the second included angle but there being one or more device pairs in which the angle difference is not greater than the threshold, performing the following operations: obtaining the volume value of the second device in each of the one or more device pairs, wherein the volume value is the average amplitude of the audio signals containing the corresponding wake-up word received by all microphones of the device; comparing the volume value received by the first device with the volume value received by the second device in each of the one or more device pairs; and controlling the first device to respond in response to the volume value received by the first device being greater than the volume value received by the second device in each of the one or more device pairs.

According to some embodiments, the device further comprises a communication unit configured to: send the IP address and port number of the first device to the second device by UDP multicast, so as to communicate with the second device based on the IP address and port number; and receive heartbeat packets periodically sent by the second device by UDP multicast, so as to determine that the second device is currently online.

According to some embodiments, the first device has at least 3 microphones.

Here, the operations of the above units of the multi-device voice wakeup implementing device 400 are respectively similar to the operations of the steps described above, and are not described again here.

According to another aspect of the present disclosure, there is also provided an electronic device, which may include: a processor; and a memory storing a program comprising instructions that, when executed by the processor, cause the processor to perform a multi-device voice wake-up implementation method according to the above.

According to another aspect of the present disclosure, there is also provided a computer readable storage medium storing a program, the program comprising instructions which, when executed by a processor of an electronic device, cause the electronic device to perform a multi-device voice wake-up implementation method according to the above.

Referring to fig. 5, a computing device 2000, which is an example of a hardware device (electronic device) that may be applied to aspects of the present disclosure, will now be described. The computing device 2000 may be any machine configured to perform processing and/or computing, and may be, but is not limited to, a workstation, a server, a desktop computer, a laptop computer, a tablet computer, a personal digital assistant, a robot, a smart phone, an on-board computer, or any combination thereof. The multi-device voice wake-up implementation method described above may be implemented in whole or at least in part by the computing device 2000 or a similar device or system.

Computing device 2000 may include elements to connect with bus 2002 (possibly via one or more interfaces) or to communicate with bus 2002. For example, computing device 2000 may include a bus 2002, one or more processors 2004, one or more input devices 2006, and one or more output devices 2008. The one or more processors 2004 may be any type of processor and may include, but are not limited to, one or more general purpose processors and/or one or more special purpose processors (e.g., special processing chips). Input device 2006 may be any type of device capable of inputting information to computing device 2000 and may include, but is not limited to, a mouse, a keyboard, a touch screen, a microphone, and/or a remote control. Output device 2008 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The computing device 2000 may also include or be connected with a non-transitory storage device 2010, which may be any storage device that is non-transitory and that may enable data storage, and may include, but is not limited to, a magnetic disk drive, an optical storage device, solid state memory, a floppy disk, a flexible disk, a hard disk, a magnetic tape, or any other magnetic medium, an optical disk or any other optical medium, a ROM (read only memory), a RAM (random access memory), a cache memory, and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions, and/or code. The non-transitory storage device 2010 may be removable from the interface. The non-transitory storage device 2010 may have data/programs (including instructions)/code for implementing the above-described methods and steps. Computing device 2000 may also include a communication device 2012. 
The communication device 2012 may be any type of device or system that enables communication with external devices and/or with a network, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication devices, and/or chipsets such as Bluetooth(TM) devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.

The computing device 2000 may also include a working memory 2014, which may be any type of working memory that can store programs (including instructions) and/or data useful for the operation of the processor 2004, and may include, but is not limited to, random access memory and/or read only memory devices.

Software elements (programs) may be located in the working memory 2014, including but not limited to an operating system 2016, one or more application programs 2018, drivers, and/or other data and code. Instructions for performing the above-described methods and steps may be included in the one or more application programs 2018, and the above-described multi-device voice wake-up implementation method may be implemented by the processor 2004 reading and executing the instructions of the one or more application programs 2018. More specifically, steps 110 to 150 of the multi-device voice wake-up implementation method may be implemented, for example, by the processor 2004 executing an application program 2018 having the instructions of steps 110 to 150. Likewise, other steps of the multi-device voice wake-up implementation method may be implemented, for example, by the processor 2004 executing an application program 2018 having instructions to perform the respective steps. Executable code or source code of the instructions of the software elements (programs) may be stored in a non-transitory computer-readable storage medium (such as the storage device 2010 described above) and, when executed, may be stored (possibly after compilation and/or installation) in the working memory 2014. Executable code or source code of the instructions of the software elements (programs) may also be downloaded from a remote location.

It will also be appreciated that various modifications may be made according to specific requirements. For example, customized hardware might be used, and/or particular elements might be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. For example, some or all of the disclosed methods and apparatus may be implemented by programming hardware (e.g., programmable logic circuitry including Field Programmable Gate Arrays (FPGAs) and/or Programmable Logic Arrays (PLAs)) in an assembly language or a hardware description language such as VERILOG, VHDL, or C++, using logic and algorithms according to the present disclosure.

It should also be understood that the foregoing method may be implemented in a server-client mode. For example, a client may receive data input by a user and send the data to a server. The client may also receive data input by the user, perform part of the processing in the foregoing method, and transmit the data obtained by the processing to the server. The server may receive data from the client and perform the aforementioned method or another part of the aforementioned method and return the results of the execution to the client. The client may receive the results of the execution of the method from the server and may present them to the user, for example, through an output device.

It should also be understood that the components of computing device 2000 may be distributed across a network. For example, some processes may be performed using one processor while other processes may be performed by another processor that is remote from the one processor. Other components of the computing system 2000 may also be similarly distributed. As such, the computing device 2000 may be interpreted as a distributed computing system that performs processing at multiple locations.

Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems and apparatus are merely exemplary embodiments or examples and that the scope of the present invention is not limited by these embodiments or examples, but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or may be replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. It is important that as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.
