Method, device and equipment for transmitting voice signal and readable storage medium

Document No.: 1522527 | Publication date: 2020-02-11

Abstract: This technology, 语音信号的发送方法、装置、设备及可读存储介质 (Method, device and equipment for transmitting a voice signal, and readable storage medium), was created by 曹木勇 and 周佳庆 on 2019-11-06. The application discloses a method, apparatus, and device for transmitting a voice signal, and a readable storage medium, relating to the field of multimedia processing. The method comprises: performing human-voice probability detection on the speech frames in a target voice signal to obtain human voice frames; obtaining first human voice frames, whose human-voice probability is greater than or equal to a first required probability, and second human voice frames, whose human-voice probability is less than the first required probability; normalizing the second human voice frames into silence frames; performing variable-length coding on the first human voice frames and the silence frames; and transmitting the resulting audio coded stream. Before the target voice signal is encoded, the second human voice frames, which have a low human-voice probability, are normalized into silence frames, and the signal is encoded with a variable-length coding scheme. Because the encoded length of a silence frame is smaller than that of a second human voice frame, the overall encoded length is reduced, lowering the bandwidth the target voice signal occupies during transmission.

1. A method for transmitting a voice signal, the method comprising:

performing human-voice probability detection on speech frames in a target voice signal to obtain human voice frames;

obtaining a first human voice frame and a second human voice frame, wherein the human-voice probability of the first human voice frame is greater than or equal to a first required probability, and the human-voice probability of the second human voice frame is less than the first required probability;

normalizing the second human voice frame into a silence frame;

performing variable-length coding on the first human voice frame and the silence frame to obtain an audio coded stream, wherein a first encoded length of the silence frame under the variable-length coding scheme is smaller than a second encoded length of the second human voice frame;

and transmitting the audio coded stream.

2. The method of claim 1, wherein normalizing the second human voice frame into a silence frame comprises:

normalizing the second human voice frame into the silence frame by modifying digital signal values.

3. The method of claim 2, wherein normalizing the second human voice frame into the silence frame by modifying digital signal values comprises:

modifying a first digital signal value of the second human voice frame into a second digital signal value corresponding to the silence frame.

4. The method according to any one of claims 1 to 3, wherein performing human-voice probability detection on the speech frames in the target voice signal comprises:

determining a preset frequency band covered by human voice in human-voice detection;

and determining the human-voice probability of a speech frame according to the proportion of the speech frame's signal that falls within the preset frequency band.

5. The method of claim 4, wherein obtaining the human voice frames comprises:

determining a speech frame as an ambient sound frame when its human-voice probability is less than a second required probability;

discarding the ambient sound frame when its trailing duration reaches a target duration;

and determining the speech frames that are not discarded as the human voice frames.

6. The method according to any one of claims 1 to 3, wherein before performing human-voice probability detection on the speech frames in the target voice signal, the method further comprises:

preprocessing the target voice signal, wherein the preprocessing comprises at least one of resampling, noise reduction, howling suppression, and echo cancellation.

7. The method according to any one of claims 1 to 3, wherein transmitting the audio coded stream comprises:

sending the audio coded stream to a receiving terminal, wherein a real-time voice connection is established between the sending terminal and the receiving terminal;

or,

sending the audio coded stream to a server, wherein the server comprises a real-time translation model for translating, in real time, the audio content corresponding to the audio coded stream;

or,

sending the audio coded stream to a server, wherein the server comprises a real-time voice-changing model for performing voice-changing processing on the audio content corresponding to the audio coded stream.

8. An apparatus for transmitting a voice signal, applied to a sending terminal, the apparatus comprising:

a determining module, configured to perform human-voice probability detection on speech frames in a target voice signal to obtain human voice frames;

the determining module being further configured to obtain a first human voice frame and a second human voice frame, wherein the human-voice probability of the first human voice frame is greater than or equal to a first required probability, and the human-voice probability of the second human voice frame is less than the first required probability;

a processing module, configured to normalize the second human voice frame into a silence frame;

an encoding module, configured to perform variable-length coding on the first human voice frame and the silence frame to obtain an audio coded stream, wherein a first encoded length of the silence frame under the variable-length coding scheme is smaller than a second encoded length of the second human voice frame;

and a sending module, configured to transmit the audio coded stream.

9. The apparatus of claim 8, wherein the processing module is further configured to normalize the second human voice frame into the silence frame by modifying digital signal values.

10. The apparatus of claim 9, wherein the processing module is further configured to modify a first digital signal value of the second human voice frame into a second digital signal value corresponding to the silence frame.

11. The apparatus according to any one of claims 8 to 10, wherein the determining module is further configured to determine a preset frequency band covered by human voice in human-voice detection, and to determine the human-voice probability of a speech frame according to the proportion of the speech frame's signal that falls within the preset frequency band.

12. The apparatus of claim 11, wherein the determining module is further configured to determine a speech frame as an ambient sound frame when its human-voice probability is less than a second required probability, to discard the ambient sound frame when its trailing duration reaches a target duration, and to determine the speech frames that are not discarded as the human voice frames.

13. The apparatus according to any one of claims 8 to 10, wherein the processing module is further configured to preprocess the target voice signal,

wherein the preprocessing comprises at least one of resampling, noise reduction, howling suppression, and echo cancellation.

14. A computer device comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for transmitting a voice signal according to any one of claims 1 to 7.

15. A computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for transmitting a voice signal according to any one of claims 1 to 7.

Technical Field

The present application relates to the field of multimedia processing, and in particular, to a method, apparatus, and device for transmitting a voice signal, and a readable storage medium.

Background

The real-time voice function enables real-time communication between two terminals. With this function, a sending terminal continuously collects audio signals through a microphone, encodes the collected audio signals, and sends the coded stream to a receiving terminal; the receiving terminal receives, decodes, and plays the coded stream.

Disclosure of Invention

The embodiments of the present application provide a method, apparatus, and device for sending a voice signal, and a readable storage medium, which can solve the problem that the transmission bandwidth cost of a voice signal remains high when the non-ambient sound contains many human voice frames with a low human-voice probability. The technical solution is as follows:

In one aspect, a method for transmitting a voice signal is provided, the method including:

performing human-voice probability detection on speech frames in a target voice signal to obtain human voice frames;

obtaining a first human voice frame and a second human voice frame, wherein the human-voice probability of the first human voice frame is greater than or equal to a first required probability, and the human-voice probability of the second human voice frame is less than the first required probability;

normalizing the second human voice frame into a silence frame;

performing variable-length coding on the first human voice frame and the silence frame to obtain an audio coded stream, wherein a first encoded length of the silence frame under the variable-length coding scheme is smaller than a second encoded length of the second human voice frame;

and transmitting the audio coded stream.

In another aspect, an apparatus for transmitting a voice signal is provided, the apparatus including:

a determining module, configured to perform human-voice probability detection on speech frames in a target voice signal to obtain human voice frames;

the determining module being further configured to obtain a first human voice frame and a second human voice frame, wherein the human-voice probability of the first human voice frame is greater than or equal to a first required probability, and the human-voice probability of the second human voice frame is less than the first required probability;

a processing module, configured to normalize the second human voice frame into a silence frame;

an encoding module, configured to perform variable-length coding on the first human voice frame and the silence frame to obtain an audio coded stream, wherein a first encoded length of the silence frame under the variable-length coding scheme is smaller than a second encoded length of the second human voice frame;

and a sending module, configured to transmit the audio coded stream.

In another aspect, a computer device is provided, comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for transmitting a voice signal according to any of the embodiments of the present application.

In another aspect, a computer-readable storage medium is provided, storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for transmitting a voice signal according to any of the embodiments of the present application.

In another aspect, a computer program product is provided, which, when run on a computer, causes the computer to execute the method for transmitting a voice signal described in any of the embodiments of the present application.

The beneficial effects of the technical solutions provided in the embodiments of the present application include at least the following:

Before the target voice signal is encoded, the second human voice frames, whose human-voice probability is low, are normalized into silence frames, and the signal is encoded with a variable-length coding scheme. Because the encoded length of a silence frame is smaller than that of a second human voice frame, the encoded length of the target voice signal is reduced without affecting speech intelligibility. This lowers the bandwidth the target voice signal occupies during transmission, avoids excessive delay in real-time voice scenarios, and improves the transmission efficiency of the target voice signal.

Drawings

To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1 is a schematic diagram of an interface for enabling the real-time voice conversation function in a game application scenario according to an exemplary embodiment of the present application;

FIG. 2 is a schematic diagram of an interface for enabling the real-time voice conversation function in a game application scenario according to another exemplary embodiment of the present application;

FIG. 3 is a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application;

FIG. 4 is a flowchart of a method for transmitting a voice signal according to an exemplary embodiment of the present application;

FIG. 5 is a schematic diagram of the process by which a receiving terminal receives an audio coded stream, based on the embodiment shown in FIG. 4;

FIG. 6 is a flowchart of a method for transmitting a voice signal according to another exemplary embodiment of the present application;

FIG. 7 is a flowchart of a method for transmitting a voice signal according to another exemplary embodiment of the present application;

FIG. 8 is an overall flowchart of a sending terminal transmitting a target voice signal according to an exemplary embodiment of the present application;

FIG. 9 is a block diagram of an apparatus for transmitting a voice signal according to an exemplary embodiment of the present application;

FIG. 10 is a block diagram of a terminal according to an exemplary embodiment of the present application.

Detailed Description

To make the objects, technical solutions, and advantages of the present application clearer, embodiments of the present application are described in further detail below with reference to the accompanying drawings.

First, the terms used in the embodiments of the present application are briefly described:

speech frame: refers to a speech original digital signal of a single time slice, and optionally, the speech frame can be implemented as any one of an ambient sound frame, a mute frame, and a human voice frame. The environment sound frame is an audio frame with smaller human voice occupation in the original voice digital signal, that is, the main sound in the environment sound frame is not the sound made by human, a required probability is usually set, and after the human voice detection is performed on the voice frame, when the human voice probability of the voice frame is smaller than the required probability, the voice frame is determined to be the environment sound frame; the mute frame refers to an original speech digital signal frame without energy; the human voice frame is an audio frame with a large human voice in the original digital signal of the voice, that is, the main voice in the human voice frame is the voice sent by the human, a required probability is usually set, and after the human voice detection is performed on the voice frame, when the human voice probability of the voice frame reaches the required probability, the voice frame is determined to be the human voice frame.

Human voice detection: the process of detecting a speech frame to determine whether it is an ambient sound frame or a human voice frame. Optionally, after a speech frame is detected, its human-voice probability is determined; the frame is determined to be a human voice frame when the probability reaches the required probability, and an ambient sound frame when the probability is less than the required probability. Optionally, human-voice detection first determines a preset voice frequency band, for example 80 Hz to 1000 Hz, and the human-voice probability of a speech frame is then determined according to the proportion of the frame's signal that falls within this band.
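The band-ratio rule above can be sketched as follows. The 16 kHz sample rate, 20 ms frame length, and the spectral-energy formulation are illustrative assumptions; the embodiments only state that the probability follows from the frame's proportion in the preset 80 Hz to 1000 Hz band.

```python
import numpy as np

def voice_probability(frame, sample_rate=16000, band=(80.0, 1000.0)):
    """Fraction of the frame's spectral energy inside the preset voice band.

    The sample rate and the energy-ratio formulation are illustrative
    assumptions, not taken from the embodiments.
    """
    spectrum = np.abs(np.fft.rfft(frame)) ** 2               # power spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0.0:                                         # a silence frame has no energy
        return 0.0
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(spectrum[in_band].sum() / total)

t = np.arange(320) / 16000.0                                 # one 20 ms frame
print(voice_probability(np.sin(2 * np.pi * 300 * t)))        # in-band tone: near 1.0
print(voice_probability(np.sin(2 * np.pi * 4000 * t)))       # out-of-band tone: near 0.0
```

A real detector would smooth this ratio over consecutive frames rather than decide per frame, which is also what the trailing-duration logic of claim 5 suggests.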

Variable-length coding: a coding method in which the length of a speech frame's coding result is not fixed. Under variable-length coding, speech frames whose in-frame signals contain different numbers of bytes produce coding results of different lengths. Typically, a speech frame containing a one-byte signal encodes to a result of about 15 bytes, while a speech frame containing a two-byte signal encodes to a result of about 20 bytes.

Correspondingly, fixed-length coding is a coding method in which the length of a speech frame's coding result is fixed. That is, under fixed-length coding, silence frames, ambient sound frames, and human voice frames all encode to results of the same length, as do speech frames whose signals contain different numbers of bytes.
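The length contrast between the two coding modes can be illustrated with a toy experiment. Here `zlib` merely stands in for an unnamed variable-length codec, and the 640-byte frame size is an assumed value (20 ms of 16-bit PCM at 16 kHz); the description names neither a specific codec nor a frame size.

```python
import hashlib
import zlib

FRAME_BYTES = 640  # assumed: 20 ms of 16-bit PCM at 16 kHz

# A silence frame carries no energy (all zeros); a human voice frame is
# modelled here as incompressible pseudo-random bytes derived from SHA-256.
silence_frame = bytes(FRAME_BYTES)
voice_frame = b"".join(
    hashlib.sha256(bytes([i])).digest() for i in range(FRAME_BYTES // 32)
)

# Variable-length coding: the encoded length depends on the frame's content,
# so the silence frame encodes to far fewer bytes than the voice frame.
var_silence = len(zlib.compress(silence_frame))
var_voice = len(zlib.compress(voice_frame))
print(var_silence, var_voice)

# Fixed-length coding: every frame encodes to the same length regardless of
# content, so normalizing low-probability frames to silence would save nothing.
fixed_silence = fixed_voice = FRAME_BYTES
assert var_silence < var_voice
assert fixed_silence == fixed_voice
```

This is exactly the property the method relies on: the bandwidth saving exists only because the chosen codec is variable-length.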

In real-time voice conversation scenarios, voice bandwidth usage needs to be reduced to avoid the poor real-time voice quality caused by excessive delay. Real-time voice conversation scenarios include at least one of the following:

First, in a real-time voice conversation scenario, a real-time voice connection is established between at least two accounts. Illustratively, the scenario may be one provided in a game application for real-time voice among teammate accounts in a match; or for real-time voice among all participating accounts in a match; or for real-time voice between any at least two accounts in a match; or it may be a scenario of real-time voice between at least two accounts of an instant messaging application. Taking real-time voice between a first account and a second account as an example: after the first account performs voice input on the sending terminal through a microphone, the sending terminal acquires a target voice signal, performs human-voice detection on it, normalizes the speech frames whose human-voice probability is less than the required probability into silence frames, encodes the target voice signal with a variable-length coding scheme, and sends the resulting audio coded stream to the second account.

FIG. 1 is a schematic diagram of an interface for enabling the real-time voice conversation function in a game application scenario according to an exemplary embodiment of the present application. As shown in FIG. 1, a voice enable control 110 and a microphone enable control 120 are displayed in a game interface 100. When the microphone control 120 is enabled, the voice uttered by the user is received by teammate players in the match; optionally, a player who wishes to receive the voice enables the voice enable control 110.

FIG. 2 is a schematic diagram of an interface for enabling the real-time voice conversation function in a game application scenario according to another exemplary embodiment of the present application. As shown in FIG. 2, a teammate display area 210 containing a teammate list is displayed in a game interface 200. The game interface 200 further includes a global voice enable control 220 and an in-team voice control 230. When the global voice control 220 is enabled, the voice uttered by the user is received by all players in the match; optionally, a player who wishes to receive the voice enables the global voice control 220. When the in-team voice control 230 is enabled, the voice uttered by the user is received by the players in the teammate list; optionally, a teammate who wishes to receive the voice enables either the global voice control 220 or the in-team voice control 230.

Second, in a real-time translation scenario, a user inputs voice through a terminal microphone. After acquiring the voice signal, the terminal performs human-voice detection on it, normalizes the speech frames whose human-voice probability is less than the required probability into silence frames, encodes the voice signal with a variable-length coding scheme to obtain an audio coded stream, and sends the stream to a server. The server contains a translation model, through which the audio content corresponding to the audio coded stream is translated in real time.

Third, in a real-time voice-changing scenario, a user inputs voice through a terminal microphone. After acquiring the voice signal, the terminal performs human-voice detection on it, normalizes the speech frames whose human-voice probability is less than the required probability into silence frames, encodes the voice signal with a variable-length coding scheme to obtain an audio coded stream, and sends the stream to a server. The server contains a voice-changing model, through which the audio content corresponding to the audio coded stream is voice-changed in real time.

It should be noted that the above takes the application of the method for sending a voice signal provided in the embodiments of the present application to real-time voice conversation, real-time translation, and real-time voice-changing scenarios as examples. In practice, the method may also be applied to other scenarios in which human voice frames with a low human-voice probability are normalized into silence frames and then encoded with a variable-length coding scheme; this is not limited in the embodiments of the present application.

Illustratively, taking the application of the method for sending a voice signal provided in the embodiments of the present application to a real-time voice conversation scenario as an example, the implementation environment of the embodiments is described below. As shown in FIG. 3, the implementation environment includes: a sending terminal 310, a server 320, and a receiving terminal 330.

A real-time voice connection is established between the sending terminal 310 and the receiving terminal 330 through the server 320. The sending terminal 310 collects a voice signal and encodes it to generate an audio coded stream; after the sending terminal 310 sends the stream to the server 320, the server 320 forwards it to the receiving terminal 330 according to the established real-time voice connection. Optionally, the sending terminal 310 first preprocesses the voice signal, performs human-voice detection on its speech frames, filters out the ambient sound frames according to the detection result, and normalizes the human voice frames whose human-voice probability is less than the required probability into silence frames; it then encodes the filtered and normalized voice signal with a variable-length coding scheme to obtain the audio coded stream, which is sent to the server 320.

The server 320 is configured to set preset parameters, such as parameters applied in the encoding process (for example, the trailing parameter), and to send the preset parameters to the sending terminal 310.

After receiving the audio coded stream sent by the server 320, the receiving terminal 330 decodes it and plays the decoded audio content. Optionally, the receiving terminal 330 may also act as a sending terminal 310 that transmits an audio coded stream, and likewise the sending terminal 310 may also act as a receiving terminal 330 that receives one.

Optionally, the receiving terminal 330 that receives the audio coded stream may be implemented as one terminal device or as multiple terminal devices; this is not limited in the embodiments of the present application.

The sending terminal 310 and the receiving terminal 330 may be implemented as mobile terminals such as mobile phones, tablet computers, portable laptop computers, and wearable devices, and may also be implemented as terminals such as desktop computers.

The server 320 may be implemented as a single server or as a server cluster formed by multiple servers, and may be a physical server or a cloud server; this is not limited in the embodiments of the present application.

With reference to the above application scenarios and implementation environment, the method for transmitting a voice signal provided in the embodiments of the present application is described below. FIG. 4 is a flowchart of a method for transmitting a voice signal according to an exemplary embodiment of the present application, described here as applied to the sending terminal 310 shown in FIG. 3. As shown in FIG. 4, the method includes:

Step 401: perform human-voice probability detection on the speech frames in the target voice signal to obtain human voice frames.

Optionally, the target voice signal is a voice signal acquired by the sending terminal through a microphone. The microphone may be built into the sending terminal or externally connected to it, for example an external microphone device or an external earphone device with an attached microphone. Optionally, the target voice signal may also be a voice signal that the sending terminal acquired by downloading.

Optionally, the target voice signal is a voice signal that the sending terminal is to send to a receiving terminal, or a voice signal that the sending terminal is to send to a server.

When the target voice signal is to be sent to a receiving terminal, a real-time voice connection is established between the sending terminal and the receiving terminal, either directly or through a server. Optionally, a real-time voice call is in progress between the two terminals; the call may be one-way, i.e., the sending terminal sends voice signals to the receiving terminal only, or two-way, i.e., the sending terminal both sends voice signals to the receiving terminal and receives voice signals sent by it.

When the target voice signal is to be sent to a server, the sending terminal is using a real-time voice processing function provided by the server, such as real-time voice translation, real-time voice changing, or real-time voice optimization. After the sending terminal sends the target voice signal to the server, the server performs the corresponding real-time processing on it and feeds the processing result back to the sending terminal.

Optionally, when human-voice detection is performed on a speech frame in the target voice signal, a preset frequency band covered by human voice is first determined, and the human-voice probability of the speech frame is then determined according to the proportion of the frame's signal that falls within the preset band.

Optionally, the preset frequency band covered by human voice is determined from the frequency ranges typical of human speech. For example, the male bass range is typically 82 Hz to 392 Hz, with a reference range of 64 Hz to 523 Hz; the male midrange is typically 123 Hz to 493 Hz, and the male treble 164 Hz to 698 Hz; the female bass range is typically 123 Hz to 493 Hz, and the female treble 220 Hz to 1100 Hz. Assuming the preset band covered by human voice is 80 Hz to 1000 Hz, the human-voice probability of a speech frame is determined according to the proportion of the frame's signal belonging to this band relative to the whole frame.

Optionally, the human-voice probability is calculated with a Gaussian mixture model algorithm, which models the speech frame with Gaussian probability density functions (normal distribution curves), decomposing it into several components based on those functions, and thereby performs human-voice probability detection on the frame. Optionally, after the ambient sound frames in the target voice signal are filtered out through human-voice probability detection, the human voice frames in the target voice signal are retained.
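As a minimal sketch of the Gaussian-mixture idea: a one-dimensional, two-component mixture over a frame feature (say, its in-band energy ratio) yields a posterior "voice" probability. The component means, variances, and prior below are illustrative values; in practice they are fitted to labelled audio, and real detectors use richer multi-band mixtures.

```python
import math

def gaussian_pdf(x, mean, std):
    """Gaussian probability density function (normal distribution curve)."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def gmm_voice_probability(feature, voice=(0.8, 0.15), ambient=(0.2, 0.15), prior=0.5):
    """Posterior probability that the frame feature came from the 'voice'
    component rather than the 'ambient' one. All parameters are assumed
    illustrative values, not taken from the embodiments."""
    p_voice = prior * gaussian_pdf(feature, *voice)
    p_ambient = (1.0 - prior) * gaussian_pdf(feature, *ambient)
    return p_voice / (p_voice + p_ambient)

print(gmm_voice_probability(0.75))  # high: the frame is kept as a human voice frame
print(gmm_voice_probability(0.25))  # low: the frame is filtered as ambient sound
```

The posterior plays the role of the human-voice probability that is then compared against the required probabilities in the following steps.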

Step 402, obtaining a first human voice frame and a second human voice frame, wherein the human voice probability of the first human voice frame is greater than or equal to a first requirement probability, and the human voice probability of the second human voice frame is less than the first requirement probability.

Optionally, when the human-voice probability of a speech frame reaches the first required probability, the frame is determined to be a first human voice frame; when it is less than the first required probability, the frame is determined to be a second human voice frame. A second human voice frame is still a frame generated by a human voice and is part of the voice content, but because its human-voice probability is low, it is likely to be a pause, a filler word, the trailing end of a voice, or the like, so muting it does not affect the intelligibility of the speech. When a second human voice frame is encoded as a human voice frame, its encoding result is longer than that of a silence frame and occupies bandwidth during transmission. The human voice frames are therefore divided, according to the first required probability, into first human voice frames that affect intelligibility and second human voice frames that do not.
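The split performed in step 402 amounts to a simple threshold over the detected probabilities. The 0.6 threshold and the frame records below are hypothetical, since the embodiments do not fix a value for the first required probability.

```python
FIRST_REQUIRED_PROBABILITY = 0.6  # hypothetical value

# Hypothetical (frame index, human-voice probability) pairs that survived
# the ambient-sound filtering of step 401.
frames = [(0, 0.95), (1, 0.40), (2, 0.72), (3, 0.15), (4, 0.61)]

# First human voice frames: probability >= the first required probability.
first_voice_frames = [f for f in frames if f[1] >= FIRST_REQUIRED_PROBABILITY]
# Second human voice frames: probability < the first required probability.
second_voice_frames = [f for f in frames if f[1] < FIRST_REQUIRED_PROBABILITY]

print(first_voice_frames)   # kept as-is for variable-length coding
print(second_voice_frames)  # to be normalized into silence frames
```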

Step 403, the second voice frame is normalized to a mute frame.

Optionally, a second human voice frame whose human voice probability is less than the first required probability is normalized into a mute frame, optionally by modification of its digital signal values. Normalizing the second human voice frame into a mute frame means processing it into the form in which mute frames exist in the target speech signal. Illustratively, the second human voice frame is processed according to the digital signal values of the mute frames in the target speech signal, for example by adjusting the digital signal values of the second human voice frame to the digital signal values corresponding to a mute frame.

Optionally, the first digital signal value of the second human voice frame is modified into the second digital signal value corresponding to the mute frame. Optionally, if the target speech signal includes n second human voice frames, the first digital signal values corresponding to all n frames are modified into the second digital signal value corresponding to the mute frame.
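A minimal sketch of this modification, assuming the mute frame's digital signal value is 0 (the actual value depends on the codec's silence representation):

```python
def normalize_to_mute_frame(frame, mute_value=0):
    """Modify the first digital signal values of a second human voice
    frame into the second digital signal value used by mute frames
    (assumed to be 0 here)."""
    return [mute_value] * len(frame)

def mute_second_voice_frames(frames, is_second_voice_frame):
    """Apply the normalization to every second human voice frame in
    the target speech signal, leaving other frames untouched.
    `is_second_voice_frame` is a hypothetical per-index predicate."""
    return [normalize_to_mute_frame(f) if is_second_voice_frame(i) else f
            for i, f in enumerate(frames)]
```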

Optionally, since the second human voice frame is generated by a person speaking, it is part of the speech content; since its human voice probability is low, it is likely to be a pause, a filler word, the trailing end of a voice, or the like, so muting it does not affect the intelligibility of the speech. Because encoding the second human voice frame as an ordinary human voice frame yields an encoding result longer than that of a mute frame and occupies extra bandwidth during transmission, the second human voice frame is normalized into a mute frame. On the premise of not affecting intelligibility, the target speech signal is then encoded with a shorter overall encoding result, reducing the bandwidth occupied during transmission.

Step 404, performing variable length coding on the first human voice frame and the mute frame to obtain an audio encoded stream, where a first encoding length of the mute frame under the variable length coding scheme is smaller than a second encoding length of the second human voice frame.

Optionally, the variable length coding scheme is one in which the encoding length of a speech frame is not fixed: speech frames whose in-frame signals contain different numbers of bytes produce encoding results of different lengths. Generally, after a second human voice frame (a human voice frame with low human voice probability) is encoded with the variable length coding scheme, the encoding result is about 30 bytes long, whereas after a mute frame is encoded with the same scheme, the encoding result is about 9 bytes long. That is, the first encoding length of the mute frame is smaller than the second encoding length of the second human voice frame.
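The effect can be illustrated with a toy variable length scheme (purely illustrative; a real codec such as OPUS uses far richer entropy coding): a mute frame collapses to a single marker byte while a voice frame carries a payload, so the mute frame's encoding length is necessarily the smaller of the two:

```python
def encode_frame_variable_length(frame):
    """Toy variable length coding: an all-zero (mute) frame encodes to a
    1-byte silence marker; any other frame is emitted as a type byte
    followed by one byte per sample."""
    if all(sample == 0 for sample in frame):
        return b"\x00"                          # silence marker only
    return b"\x01" + bytes(abs(s) % 256 for s in frame)
```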

In the embodiments of the present application, the variable length coding scheme is described taking the OPUS encoding format as an example; OPUS is standardized in RFC 6716.

Step 405, transmitting the audio encoded stream.

Optionally, when the audio encoded stream is transmitted, the audio encoded stream is packetized according to the application layer requirement and then transmitted.

Optionally, when the sending terminal sends the audio encoded stream, the sending terminal sends the audio encoded stream to a corresponding device according to a function currently applied by the sending terminal.

Illustratively, the sending terminal sends the audio coding stream to the receiving terminal, and a real-time voice connection is established between the sending terminal and the receiving terminal; or, the sending terminal sends the audio coding stream to a server, and the server comprises a real-time translation model used for real-time translation of the audio content corresponding to the audio coding stream; or, the sending terminal sends the audio coding stream to a server, and the server comprises a real-time sound change model used for carrying out sound change processing on the audio content corresponding to the audio coding stream. The three modes are explained separately:

firstly, a sending terminal sends an audio coding stream to a receiving terminal;

A real-time voice connection may be established directly between the sending terminal and the receiving terminal, or established through a server. When the receiving terminal receives the audio encoded stream sent by the sending terminal, it decodes the stream to obtain an audio decoded stream and plays it, that is, plays the speech content collected by the sending terminal.

Schematically, the receiving process of the receiving terminal is shown in fig. 5 and includes:

Step 501, initialization. The system thread is started, memory pool resources are applied for, the receiving terminal enters the corresponding room according to the service team formation information, and the system configuration information is pulled.

Step 502, the receiving terminal receives network packets from the cloud server 510. Optionally, packet reception is a looped process in which network packets are received from the cloud server that forwards the audio encoded stream.

Step 503, unpacking. Optionally, the protocol related to the service application is parsed, the service-layer load is removed, and the voice load, i.e. the audio encoded stream, is retained.

Step 504, buffering. Optionally, the voice load is held for a certain period, i.e. buffered.

Step 505, decoding. The receiving terminal decompresses the voice load with the corresponding decoder to obtain the audio decoded stream, i.e. the original voice stream from before encoding.

Step 506, playing. Optionally, the original voice stream is delivered to the play buffer of the receiving terminal for playing. If no end instruction has been received, execution returns to step 502 to continue receiving network packets; when a system end signal is received, step 507 is executed and the related resources are reclaimed.
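The unpack → buffer → decode → play loop of steps 502–506 can be sketched as follows, with `decode` and `play` standing in for the codec and the audio device (hypothetical callbacks, not the patent's interfaces):

```python
import collections

def receive_loop(packets, decode, play, buffer_depth=2):
    """Simplified receiver: unpack each packet, buffer the voice load
    against network jitter, then decode and play in order. `packets`
    stands in for the looped network reception."""
    jitter_buffer = collections.deque()
    played = []
    for packet in packets:
        payload = packet["voice_load"]        # unpack: drop service-layer fields
        jitter_buffer.append(payload)         # buffer for a certain period
        if len(jitter_buffer) >= buffer_depth:
            played.append(play(decode(jitter_buffer.popleft())))
    while jitter_buffer:                      # drain on the end signal
        played.append(play(decode(jitter_buffer.popleft())))
    return played
```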

Secondly, the sending terminal sends the audio coding stream to a server, and the server comprises a real-time translation model;

Optionally, the sending terminal is using the real-time translation function provided by the server, which includes a real-time translation model. After the sending terminal sends the audio encoded stream to the server, the server decodes the stream and translates the decoded speech content. Optionally, the server may feed the translation result back to the sending terminal, or may send it to the receiving terminal in text or voice form over the real-time voice connection established between the sending terminal and the receiving terminal through the server.

And thirdly, the sending terminal sends the audio coding stream to a server, and the server comprises a real-time sound changing model.

Optionally, the sending terminal is using the real-time sound changing function provided by the server, which includes a real-time sound changing model. After the sending terminal sends the audio encoded stream to the server, the server decodes the stream and applies sound changing to the decoded speech content. Optionally, the server may feed the sound changing result back to the sending terminal, send it to the receiving terminal over the real-time voice connection established between the two terminals through the server, or send it to both terminals; this is not limited in the embodiments of the present application.

In summary, in the method provided by this embodiment, before the target speech signal is encoded, the second human voice frames with lower human voice probability are normalized into mute frames, and the target speech signal is encoded with a variable length coding scheme under which the encoding result of a mute frame is shorter than that of a second human voice frame. Without affecting speech intelligibility, this reduces the encoding length of the target speech signal and the bandwidth it occupies during transmission, avoids the problem of long delays in real-time voice scenarios, and improves the transmission efficiency of the target speech signal.

In an optional embodiment, after human voice detection is performed on the target speech signal, ambient sound frames further need to be filtered out. Fig. 6 is a flowchart of a speech signal transmission method according to another exemplary embodiment of the present application, described taking as an example that the method is applied to the transmitting terminal 310 shown in fig. 3. As shown in fig. 6, the method includes:

step 601, obtaining a target voice signal to be sent.

Optionally, the target voice signal is a voice signal in the sending terminal that is to be sent to a receiving terminal, or a voice signal in the sending terminal that is to be sent to a server.

When the target voice signal is a voice signal to be sent to a receiving terminal in the sending terminal, a real-time voice connection is established between the sending terminal and the receiving terminal.

When the target voice signal is implemented as a voice signal to be sent to the server in the sending terminal, the sending terminal is applying a real-time voice processing function provided by the server, such as: a real-time voice translation function, a real-time voice sound changing function, a real-time voice optimization function and the like.

Step 602, performing voice probability detection on the voice frame in the target voice signal to obtain the voice probability of the voice frame.

Optionally, when human voice detection is performed on a speech frame in the target voice signal, the preset frequency band covered by human voice is first determined, and the human voice probability of the speech frame is then determined according to the proportion of the frame's signal that falls within the preset frequency band.

Optionally, the human voice probability is calculated by a Gaussian mixture model (GMM) algorithm, in which the model quantizes the speech frame with Gaussian probability density functions (normal distribution curves) and decomposes the frame into a plurality of components based on those density functions, thereby performing human voice probability detection on the speech frame.

Step 603, when the voice probability of the voice frame is smaller than the second required probability, determining that the voice frame is an environmental voice frame.

Optionally, when the human voice probability of the speech frame reaches the second required probability, the speech frame is determined to be a human voice frame; whether it is a human voice frame with higher or lower human voice probability is then determined according to the first required probability.

Optionally, the second required probability is smaller than the first required probability.

Step 604, discarding the environmental sound frame when the trailing (hangover) duration of the environmental sound frame reaches the target duration.

Optionally, the hangover duration indicates how long an environmental sound lasts: when one speech frame is determined to be an environmental sound frame, human voice detection is performed on the speech frames within the hangover duration after it, and the environmental sound frame is discarded only when all speech frames within the hangover duration are also environmental sound frames.
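A sketch of that discard rule follows; the threshold and window length are illustrative placeholders, not the patent's configured values:

```python
def filter_ambient_frames(frames, probs, second_required=0.3, hangover_frames=3):
    """Discard a frame only when it and every frame in the following
    hangover window all fall below the second required probability;
    otherwise the frame survives as a candidate human voice frame."""
    kept = []
    for i, (frame, p) in enumerate(zip(frames, probs)):
        window = probs[i:i + hangover_frames]
        if p < second_required and all(q < second_required for q in window):
            continue                      # environmental sound frame: discard
        kept.append(frame)
    return kept
```

Note the simplification: near the end of the signal the window shortens, where a production detector would wait for more frames before deciding.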

Step 605, determining the speech frames that are not discarded as human voice frames.

Step 606, obtaining a first human voice frame and a second human voice frame, where the human voice probability of the first human voice frame is greater than or equal to the first required probability, and the human voice probability of the second human voice frame is less than the first required probability.

Optionally, when the human voice probability of the speech frame reaches the first required probability, the speech frame is determined to be a first human voice frame; when the human voice probability is less than the first required probability, the speech frame is determined to be a second human voice frame.

Step 607, the second voice frame is normalized to a mute frame.

Optionally, the first digital signal value of the second human voice frame is modified to a second digital signal value corresponding to the mute frame. Optionally, if the target speech signal includes n frames of second frames of voice, the first digital signal values corresponding to the n frames of second frames of voice are modified into the second digital signal value corresponding to the silence frame.

Step 608, performing variable length coding on the first human voice frame and the mute frame to obtain an audio encoded stream, where a first encoding length of the mute frame under the variable length coding scheme is smaller than a second encoding length of the second human voice frame.

Optionally, the variable length coding scheme is one in which the encoding length of a speech frame is not fixed: speech frames whose in-frame signals contain different numbers of bytes produce encoding results of different lengths, and the first encoding length of the mute frame is smaller than the second encoding length of the second human voice frame.

Step 609, transmitting the audio encoded stream.

Optionally, when the sending terminal sends the audio encoded stream, the sending terminal sends the audio encoded stream to a corresponding device according to a function currently applied by the sending terminal.

Illustratively, the sending terminal sends the audio coding stream to the receiving terminal, and a real-time voice connection is established between the sending terminal and the receiving terminal; or, the sending terminal sends the audio coding stream to a server, and the server comprises a real-time translation model used for real-time translation of the audio content corresponding to the audio coding stream; or, the sending terminal sends the audio coding stream to a server, and the server comprises a real-time sound change model used for carrying out sound change processing on the audio content corresponding to the audio coding stream.

In summary, in the method provided by this embodiment, before the target speech signal is encoded, the second human voice frames with lower human voice probability are normalized into mute frames. Without affecting speech intelligibility, this reduces the encoding length of the target speech signal and the bandwidth it occupies during transmission, avoids the problem of long delays in real-time voice scenarios, and improves the transmission efficiency of the target speech signal.

In the method provided by this embodiment, after human voice detection is performed on the speech frames in the target speech signal, the environmental sound frames are first determined and discarded, and only then are the human voice frames with low human voice probability normalized into mute frames. Because environmental sound frames do not need to be sent, discarding them does not affect the intelligibility of the target speech signal, while further reducing the bandwidth the signal occupies during transmission and improving its transmission efficiency.

In an optional embodiment, the target speech signal needs to be preprocessed before human voice detection is performed. Fig. 7 is a flowchart of a speech signal transmission method according to another exemplary embodiment of the present application, described taking as an example that the method is applied to the transmitting terminal 310 shown in fig. 3. As shown in fig. 7, the method includes:

step 701, obtaining a target voice signal to be sent.

Optionally, the target voice signal is a voice signal in the sending terminal that is to be sent to a receiving terminal, or a voice signal in the sending terminal that is to be sent to a server.

Step 702, pre-processing the target voice signal.

Optionally, the preprocessing includes at least one of resampling processing, noise reduction processing, howling suppression processing, and echo cancellation processing.

The resampling processing includes at least one of up-sampling and down-sampling: during up-sampling, interpolation is performed on the target voice signal, and during down-sampling, decimation is performed on it. The noise reduction processing removes the noise component of the target voice signal. The howling suppression processing eliminates howling appearing in the target voice signal; howling can be suppressed by frequency equalization, adjusting the frequency response of the system toward a flat line so that the gains at all frequencies are essentially consistent. The Echo Cancellation (EC) processing removes echoes, which are divided into acoustic echo and line echo; the corresponding techniques are Acoustic Echo Cancellation (AEC) and Line Echo Cancellation (LEC).
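The two resampling directions can be sketched as follows (bare interpolation and decimation only; a production resampler also applies anti-aliasing and anti-imaging filters):

```python
def downsample(signal, factor):
    """Down-sampling by decimation: keep every `factor`-th sample.
    A real resampler low-pass filters first to avoid aliasing."""
    return signal[::factor]

def upsample(signal, factor):
    """Up-sampling by linear interpolation between adjacent samples."""
    out = []
    for a, b in zip(signal, signal[1:]):
        for i in range(factor):
            out.append(a + (b - a) * i / factor)
    out.append(signal[-1])
    return out
```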

Step 703, performing voice probability detection on the voice frame in the target voice signal to obtain the voice probability of the voice frame.

Optionally, the human voice probability is calculated by a Gaussian mixture model (GMM) algorithm, in which the model quantizes the speech frame with Gaussian probability density functions (normal distribution curves) and decomposes the frame into a plurality of components based on those density functions, thereby performing human voice probability detection on the speech frame.

Step 704, when the voice probability of the voice frame is smaller than the second required probability, determining that the voice frame is an environmental voice frame.

Optionally, when the human voice probability of the speech frame reaches the second required probability, the speech frame is determined to be a human voice frame; whether it is a human voice frame with higher or lower human voice probability is then determined according to the first required probability.

Step 705, when the trailing duration of the environmental sound frame reaches the target duration, discarding the environmental sound frame.

Optionally, the hangover duration indicates how long an environmental sound lasts: when one speech frame is determined to be an environmental sound frame, human voice detection is performed on the speech frames within the hangover duration after it, and the environmental sound frame is discarded only when all speech frames within the hangover duration are also environmental sound frames.

Step 706, determining the speech frames that are not discarded as human voice frames.

Step 707, obtaining a first human voice frame and a second human voice frame, where the human voice probability of the first human voice frame is greater than or equal to the first required probability, and the human voice probability of the second human voice frame is less than the first required probability.

Optionally, when the human voice probability of the speech frame reaches the first required probability, the speech frame is determined to be a first human voice frame; when the human voice probability is less than the first required probability, the speech frame is determined to be a second human voice frame.

Step 708, normalizing the second human voice frame into a mute frame.

Optionally, the first digital signal value of the second human voice frame is modified to a second digital signal value corresponding to the mute frame.

Step 709, performing variable length coding on the first human voice frame and the mute frame to obtain an audio encoded stream, where a first encoding length of the mute frame under the variable length coding scheme is smaller than a second encoding length of the second human voice frame.

Optionally, the variable length coding scheme is one in which the encoding length of a speech frame is not fixed: speech frames whose in-frame signals contain different numbers of bytes produce encoding results of different lengths, and the first encoding length of the mute frame is smaller than the second encoding length of the second human voice frame.

Step 710, transmitting the audio encoded stream.

Optionally, when the sending terminal sends the audio encoded stream, the sending terminal sends the audio encoded stream to a corresponding device according to a function currently applied by the sending terminal.

In summary, in the method provided by this embodiment, before the target speech signal is encoded, the second human voice frames with lower human voice probability are normalized into mute frames. Without affecting speech intelligibility, this reduces the encoding length of the target speech signal and the bandwidth it occupies during transmission, avoids the problem of long delays in real-time voice scenarios, and improves the transmission efficiency of the target speech signal.

In the method provided by this embodiment, the target voice signal is preprocessed before human voice detection is performed, and human voice detection is then performed on the preprocessed target voice signal.

Fig. 8 is an overall flowchart of a transmitting terminal transmitting a target voice signal according to an exemplary embodiment of the present application, and as shown in fig. 8, the process includes:

Step 801, initialization. Optionally, the system thread is started, memory pool resources are applied for, the user of the sending terminal enters the corresponding room according to the service team formation information, and the system pulls the configuration information.

Step 802, voice signal acquisition. Optionally, the sending terminal turns on a microphone and collects the original digital signal of the voice through it.

Step 803, preprocessing. Optionally, the preprocessing includes at least one of resampling processing, noise reduction processing, howling suppression processing, and echo cancellation processing.

Step 804, human voice probability detection. Optionally, based on Gaussian mixture model probability detection, the proportion of the speech frame is analyzed in each frequency band covered by human voice.

Step 805, determining whether the speech frame is a human voice frame. Optionally, when the human voice probability stays below the threshold Frate_min continuously for longer than the hangover duration, the speech frame is determined to be an environmental sound frame; otherwise it is determined to be a human voice frame.

Step 806, when the speech frame is an environmental sound frame, discarding it. Optionally, the environmental sound frame is not encoded, reducing bandwidth overhead and cost.

Step 807, when the speech frame is a human voice frame, determining whether its human voice probability is low.

Step 808, when the human voice frame is a low-probability human voice frame, normalizing it into a mute frame. Optionally, a human voice frame whose human voice probability is below the threshold Probasic_Min (configured by the server and obtained at initialization) is normalized into a mute frame.

Step 809, performing variable length coding on the non-low-probability human voice frames and the mute frames.

Step 810, packing and sending.

Step 811, sending the data packets to the cloud server. Optionally, the relevant protocols are packed according to application layer requirements, and the target voice code stream is transmitted to the cloud data forwarding server.
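Steps 804–811 above can be condensed into one sketch, omitting the hangover window for brevity; `voice_prob`, `encode`, and `transmit` are hypothetical callbacks, and the thresholds stand in for Frate_min and Probasic_Min with placeholder values:

```python
def send_pipeline(frames, voice_prob, encode, transmit,
                  frate_min=0.3, prob_min=0.6):
    """Classify each frame, discard environmental sound frames, normalize
    low-probability human voice frames into mute frames, then encode and
    transmit. Threshold values are illustrative, normally pulled from
    server configuration at initialization."""
    for frame in frames:
        p = voice_prob(frame)
        if p < frate_min:
            continue                       # environmental sound frame: not encoded
        if p < prob_min:
            frame = [0] * len(frame)       # normalize to a mute frame
        transmit(encode(frame))
```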

Fig. 9 is a transmitting apparatus of a voice signal according to an exemplary embodiment of the present application, and is described by taking as an example that the apparatus is applied to the transmitting terminal shown in fig. 3, and as shown in fig. 9, the apparatus includes: a determination module 910, a processing module 920, an encoding module 930, and a transmission module 940;

a determining module 910, configured to perform human voice probability detection on the speech frames in a target voice signal to obtain human voice frames;

the determining module 910 is further configured to obtain a first human voice frame and a second human voice frame, where the human voice probability of the first human voice frame is greater than or equal to a first required probability, and the human voice probability of the second human voice frame is less than the first required probability;

a processing module 920, configured to normalize the second voice frame to a mute frame;

an encoding module 930, configured to perform variable length coding on the first human voice frame and the mute frame to obtain an audio encoded stream, where a first encoding length of the mute frame under the variable length coding scheme is smaller than a second encoding length of the second human voice frame;

a sending module 940, configured to send the audio encoded stream.

In an optional embodiment, the processing module 920 is further configured to normalize the second voice frame to the silence frame in the target voice signal by modifying a digital signal value.

In an optional embodiment, the processing module 920 is further configured to modify the first digital signal value of the second human voice frame into a second digital signal value corresponding to the mute frame.

In an optional embodiment, the determining module 910 is further configured to determine the preset frequency band covered by human voice in the human voice detection, and to determine the human voice probability of the speech frame according to the proportion of the frame's signal that falls within the preset frequency band.

In an optional embodiment, the determining module 910 is further configured to determine that the speech frame is an ambient sound frame when the vocal probability of the speech frame is smaller than a second required probability; when the trailing duration of the environmental sound frame reaches the target duration, discarding the environmental sound frame; determining the speech frames that are not dropped as the human voice frames.

In an optional embodiment, the processing module 920 is further configured to pre-process the target speech signal;

the preprocessing mode comprises at least one of resampling processing, noise reduction processing, howling suppression processing and echo cancellation processing.

In an optional embodiment, the sending module 940 is further configured to send the audio encoded stream to a receiving terminal, where a real-time voice connection is established between the sending terminal and the receiving terminal;

or the like, or, alternatively,

the sending module 940 is further configured to send the audio coding stream to a server, where the server includes a real-time translation model and is configured to translate audio content corresponding to the audio coding stream in real time;

or the like, or, alternatively,

the sending module 940 is further configured to send the audio coding stream to a server, where the server includes a real-time sound-changing model and is configured to perform sound-changing processing on the audio content corresponding to the audio coding stream.

In summary, the transmitting apparatus for speech signals provided by this embodiment normalizes, before the target speech signal is encoded, the second human voice frames with low human voice probability into mute frames. Without affecting speech intelligibility, this reduces the encoding length of the target speech signal and the bandwidth it occupies during transmission, avoids the problem of long delays in real-time voice scenarios, and improves the transmission efficiency of the target speech signal.

It should be noted that: the voice signal transmitting apparatus provided in the foregoing embodiment is only illustrated by dividing the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the transmitting apparatus for a voice signal provided in the foregoing embodiment and the transmitting method embodiment for a voice signal belong to the same concept, and specific implementation processes thereof are detailed in the method embodiment and are not described herein again.

Fig. 10 shows a block diagram of a terminal 1000 according to an exemplary embodiment of the present invention. The terminal 1000 can be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1000 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.

In general, terminal 1000 can include: a processor 1001 and a memory 1002, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by the processor to implement the method of transmitting a voice signal as in the embodiments of the present application described above.

Optionally, the terminal 1000 further includes a microphone 1003, and the microphone 1003 is configured to collect the target voice signal.

Without loss of generality, computer-readable media may comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other solid-state memory technology, CD-ROM, Digital Versatile Discs (DVD) or other optical storage, magnetic disk storage, or other magnetic storage devices.

In another aspect, a computer device is provided, which includes a processor and a memory, wherein at least one instruction, at least one program, code set, or instruction set is stored in the memory, and the at least one instruction, the at least one program, code set, or instruction set is loaded and executed by the processor to implement the method for transmitting the voice signal as in the embodiments of the present application.

In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the method for transmitting a voice signal as in the embodiments of the present application.

In another aspect, a computer program product is provided, which when run on a computer, causes the computer to execute the method of transmitting a speech signal as in the embodiments of the present application described above.

Those skilled in the art will understand that all or part of the steps for implementing the above embodiments may be completed by hardware, or by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.

The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
