Audio processing method and apparatus, and electronic device

Note: This technology, "Audio processing method and apparatus, and electronic device", was designed and created by 郑宁杰 and 李伟 on 2020-04-29. Its main content is as follows: the application provides an audio processing method, an audio processing apparatus, and an electronic device. The method includes: receiving first voice information recognized by a microphone of the electronic device, where the first voice information is voice information corresponding to a first language; converting the first voice information into second voice information corresponding to a second language; and controlling a directional audio device of the electronic device to output the voice corresponding to the second voice information toward a target direction, so that a target user located in the target direction can hear the voice corresponding to the second voice information. In this way, by using the directional transmission function of the directional audio device of the electronic device, users who speak different languages can exchange voice information face to face through an electronic device equipped with such a device; and because the output is directional, there is no need to worry that the voice output by the directional audio device will be heard by users other than the target user, which keeps the chat content private.

1. An audio processing method applied to an electronic device, the method comprising:

receiving first voice information recognized by a microphone of the electronic device, wherein the first voice information is voice information corresponding to a first language;

converting the first voice information into second voice information corresponding to a second language;

and controlling a directional audio device of the electronic device to output the voice corresponding to the second voice information toward a target direction, so that a target user located in the target direction can hear the voice corresponding to the second voice information.

2. The method of claim 1, wherein the electronic device comprises at least two directional audio devices, each directional audio device having a different sound emission direction;

the controlling of the directional audio device of the electronic device to output the voice corresponding to the second voice information toward the target direction comprises:

determining a target directional audio device corresponding to the second language from the at least two directional audio devices;

and controlling the target directional audio device to output the voice corresponding to the second voice information towards the target direction.

3. The method of claim 2, wherein prior to determining the target directional audio device corresponding to the second language from the at least two directional audio devices, the method further comprises:

pre-establishing, among the at least two directional audio devices, a first association relationship between a first directional audio device and the first language, and a second association relationship between a second directional audio device and the second language;

the determining a target directional audio device corresponding to the second language from the at least two directional audio devices comprises:

and determining, based on the second association relationship, the second directional audio device as the target directional audio device corresponding to the second language.

4. The method of claim 1, wherein the controlling of the directional audio device of the electronic device to output the voice corresponding to the second voice information toward the target direction comprises:

controlling the directional audio device to rotate so that the sound emission direction of the directional audio device faces the target direction;

and controlling the directional audio device to output the voice corresponding to the second voice information.

5. The method according to claim 1, wherein after receiving first voice information recognized by a microphone of the electronic device, the first voice information corresponding to a first language, the method further comprises:

identifying azimuth information of a target object outputting the first voice information;

and recording and storing the azimuth information of the target object.

6. An audio processing apparatus, comprising:

a receiving module, configured to receive first voice information recognized by a microphone of the electronic device, wherein the first voice information is voice information corresponding to a first language;

a conversion module, configured to convert the first voice information into second voice information corresponding to a second language;

and an output module, configured to control a directional audio device of the electronic device to output the voice corresponding to the second voice information toward a target direction, so that a target user located in the target direction can hear the voice corresponding to the second voice information.

7. The audio processing apparatus according to claim 6, wherein the electronic device comprises at least two directional audio devices, each directional audio device having a different sound emission direction;

the output module includes:

a determining unit, configured to determine a target directional audio device corresponding to the second language from the at least two directional audio devices;

and a first output unit, configured to control the target directional audio device to output the voice corresponding to the second voice information toward the target direction.

8. The audio processing device according to claim 7, characterized in that the audio processing device further comprises:

an establishing module, configured to pre-establish, among the at least two directional audio devices, a first association relationship between a first directional audio device and the first language, and a second association relationship between a second directional audio device and the second language;

the determining unit is configured to determine, based on the second association relationship, the second directional audio device as a target directional audio device corresponding to the second language.

9. The audio processing apparatus according to claim 6, wherein the output module comprises:

a rotating unit, configured to control the directional audio device to rotate so that the sound emission direction of the directional audio device faces the target direction;

and a second output unit, configured to control the directional audio device to output the voice corresponding to the second voice information.

10. The audio processing device according to claim 6, characterized in that the audio processing device further comprises:

a recognition module, configured to recognize azimuth information of a target object that outputs the first voice information;

and a recording and storage module, configured to record and store the azimuth information of the target object.

11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the audio processing method of any of claims 1 to 5.

12. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the audio processing method according to any one of claims 1 to 5.

Technical Field

The present application relates to the field of communications technologies, and in particular, to an audio processing method and apparatus, and an electronic device.

Background

At present, when people who speak different languages communicate, they generally wear earphones to translate between languages and to prevent the translated speech from being heard by others. However, once a user puts on the earphones, although the user can hear the other party, the user's ears are isolated from the outside, so the user perceives the surrounding environment poorly. Existing translation earphones therefore provide a poor conversation experience.

Disclosure of Invention

Embodiments of the present application provide an audio processing method, an audio processing apparatus, and an electronic device, which can solve the problem that existing translation earphones provide a poor conversation experience.

In order to solve the above technical problem, the present application is implemented as follows:

in a first aspect, an embodiment of the present application provides an audio processing method, which is applied to an electronic device, and the method includes:

receiving first voice information recognized by a microphone of the electronic device, wherein the first voice information is voice information corresponding to a first language;

converting the first voice information into second voice information corresponding to a second language;

and controlling a directional audio device of the electronic device to output the voice corresponding to the second voice information toward a target direction, so that a target user located in the target direction can hear the voice corresponding to the second voice information.

In a second aspect, an embodiment of the present application further provides an audio processing apparatus, including:

a receiving module, configured to receive first voice information recognized by a microphone of the electronic device, wherein the first voice information is voice information corresponding to a first language;

a conversion module, configured to convert the first voice information into second voice information corresponding to a second language;

and an output module, configured to control a directional audio device of the electronic device to output the voice corresponding to the second voice information toward a target direction, so that a target user located in the target direction can hear the voice corresponding to the second voice information.

In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.

In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.

In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.

In the embodiments of the present application, the first voice information corresponding to the first language is converted into the second voice information corresponding to the second language, and the voice corresponding to the second voice information is output toward the target direction through the directional audio device of the electronic device, so that the target user located in the target direction hears it; in other words, the translated voice corresponding to the second voice information is transmitted directionally to the target user. By using the directional transmission function of the directional audio device, users who speak different languages can exchange voice information face to face through an electronic device equipped with such a device. Because the output is directional, there is no need to worry that the voice output by the directional audio device will be heard by users other than the target user, which keeps the chat content private.

Drawings

Fig. 1 is a flowchart of an audio processing method provided by an embodiment of the present application;

Fig. 2 is a schematic diagram of a chat scenario provided by an embodiment of the present application;

Fig. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;

Fig. 4 is a circuit diagram of a driving circuit of a directional audio device provided by an embodiment of the present application;

Fig. 5 is a second schematic diagram of a chat scenario provided by an embodiment of the present application;

Fig. 6 is a block diagram of an audio processing apparatus provided by an embodiment of the present application;

Fig. 7 is a block diagram of an electronic device provided by an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.

Embodiments of the present application provide an audio processing method that can be applied to an electronic device, where the electronic device includes a directional audio device capable of directional propagation of audible sound.

The frequency range audible to the human ear is 20 Hz to 20 kHz. Sound waves at audible frequencies have no definite directivity in air, so such sound can be heard by many people at once. Sound waves above 20 kHz are called ultrasound and are inaudible to the human ear. Audio directivity technology uses nonlinear acoustics to achieve directional propagation of audible sound: two high-frequency ultrasonic waves propagating in the same direction interact through the nonlinear characteristics of the medium, which demodulates difference-frequency waves, sum-frequency waves and other components in the air. With this technology, the generated difference-frequency wave is audible to the human ear, and the sound reaches only a designated area and cannot be heard elsewhere, so the sound propagation is highly directional.
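
A small numeric sketch may make this constraint concrete (the frequency values are illustrative, not taken from the application): for the demodulated difference frequency to be heard, both carriers must be ultrasonic and their difference must fall within the 20 Hz to 20 kHz band.

```python
# Minimal sketch: check that two ultrasonic carriers f1 and f2 demodulate to an
# audible difference frequency. Frequencies in Hz; values are illustrative.

AUDIBLE_MIN_HZ = 20
AUDIBLE_MAX_HZ = 20_000

def audible_difference(f1_hz: float, f2_hz: float) -> bool:
    """True if both carriers are ultrasonic and |f1 - f2| lies in the audible band."""
    both_ultrasonic = min(f1_hz, f2_hz) > AUDIBLE_MAX_HZ
    return both_ultrasonic and AUDIBLE_MIN_HZ <= abs(f1_hz - f2_hz) <= AUDIBLE_MAX_HZ

print(audible_difference(205_000, 200_000))  # True: the 5 kHz difference tone is audible
print(audible_difference(300_000, 200_000))  # False: a 100 kHz difference is still ultrasonic
```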

As shown in fig. 1, an audio processing method provided in an embodiment of the present application includes the following steps:

Step 101: receiving first voice information recognized by a microphone of the electronic device, where the first voice information is voice information corresponding to a first language.

Step 102: converting the first voice information into second voice information corresponding to a second language.

Step 103: controlling a directional audio device of the electronic device to output the voice corresponding to the second voice information toward a target direction, so that a target user located in the target direction can hear the voice corresponding to the second voice information.

In this embodiment, the first voice information corresponding to the first language is converted into the second voice information corresponding to the second language, and the voice corresponding to the second voice information is output toward the target direction through the directional audio device of the electronic device, so that the target user located in the target direction hears it; in other words, the translated voice corresponding to the second voice information is transmitted directionally to the target user.

Therefore, by using the directional transmission function of the directional audio device of the electronic device, users who speak different languages can exchange voice information face to face through an electronic device equipped with such a device. Because the output is directional, there is no need to worry that the voice output by the directional audio device will be heard by users other than the target user, which keeps the chat content private.

Moreover, with the audio processing method provided by the embodiments of the present application, face-to-face communication across different languages can be achieved without wearing dedicated translation earphones, so users can still perceive and hear other sounds in the current scene during the conversation, which effectively improves the chat experience.
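
As a minimal sketch of steps 101 to 103, the fragment below strings together speech recognition, translation, and directional output. The names (recognize_speech, translate_speech, DirectionalAudioDevice) are placeholder interfaces assumed for illustration; they are not APIs defined by the application or by any particular library.

```python
# Minimal sketch of the three-step method; all interfaces are placeholders.
from dataclasses import dataclass

@dataclass
class SpeechInfo:
    language: str   # e.g. "language_A"
    audio: bytes    # raw speech data

class DirectionalAudioDevice:
    """Stand-in for the phone's directional (parametric) speaker."""
    def output(self, audio: bytes, target_direction_deg: float) -> None:
        print(f"Emitting {len(audio)} bytes of speech toward {target_direction_deg} degrees")

def recognize_speech(mic_input: bytes, detected_language: str) -> SpeechInfo:
    # Placeholder: in a real device this would come from the microphone plus speech recognition.
    return SpeechInfo(language=detected_language, audio=mic_input)

def translate_speech(info: SpeechInfo, target_language: str) -> SpeechInfo:
    # Placeholder: speech-to-speech translation into the target language.
    return SpeechInfo(language=target_language, audio=info.audio)

def process_audio(mic_input: bytes, detected_language: str, target_language: str,
                  target_direction_deg: float, device: DirectionalAudioDevice) -> None:
    first = recognize_speech(mic_input, detected_language)      # step 101
    second = translate_speech(first, target_language)           # step 102
    device.output(second.audio, target_direction_deg)           # step 103

process_audio(b"...", "language_A", "language_B", 180.0, DirectionalAudioDevice())
```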

As shown in fig. 2, user A and user B communicate face to face; user A speaks language A and user B speaks language B. Since their languages differ, the chat content needs to be translated into a language the other party can understand. In practice, when user A says a first content to user B in language A, the first content in language A is recognized through the microphone of the electronic device and then translated into a second content corresponding to language B; the voice of the second content is then output toward the position of user B through the directional audio device of the electronic device, so that user B hears the voice of the second content, realizing face-to-face communication between user A and user B.

Correspondingly, when user B speaks to user A in language B, what user B says can be converted into language A in the same way and transmitted to user A, thereby realizing face-to-face communication between user A and user B.

Optionally, the electronic device includes at least two directional audio devices, and the sound emission direction of each directional audio device is different. In this case, the controlling of the directional audio device of the electronic device to output the voice corresponding to the second voice information toward the target direction includes: determining a target directional audio device corresponding to the second language from the at least two directional audio devices; and controlling the target directional audio device to output the voice corresponding to the second voice information toward the target direction.

In this embodiment, when the electronic device includes at least two directional audio devices and each directional audio device has a different sound emission direction, the language information associated with each directional audio device may be set, so that a target directional audio device among the at least two directional audio devices can be controlled to output voice toward the target direction, thereby implementing directional transmission of the voice.

For example, the electronic device includes a first directional audio device and a second directional audio device; the sound emission direction of the first directional audio device is parallel to the normal direction of the display screen of the electronic device, and the sound emission direction of the second directional audio device is parallel to the normal direction of the battery cover of the electronic device. That is, the first directional audio device and the second directional audio device are disposed on the two opposite faces (front and back) of the electronic device, and their sound emission directions are opposite.

For example, as shown in fig. 2, the side of the electronic device facing user A is provided with the first directional audio device, and the side facing user B is provided with the second directional audio device. With this placement, the first directional audio device can be used to output directional speech to user A, and the second directional audio device can be used to output directional speech to user B. Meanwhile, an association relationship between the first directional audio device and language A of user A is established, and an association relationship between the second directional audio device and language B of user B is established.

For a foldable-screen electronic device comprising a first screen and a second screen, the first directional audio device may be arranged on the first screen and the second directional audio device on the second screen, so as to implement directional transmission of voice. The working principle is the same as that of the electronic device shown in fig. 2.

When the microphone of the electronic device recognizes first voice information corresponding to language A, it indicates that user A is speaking; therefore, the recognized first voice information is converted into second voice information corresponding to language B, and the voice corresponding to the second voice information is transmitted directionally to user B through the second directional audio device, so that user B receives the content expressed by user A. When the microphone of the electronic device recognizes third voice information corresponding to language B, it indicates that user B is speaking; therefore, the recognized third voice information is converted into fourth voice information corresponding to language A, and the voice corresponding to the fourth voice information is transmitted directionally to user A through the first directional audio device, so that user A receives the content expressed by user B. In this way, user A and user B can communicate face to face without barriers, and the directional transmission function of the directional audio devices keeps the chat content private.
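
A minimal routing sketch of this two-device embodiment follows, assuming the layout of fig. 2: the first directional audio device on the screen side faces user A (language A), and the second directional audio device on the rear-cover side faces user B (language B). The device names and the mapping are illustrative placeholders, not an interface defined by the application.

```python
# Minimal sketch: pre-established association relationships between languages
# and directional audio devices, and routing of a recognized utterance.

# Claim-3-style association: speech translated INTO a language is emitted by the
# device facing the user who speaks that language.
DEVICE_FOR_LANGUAGE = {
    "language_A": "first_directional_device (screen side, faces user A)",
    "language_B": "second_directional_device (rear-cover side, faces user B)",
}

def route_utterance(detected_language: str) -> tuple[str, str]:
    """Return (target_language, target_directional_device) for one utterance."""
    target_language = "language_B" if detected_language == "language_A" else "language_A"
    return target_language, DEVICE_FOR_LANGUAGE[target_language]

print(route_utterance("language_A"))  # user A spoke -> translate to B, emit toward user B
print(route_utterance("language_B"))  # user B spoke -> translate to A, emit toward user A
```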

Optionally, the controlling of the directional audio device of the electronic device to output the voice corresponding to the second voice information toward the target direction includes: controlling the directional audio device to rotate so that the sound emission direction of the directional audio device faces the target direction; and controlling the directional audio device to output the voice corresponding to the second voice information.

In this embodiment, the directional audio device can rotate to any angle relative to the housing of the electronic device, so that its sound emission direction can be turned toward any direction before the corresponding voice is output. Compared with requiring the user to reposition the electronic device in order to adjust the sound emission direction of the directional audio device, this effectively improves the flexibility of adjusting the sound emission direction.

Furthermore, while the first voice information is being recognized, the azimuth information of the user who produced it can be determined from the recognized first voice information, and this azimuth information is recorded and stored; when voice needs to be output directionally to that user, the stored azimuth information can be looked up and the sound emission direction of the directional audio device of the electronic device rotated toward the user, thereby realizing directional transmission of the voice.
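
The following sketch illustrates this azimuth-recording and rotation idea. The azimuth estimation and the motor control are represented by placeholder methods; they are assumptions for illustration, not device APIs from the application.

```python
# Minimal sketch: store the azimuth at which each language's speaker was heard,
# then rotate the directional audio device toward that azimuth before output.

class RotatableDirectionalDevice:
    def __init__(self) -> None:
        self.current_azimuth_deg = 0.0

    def rotate_to(self, azimuth_deg: float) -> None:
        self.current_azimuth_deg = azimuth_deg  # placeholder for actuator control

    def play(self, audio: bytes) -> None:
        print(f"Playing translated speech toward {self.current_azimuth_deg} degrees")

speaker_azimuth: dict[str, float] = {}  # language -> last known azimuth of its speaker

def on_speech_recognized(language: str, estimated_azimuth_deg: float) -> None:
    speaker_azimuth[language] = estimated_azimuth_deg  # record and store

def output_translated(target_language: str, audio: bytes,
                      device: RotatableDirectionalDevice) -> None:
    device.rotate_to(speaker_azimuth[target_language])  # face the stored direction
    device.play(audio)

device = RotatableDirectionalDevice()
on_speech_recognized("language_B", 135.0)        # user B was heard at 135 degrees
output_translated("language_B", b"...", device)  # later, speak toward user B
```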

The microphone may be arranged inside the electronic device, with its sound inlet provided on the frame of the electronic device, so that sound waves reach the microphone through the air and the sound inlet channel.

Furthermore, in order to improve the directivity of the sound from the directional audio device, a directional audio device array may be arranged, which increases the ultrasonic transmitting capability and makes the ultrasonic beam more concentrated.

As shown in fig. 3, one directional audio device array may include four directional audio devices 31; it should be noted that the number of directional audio devices in the array can be set according to actual needs.

The directional audio device array can realize directional transmission of sound signals: piezoelectric ceramic transducers drive the screen and the rear cover of the electronic device to vibrate and emit ultrasonic waves, and, owing to the nonlinear demodulation effect of the air medium, the required audio signal is finally obtained in a specific area, realizing directional transmission of sound.

As shown in fig. 4, the driving circuit of the directional audio device includes a data signal processing module, a power amplifier, and the directional audio device itself. Exploiting the high directivity of ultrasound, both the screen and the rear cover of the electronic device can emit ultrasonic waves, and the required audio signal is finally obtained through the nonlinear demodulation effect of the air. For example, when two ultrasonic waves with frequencies f1 and f2 are emitted, the nonlinear interaction in air produces multiple sound-wave components such as f1, f2, f1+f2, f1-f2, 2f1 and 2f2. By choosing f1 and f2 appropriately, a difference-frequency sound wave f1-f2 that lies in the audible frequency range can be obtained.
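
A small numerical sketch (using NumPy) of this two-tone mixing follows. Squaring the drive signal is used as a crude stand-in for the quadratic nonlinearity of propagation in air, and the carrier frequencies are illustrative values, not figures taken from the application.

```python
# Minimal sketch: two ultrasonic carriers f1 and f2 are generated by the data
# signal processing stage, amplified, and emitted; nonlinear mixing in air then
# yields components at f1 +/- f2, 2*f1 and 2*f2, of which only f1 - f2 is audible.
import numpy as np

fs = 1_000_000               # sample rate of the simulated drive waveform, Hz
f1, f2 = 205_000, 200_000    # illustrative ultrasonic carrier frequencies, Hz
t = np.arange(0, 0.01, 1 / fs)

drive = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)  # sent to the power amplifier

mixed = drive ** 2  # crude model of the quadratic nonlinearity of air
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)

audible = (freqs > 20) & (freqs < 20_000)
peak_hz = freqs[audible][np.argmax(spectrum[audible])]
print(f"Dominant audible component: about {peak_hz:.0f} Hz")  # ~5000 Hz = f1 - f2
```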

Optionally, when user A and user B communicate face to face through an electronic device such as a mobile phone, the device may be fixed on a stand so that the sound emission direction of its directional audio device points directly at the target user.

Further, the method and apparatus can also be applied to communication scenarios involving more speakers of different languages. The following describes a communication scenario among users of three different languages.

As shown in fig. 5, the languages of user A, user B, and user C are language A, language B, and language C, respectively, and two mobile phones, mobile phone 501 and mobile phone 502, may be used. The screen of mobile phone 501 faces user B and its back cover faces user A; the screen of mobile phone 502 faces user C and its back cover faces user A.

Mobile phone 501 is configured as follows: the screen face is set to language B and the rear cover face is set to language A; with this configuration, the screen face emits only language B and the rear cover face emits only language A.

Mobile phone 502 is configured as follows: the screen face is set to language C and the rear cover face is set to language A; with this configuration, the screen face emits only language C and the rear cover face emits only language A.

When a user speaks, the microphone of the mobile phone receives the voice signal and carries out language identification on the received voice information.

For mobile phone 501, when the signal received by the microphone is in language A or language C, the phone translates it into language B and emits an ultrasonic signal through the directional sound-generating device on the screen face, which is demodulated into an audible signal when it reaches the listener's ear. When the signal received by the microphone is in language B, the phone translates it into language A and the directional sound-generating device on the rear cover face operates instead.

For mobile phone 502, when the signal received by the microphone is in language A or language B, the phone translates it into language C and emits an ultrasonic signal through the directional sound-generating device on the screen face, which is demodulated into an audible signal when it reaches the listener's ear. When the signal received by the microphone is in language C, the phone translates it into language A and the directional sound-generating device on the rear cover face operates instead.
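
The per-phone routing rule just described can be summarized in a short sketch, using the configurations of mobile phones 501 and 502 above; the function and face names are illustrative placeholders, not part of any defined interface.

```python
# Minimal sketch of the routing rule: an utterance in the screen-face language is
# translated into the rear-cover language and emitted from the rear cover; an
# utterance in any other language is translated into the screen-face language
# and emitted from the screen face.

def route(screen_lang: str, rear_lang: str, detected_lang: str) -> tuple[str, str]:
    """Return (target_language, emitting_face) for one configured phone."""
    if detected_lang == screen_lang:
        return rear_lang, "rear_cover_face"
    return screen_lang, "screen_face"

# Mobile phone 501: screen face -> language B, rear cover -> language A.
print(route("B", "A", "A"))  # user A speaks -> ('B', 'screen_face'), heard by user B
print(route("B", "A", "C"))  # user C speaks -> ('B', 'screen_face'), heard by user B
print(route("B", "A", "B"))  # user B speaks -> ('A', 'rear_cover_face'), heard by user A

# Mobile phone 502: screen face -> language C, rear cover -> language A.
print(route("C", "A", "B"))  # user B speaks -> ('C', 'screen_face'), heard by user C
```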

It should be noted that, in a similar manner, a chat scenario with a larger number of users can be realized, and the same effect can be achieved.

In the audio processing method provided by the embodiments of the present application, the execution subject may be an audio processing apparatus, or a control module in the audio processing apparatus for executing the audio processing method. In the embodiments of the present application, the audio processing apparatus executing the audio processing method is taken as an example to describe the audio processing apparatus provided by the embodiments of the present application.

As shown in fig. 6, an embodiment of the present application provides an audio processing apparatus, where the audio processing apparatus 600 includes:

a receiving module 601, configured to receive first voice information recognized by a microphone of an electronic device, where the first voice information is voice information corresponding to a first language;

a conversion module 602, configured to convert the first voice information into second voice information corresponding to a second language;

an output module 603, configured to control a directional audio device of the electronic device to output the voice corresponding to the second voice information toward a target direction, so that a target user located in the target direction can hear the voice corresponding to the second voice information.

Optionally, the electronic device includes at least two directional audio devices, and the sound emission direction of each directional audio device is different;

the output module 603 includes:

a determining unit, configured to determine a target directional audio device corresponding to the second language from the at least two directional audio devices;

and a first output unit, configured to control the target directional audio device to output the voice corresponding to the second voice information toward the target direction.

Optionally, the audio processing apparatus 600 further includes:

an establishing module, configured to pre-establish, among the at least two directional audio devices, a first association relationship between a first directional audio device and the first language, and a second association relationship between a second directional audio device and the second language;

the determining unit is configured to determine, based on the second association relationship, the second directional audio device as a target directional audio device corresponding to the second language.

Optionally, the output module 603 includes:

a rotating unit, configured to control the directional audio device to rotate so that the sound emission direction of the directional audio device faces the target direction;

and a second output unit, configured to control the directional audio device to output the voice corresponding to the second voice information.

Optionally, the audio processing apparatus 600 further includes:

a recognition module, configured to recognize azimuth information of the target object that outputs the first voice information;

and a recording and storage module, configured to record and store the azimuth information of the target object.

The audio processing apparatus 600 in the embodiments of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and the like, and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, and the like; the embodiments of the present application are not specifically limited in this respect.

The audio processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this respect.

The audio processing apparatus 600 provided in this embodiment of the application can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 5, and for avoiding repetition, details are not described here again.

Optionally, an embodiment of the present application further provides an electronic device, which includes a processor, a memory, and a program or an instruction stored in the memory and capable of running on the processor, where the program or the instruction is executed by the processor to implement each process of the above-mentioned audio processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.

It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.

As shown in fig. 7, an embodiment of the present application further provides an electronic device, where the electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.

Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 710 via a power management system, so that functions such as managing charging, discharging, and power consumption are performed via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not described here again.

The input unit 704 is configured to receive first voice information recognized by a microphone of the electronic device, where the first voice information is voice information corresponding to a first language; a processor 710 for converting the first voice information into second voice information corresponding to a second language; and an audio output unit 703 for controlling a directional audio device of the electronic device to output the voice corresponding to the second voice information to a target direction, so that a target user located at the target direction listens to the voice corresponding to the second voice information.

Optionally, the electronic device includes at least two directional audio devices, and the sound emission direction of each directional audio device is different; a user input unit 707 for determining a target directional audio device corresponding to the second language from the at least two directional audio devices; and an audio output unit 703 configured to control the target directional audio device to output a voice corresponding to the second voice information to a target direction.

Optionally, the processor 710 is configured to pre-establish a first association relationship between a first directional audio device and the first language and a second association relationship between a second directional audio device and the second language in the at least two directional audio devices; and the processor 710 is configured to determine, based on the second association relationship, the second directional audio device as a target directional audio device corresponding to the second language.

Optionally, the processor 710 is configured to control the directional audio device to rotate so that the sound emission direction of the directional audio device faces the target direction; and an audio output unit 703 configured to control the directional audio device to output a voice corresponding to the second voice information.

Optionally, the processor 710 is configured to identify azimuth information of a target object outputting the first speech information; and the processor 710 is used for recording and storing the azimuth information of the target object.

The electronic device 700 is capable of implementing the processes implemented by the electronic device in the foregoing embodiments, and in order to avoid repetition, the details are not described here.

It should be understood that, in the embodiments of the present application, the radio frequency unit 701 may be used for receiving and sending signals during a message transmission/reception process or a call process; specifically, it receives downlink data from a base station and forwards the data to the processor 710 for processing, and it also transmits uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.

The electronic device provides wireless broadband internet access to the user via the network module 702, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.

The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may also provide audio output related to a specific function performed by the electronic apparatus 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.

The input unit 704 is used to receive audio or video signals. The input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042, where the graphics processor 7041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or other storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 can receive sounds and process them into audio data. In the case of a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 701.

The electronic device 700 also includes at least one sensor 705, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 7061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 7061 and/or a backlight when the electronic device 700 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 705 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.

The display unit 706 is used to display information input by the user or information provided to the user. The Display unit 706 may include a Display panel 7061, and the Display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.

The user input unit 707 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 7071 (e.g., operations by a user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 710, receives a command from the processor 710, and executes the command. In addition, the touch panel 7071 can be implemented by various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 707 may include other input devices 7072 in addition to the touch panel 7071. In particular, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described herein again.

Further, the touch panel 7071 may be overlaid on the display panel 7061, and when the touch panel 7071 detects a touch operation on or near the touch panel 7071, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown in fig. 7 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.

The interface unit 708 is an interface for connecting an external device to the electronic apparatus 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 700 or may be used to transmit data between the electronic apparatus 700 and the external device.

The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 709 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.

The processor 710 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby monitoring the whole electronic device. Processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.

The electronic device 700 may also include a power supply 711 (e.g., a battery) for providing power to the various components, and preferably, the power supply 711 may be logically coupled to the processor 710 via a power management system, such that functions of managing charging, discharging, and power consumption may be performed via the power management system.

In addition, the electronic device 700 includes some functional modules that are not shown, and are not described in detail herein.

The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the audio processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.

The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.

The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above-mentioned audio processing method embodiment, and can achieve the same technical effect, and is not described here again to avoid repetition.

It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.

Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.

While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
