Method and terminal for realizing stereo output

Document No.: 38343    Publication date: 2021-09-24

Note: this technology, "Method and terminal for realizing stereo output" (实现立体声输出的方法及终端), was designed and created by 孙冉 (Sun Ran) and 叶千峰 (Ye Qianfeng) on 2020-01-22. Abstract: The application provides a method and a terminal for realizing stereo output. The terminal comprises a left sound outlet hole, a right sound outlet hole, an upper sound outlet hole and a lower sound outlet hole arranged on its four sides. When the terminal plays stereo on a vertical screen, the left and right sound outlet holes lead out different side sound channels. When the terminal plays stereo on a horizontal screen, at least one of the left, right and upper sound outlet holes leads out the sound channel of one side, and the lower sound outlet hole leads out the sound channel of the other side. This technical scheme enables the terminal to realize stereo output in both horizontal-screen and vertical-screen use, improving the user experience.

1. A terminal, comprising: a left sound outlet hole, a right sound outlet hole, an upper sound outlet hole and a lower sound outlet hole formed in the four sides of the terminal; wherein:

when the terminal is used for playing stereo on a vertical screen, the left sound outlet hole and the right sound outlet hole are used for leading out different side sound channels;

when the terminal is used for playing stereo on a horizontal screen, at least one of the left sound outlet hole, the right sound outlet hole and the upper sound outlet hole is used for leading out a sound channel on one side, and the lower sound outlet hole is used for leading out a sound channel on the other side.

2. The terminal of claim 1, wherein, when the terminal is used for playing stereo on a vertical screen, the lower sound outlet hole is configured to lead out a bass channel or a center channel, and/or the upper sound outlet hole is configured to lead out a mixed channel obtained by aggregating the different side channels.

3. The terminal according to claim 1 or 2, wherein, when the terminal is used for a call and the terminal is in earpiece mode, the upper sound outlet hole is used for leading out mono voice; and/or

when the terminal is used for a call and the terminal is in speakerphone mode, at least one of the left sound outlet hole, the right sound outlet hole, the upper sound outlet hole and the lower sound outlet hole is used for leading out mono voice.

4. The terminal according to any one of claims 1 to 3, wherein the left sound outlet hole and the right sound outlet hole are respectively provided on the two sides of the terminal extending in its length direction, and the upper sound outlet hole and the lower sound outlet hole are respectively provided on the two sides of the terminal extending in its width direction.

5. The terminal according to any one of claims 1 to 4, wherein the distance from the left sound outlet hole to the side where the upper sound outlet hole is located is smaller than the distance from the left sound outlet hole to the side where the lower sound outlet hole is located; and/or

the distance from the right sound outlet hole to the side where the upper sound outlet hole is located is smaller than the distance from the right sound outlet hole to the side where the lower sound outlet hole is located.

6. The terminal according to any one of claims 1 to 5, wherein the terminal further comprises a first sound emitting unit, a second sound emitting unit and a third sound emitting unit; wherein:

the first sound emitting unit communicates with the left sound outlet hole and the upper sound outlet hole respectively;

the second sound emitting unit communicates with the right sound outlet hole and the upper sound outlet hole respectively;

the third sound emitting unit communicates with the lower sound outlet hole.

7. The terminal of claim 6, wherein, when the terminal is used for playing stereo on a vertical screen, the first sound emitting unit and the second sound emitting unit are used for playing different side channels and emit sound through the left sound outlet hole and the right sound outlet hole respectively.

8. The terminal according to claim 6 or 7, wherein, when the terminal is used for playing stereo on a vertical screen, the third sound emitting unit is used for playing a bass channel or a center channel and emits sound through the lower sound outlet hole; or the third sound emitting unit does not play any channel.

9. The terminal according to any one of claims 6 to 8, wherein, when the terminal is used for playing stereo on a horizontal screen, the first sound emitting unit and/or the second sound emitting unit are used for playing the channel of one side, and the third sound emitting unit is used for playing the channel of the other side.

10. The terminal of claim 9, wherein the first sound emitting unit is configured to play the one-side channel and emit sound through at least one of the upper sound outlet hole and the left sound outlet hole; and/or

the second sound emitting unit is configured to play the one-side channel and emit sound through at least one of the upper sound outlet hole and the right sound outlet hole.

11. The terminal according to any one of claims 6 to 10, wherein, when the terminal is used for a call and the terminal is in earpiece mode, the first sound emitting unit and/or the second sound emitting unit are used for playing mono voice; and/or

when the terminal is used for a call and the terminal is in speakerphone mode, at least one of the first sound emitting unit, the second sound emitting unit and the third sound emitting unit is used for playing mono voice.

12. The terminal according to any one of claims 6 to 11, wherein the first sound emitting unit and the second sound emitting unit are provided at one end of the terminal in its length direction, and the third sound emitting unit is provided at the other end of the terminal in its length direction.

13. The terminal according to any one of claims 6 to 12, wherein the sound guide channel between the first sound emitting unit and the upper sound outlet hole and/or the sound guide channel between the second sound emitting unit and the upper sound outlet hole is kept in a normally open state.

14. The terminal according to any one of claims 1 to 13, wherein the terminal further comprises:

a control component, configured to control the sound emitting state of at least one of the left sound outlet hole, the right sound outlet hole, the upper sound outlet hole and the lower sound outlet hole.

15. The terminal according to any one of claims 1 to 14, wherein the terminal further comprises:

a processing unit, configured to determine the sound guide modes of the left sound outlet hole, the right sound outlet hole, the upper sound outlet hole and the lower sound outlet hole according to the usage scenario of the terminal;

wherein the usage scenario of the terminal includes at least one of the following: playing stereo on a horizontal screen, playing stereo on a vertical screen, an earpiece call scenario and a speakerphone call scenario.

16. A method for realizing stereo output, applied to a terminal, wherein the terminal comprises a left sound outlet hole, a right sound outlet hole, an upper sound outlet hole and a lower sound outlet hole arranged on the four sides of the terminal, and the method comprises:

when the terminal is used for playing stereo on a vertical screen, controlling the left sound outlet hole and the right sound outlet hole to lead out different side sound channels;

when the terminal is used for playing stereo on a horizontal screen, controlling at least one of the left sound outlet hole, the right sound outlet hole and the upper sound outlet hole to lead out the sound channel of one side, and controlling the lower sound outlet hole to lead out the sound channel of the other side.

17. The method of claim 16, further comprising:

when the terminal is used for playing stereo on a vertical screen, controlling the lower sound outlet hole to lead out a bass channel or a center channel; and/or

when the terminal is used for playing stereo on a vertical screen, controlling the upper sound outlet hole to lead out a mixed channel formed by aggregating the channels of different sides.

18. The method according to claim 16 or 17, further comprising:

when the terminal is used for a call and the terminal is in earpiece mode, controlling the upper sound outlet hole to lead out mono voice; and/or

when the terminal is used for a call and the terminal is in speakerphone mode, controlling at least one of the left sound outlet hole, the right sound outlet hole, the upper sound outlet hole and the lower sound outlet hole to lead out mono voice.

19. The method according to any one of claims 16 to 18, wherein the terminal further comprises a first sound emitting unit, a second sound emitting unit and a third sound emitting unit; wherein:

the first sound emitting unit communicates with the left sound outlet hole and the upper sound outlet hole respectively;

the second sound emitting unit communicates with the right sound outlet hole and the upper sound outlet hole respectively;

the third sound emitting unit communicates with the lower sound outlet hole.

20. The method of any one of claims 16 to 19, further comprising:

determining the sound guide modes of the left sound outlet hole, the right sound outlet hole, the upper sound outlet hole and the lower sound outlet hole according to the usage scenario of the terminal;

wherein the usage scenario of the terminal includes at least one of the following: playing stereo on a horizontal screen, playing stereo on a vertical screen, an earpiece call scenario and a speakerphone call scenario.

Technical Field

The present application relates to the technical field of stereo sound implementation, and more particularly, to a method and a terminal for realizing stereo output.

Background

With the continuous development of terminal technology, electronic devices such as mobile phones have become a main platform for streaming media and enjoying content. To match the visual experience of clearer, brighter and larger screens, more and more mobile terminals, such as mobile phones, add a stereo output function to provide users with a better listening experience.

Most existing terminals play audio with a receiver (earpiece) at the top and a loudspeaker at the bottom. This top-bottom speaker layout can realize stereo playback only in landscape playback scenarios. In portrait playback scenarios, the receiver and the loudspeaker do not distinguish the left and right channels, so they play the same sound content and stereo playback cannot be realized.

At present, more and more users use short-video and live-streaming applications, and these platforms adopt a portrait format to increase traffic and immersion. Accordingly, portrait playback scenarios are increasingly common, and users want better sound quality when playing in portrait orientation.

Disclosure of Invention

The application provides a method and a terminal for realizing stereo output, which enable a terminal device to realize stereo output in both horizontal-screen and vertical-screen use, improving the user experience.

In a first aspect, a terminal is provided, comprising a first sound emitting unit, a second sound emitting unit and a third sound emitting unit, and a left sound outlet hole, a right sound outlet hole, an upper sound outlet hole and a lower sound outlet hole arranged on the four sides of the terminal. The first sound emitting unit communicates with the left sound outlet hole and the upper sound outlet hole; the second sound emitting unit communicates with the right sound outlet hole and the upper sound outlet hole; the third sound emitting unit communicates with the lower sound outlet hole. The terminal further comprises a processing unit. When the terminal is used for playing stereo on a vertical screen, the processing unit controls the first and second sound emitting units to play different side channels, sounding from the left and right sound outlet holes. When the terminal is used for playing stereo on a horizontal screen, the processing unit controls the first and/or second sound emitting units to play the channel of one side, sounding from at least one of the left, right and upper sound outlet holes, and controls the third sound emitting unit to play the channel of the other side.

The terminal provided in the embodiments of this application includes three sound emitting units, with the first and second sound emitting units each communicating with sound outlet holes in two directions of the terminal. By controlling the playback mode of the sound emitting units and which sound outlet holes emit sound, stereo output can be realized in both horizontal-screen and vertical-screen use.

Compared with existing terminals, the terminal provided in the embodiments of this application adds only one additional sound emitting unit, so it solves the lack of stereo in vertical-screen playback while using space as efficiently as possible, improving the user experience.

It should be understood that, in the embodiments of the present application, the left sound outlet hole is located on the left side of the terminal, the right sound outlet hole on the right side, the upper sound outlet hole at the top, and the lower sound outlet hole at the bottom.

Optionally, when the terminal is used for playing stereo on a vertical screen, the processing unit controls the first sound emitting unit and the second sound emitting unit to play the left channel and the right channel respectively.

Optionally, when the terminal is used for playing stereo on a horizontal screen, the processing unit controls the first and/or second sound emitting units to play one of the left and right channels, and controls the third sound emitting unit to play the other.

With reference to the first aspect, in a possible implementation manner, when the terminal is used for playing stereo on a vertical screen, the processing unit controls the first and second sound emitting units to play different side channels, where the first sound emitting unit emits sound only from the left sound outlet hole and the second sound emitting unit only from the right sound outlet hole.

When the terminal plays stereo on a vertical screen, the first sound emitting unit emits sound only from the left sound outlet hole and the second sound emitting unit only from the right sound outlet hole; since neither unit emits sound from the top, crosstalk between the different side channels at the top is avoided.

With reference to the first aspect, in a possible implementation manner, when the terminal is used for playing stereo on a horizontal screen, the processing unit controls the first and/or second sound emitting units to play the channel of one side, emitting sound only from the upper sound outlet hole, and controls the third sound emitting unit to play the channel of the other side.

When the terminal plays stereo on a horizontal screen, the sound played by the first and/or second sound emitting units is emitted only through the upper sound outlet hole, and the sound played by the third sound emitting unit through the lower sound outlet hole, giving a better stereo effect.
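As an illustration only (unit and outlet names below are assumptions, not terms from the application), the orientation-dependent routing described above can be sketched as a lookup table mapping each sound emitting unit to the channel it plays and the outlet(s) it sounds through:

```python
# Illustrative routing table for the three-unit design described above.
ROUTING = {
    "portrait": {
        "unit1": ("left",  ["left_outlet"]),   # left channel, left outlet only
        "unit2": ("right", ["right_outlet"]),  # right channel, right outlet only
        "unit3": ("bass",  ["lower_outlet"]),  # optional bass/center channel
    },
    "landscape": {
        "unit1": ("side_a", ["upper_outlet"]),  # one-side channel via the top
        "unit2": ("side_a", ["upper_outlet"]),
        "unit3": ("side_b", ["lower_outlet"]),  # other-side channel via the bottom
    },
}

def route(orientation):
    """Return the per-unit (channel, outlets) assignment for an orientation."""
    return ROUTING[orientation]
```

In portrait, units 1 and 2 sound only from the side outlets, avoiding crosstalk at the top; in landscape, their output leaves only through the upper outlet while unit 3 supplies the opposite channel from the bottom.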

With reference to the first aspect, in a possible implementation manner, when the terminal is used for playing stereo on a vertical screen, the processing unit further controls the third sound output unit to play a bass channel.

Playing a bass channel through the third sound emitting unit enhances the bass effect and improves sound quality, giving a better user experience.

With reference to the first aspect, in a possible implementation manner, when the terminal is used for a call, the processing unit controls the playback mode of the sound emitting units according to the call mode.

With reference to the first aspect, in a possible implementation manner, the call mode includes an earpiece mode and a speakerphone mode. When the terminal is in earpiece mode, the processing unit controls the first and/or second sound emitting units to play mono voice, emitting sound only from the upper sound outlet hole; when the terminal is in speakerphone mode, the processing unit controls at least one of the first, second and third sound emitting units to play mono voice.

When the terminal is used for a call in earpiece mode, the first and/or second sound emitting units play mono voice and emit sound only from the upper sound outlet hole; they thereby also serve the role of the receiver (earpiece) on existing terminals, and the privacy of the call content is preserved.

When the terminal is used for a call in speakerphone mode, several of the first, second and third sound emitting units can play the voice, so that if one sound emitting unit is covered by an obstruction, the others can still emit sound.

Optionally, which sound emitting units emit sound in speakerphone mode can be preset or selected by the user, so that if one sound emitting unit is damaged, sound can still be emitted through the others.
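A minimal sketch of the call-mode selection described above (function and unit names are illustrative assumptions):

```python
def call_routing(mode, preferred_units=None):
    """Return the units playing mono voice and the outlets they use."""
    if mode == "earpiece":
        # Units 1 and/or 2 act as the earpiece; sound leaves only through the
        # upper outlet, preserving call privacy.
        return {"units": ["unit1", "unit2"], "outlets": ["upper_outlet"]}
    if mode == "speakerphone":
        # Any subset of the three units may play, so sound survives one unit
        # being covered or damaged; the subset can be preset or user-selected.
        units = preferred_units or ["unit1", "unit2", "unit3"]
        return {"units": units, "outlets": ["any_open"]}
    raise ValueError(f"unknown call mode: {mode}")
```

Passing a user preference, e.g. `call_routing("speakerphone", ["unit3"])`, models the optional fallback to other units when one is damaged.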

With reference to the first aspect, in a possible implementation manner, the terminal further includes a control component, configured to control the open and closed states of the sound guide channels between the sound emitting units and the sound outlet holes.

With reference to the first aspect, in a possible implementation manner, control components are disposed on all sound guide channels through which the first and second sound emitting units communicate with the outside of the terminal.

With control components arranged on every sound guide channel through which the first and second sound emitting units communicate with the outside of the terminal, the open/closed state of each sound guide channel can be controlled independently, increasing the number of possible sound output paths.

With reference to the first aspect, in a possible implementation manner, control components are disposed on the sound guide channel between the first sound emitting unit and the left sound outlet hole and on the sound guide channel between the second sound emitting unit and the right sound outlet hole.

That is, control components may be arranged on only part of the sound guide channels, with the channels from the first and second sound emitting units to the upper sound outlet hole kept in a normally open state.

With reference to the first aspect, in a possible implementation manner, the control assembly includes a moving part and a fixed part, the moving part having a through hole and the fixed part having a through hole. The processing unit inputs a control signal to the control assembly to move the moving part relative to the fixed part, so that the two through holes are correspondingly either aligned (channel open) or staggered (channel closed).

With reference to the first aspect, in a possible implementation manner, the control assembly further includes a coil, the moving part is a magnet, and the coil receives an electrical signal input by the processing unit.

With reference to the first aspect, in a possible implementation manner, the terminal further includes a detection unit, configured to acquire usage state information of the terminal and output it to the processing unit; the processing unit is configured to determine the usage scenario of the terminal according to the usage state information, wherein the usage scenarios include horizontal/vertical-screen stereo playback scenarios and call scenarios.

Optionally, the detection unit includes sensors such as a gyroscope, an acceleration sensor, an angular velocity sensor, or a touch sensor.
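A sketch, under stated assumptions, of inferring the usage scenario from such sensor data: here a 3-axis accelerometer reading (gravity vector) plus call-state flags. The axis convention (y = the device's long axis) and the comparison rule are illustrative, not taken from the application.

```python
def usage_scene(accel_xyz, in_call=False, near_ear=False):
    """Map raw usage state information to one of the four scenarios."""
    ax, ay, az = accel_xyz
    if in_call:
        # A proximity reading (e.g. from a proximity/touch sensor) picks
        # between the earpiece and speakerphone call scenarios.
        return "earpiece_call" if near_ear else "speakerphone_call"
    # Gravity lying mostly along the long (y) axis means the screen is upright.
    return "portrait_stereo" if abs(ay) >= abs(ax) else "landscape_stereo"
```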

With reference to the first aspect, in a possible implementation manner, the first sound emitting unit and the second sound emitting unit are disposed at an upper portion of the terminal, and the third sound emitting unit is disposed at a lower portion of the terminal.

Arranging the first and second sound emitting units at the upper portion of the terminal keeps them away from the region a user typically grips, preventing the sound outlet holes from being blocked by the user's hand.

With reference to the first aspect, in a possible implementation manner, the first sound emitting unit and the second sound emitting unit are respectively disposed at left and right sides of the terminal, or the first sound emitting unit and the second sound emitting unit are disposed in the middle of the terminal.

Arranging the first and second sound emitting units on the left and right sides of the terminal respectively shortens the sound guide channels from the units to the sound outlet holes.

With reference to the first aspect, in a possible implementation manner, the first sound emitting unit, the second sound emitting unit, and the third sound emitting unit are all speakers, or the first sound emitting unit and the second sound emitting unit are earpieces, and the third sound emitting unit is a speaker.

In a second aspect, a method for realizing stereo output is provided, applied to a terminal, where the terminal includes a first sound emitting unit, a second sound emitting unit and a third sound emitting unit, and a left sound outlet hole, a right sound outlet hole, an upper sound outlet hole and a lower sound outlet hole disposed on the four sides of the terminal. The first sound emitting unit communicates with the left sound outlet hole and the upper sound outlet hole; the second sound emitting unit communicates with the right sound outlet hole and the upper sound outlet hole; the third sound emitting unit communicates with the lower sound outlet hole. The method includes: when the terminal is used for playing stereo on a vertical screen, controlling the first and second sound emitting units to play different side channels, sounding from the left and right sound outlet holes; when the terminal is used for playing stereo on a horizontal screen, controlling the first and/or second sound emitting units to play the channel of one side, sounding from at least one of the left, right and upper sound outlet holes, and controlling the third sound emitting unit to play the channel of the other side.

The method for realizing stereo output is applied to a terminal that includes three sound emitting units, with the first and second sound emitting units each communicating with sound outlet holes in two directions of the terminal; by controlling the units' playback and the sounding positions, stereo output can be realized in both horizontal-screen and vertical-screen use.

With reference to the second aspect, in a possible implementation manner, when the terminal is used for playing stereo on a vertical screen, the first and second sound emitting units are controlled to play different side channels, where the first sound emitting unit emits sound only from the left sound outlet hole and the second sound emitting unit only from the right sound outlet hole.

With reference to the second aspect, in a possible implementation manner, when the terminal is used for playing stereo on a vertical screen, the method further includes: controlling the third sound emitting unit to play a bass channel.

With reference to the second aspect, in a possible implementation manner, the method further includes: when the terminal is used for a call, controlling the playback mode of the sound emitting units according to the call mode.

With reference to the second aspect, in a possible implementation manner, the call mode includes an earpiece mode and a speakerphone mode. When the terminal is in earpiece mode, the first and/or second sound emitting units are controlled to play mono voice, with only the upper sound outlet hole emitting sound; when the terminal is in speakerphone mode, at least one of the first, second and third sound emitting units is controlled to play mono voice.

With reference to the second aspect, in a possible implementation manner, the terminal further includes a control component, where the control component is configured to control an opening and closing state of the sound guide channel between the sound output unit and the sound output hole.

With reference to the second aspect, in a possible implementation manner, the control component is disposed on all sound guide channels where the first sound output unit and the second sound output unit communicate with the outside of the terminal.

With reference to the second aspect, in a possible implementation manner, the control component is disposed on the sound guide channel between the first sound emitting unit and the left sound emitting hole, and the sound guide channel between the second sound emitting unit and the right sound emitting hole.

With reference to the second aspect, in a possible implementation manner, the control assembly includes a moving part with a through hole and a fixed part with a through hole; the control assembly receives a control signal that moves the moving part relative to the fixed part, correspondingly aligning or staggering the two through holes.

With reference to the second aspect, in a possible implementation manner, the control assembly further includes a coil and the moving part is a magnet, and the method further includes: inputting an electrical signal to the coil.

With reference to the second aspect, in a possible implementation manner, the method further includes: acquiring the use state information of the terminal; and determining the use scene of the terminal according to the use state information, wherein the use scene comprises a horizontal/vertical stereo playing scene and a call scene.

With reference to the second aspect, in a possible implementation manner, the first sound emitting unit and the second sound emitting unit are disposed at an upper portion of the terminal, and the third sound emitting unit is disposed at a lower portion of the terminal.

With reference to the second aspect, in a possible implementation manner, the first sound emitting unit and the second sound emitting unit are respectively disposed at left and right sides of the terminal, or the first sound emitting unit and the second sound emitting unit are disposed in the middle of the terminal.

With reference to the second aspect, in a possible implementation manner, the first sound emitting unit, the second sound emitting unit, and the third sound emitting unit are all speakers, or the first sound emitting unit and the second sound emitting unit are earpieces, and the third sound emitting unit is a speaker.

In a third aspect, a terminal is provided, comprising a plurality of sound emitting units and a plurality of sound outlet holes distributed on the four sides of the terminal, the sound emitting units corresponding one-to-one with the sound outlet holes; the terminal further comprises a processing unit. When the terminal is used for playing stereo on a vertical screen, the processing unit controls the sound emitting units corresponding to the sound outlet holes on the left and right sides of the terminal to play the left and right channels respectively, and controls the sound emitting units corresponding to the sound outlet holes at the upper and lower ends to play other channels. When the terminal is used for playing stereo on a horizontal screen, the processing unit controls the sound emitting units corresponding to the sound outlet holes at the upper and lower ends to play the left and right channels respectively, and controls the sound emitting units corresponding to the sound outlet holes on the left and right sides to play other channels.

The terminal in this embodiment comprises a plurality of sound emitting units and a plurality of sound outlet holes, with each sound emitting unit able to work independently; this provides multi-channel stereo playback, enhances the immersion of audio and video playback, and improves the user experience.

With reference to the third aspect, in a possible implementation manner, when the terminal is used for playing stereo on a vertical screen, the processing unit controls the sound output units corresponding to the sound output holes at the upper and lower ends of the terminal to play a bass channel, or controls the sound output units corresponding to the sound output holes at the upper and lower ends of the terminal to play an upper channel and a lower channel, respectively.

With reference to the third aspect, in a possible implementation manner, when the terminal is used for playing stereo on a landscape screen, the processing unit controls the sound output units corresponding to the sound output holes on the left and right sides of the terminal to play an upper sound channel and a lower sound channel, respectively, or controls the sound output units corresponding to the sound output holes on the left and right sides of the terminal to play a bass sound channel.
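The third aspect's one-unit-per-side layout can be sketched as follows (hole names and the particular left/right choice in landscape are illustrative assumptions): rotating between portrait and landscape remaps the left and right channels onto different pairs of outlets, while the remaining pair plays auxiliary channels (e.g. bass, or upper/lower channels).

```python
def four_unit_assignment(orientation, aux="bass"):
    """Map each of the four sound outlet holes to a channel."""
    if orientation == "portrait":
        return {"left_hole": "left", "right_hole": "right",
                "upper_hole": aux, "lower_hole": aux}
    if orientation == "landscape":
        # Which end becomes "left" depends on the rotation direction; one
        # choice is shown here.
        return {"upper_hole": "left", "lower_hole": "right",
                "left_hole": aux, "right_hole": aux}
    raise ValueError(f"unknown orientation: {orientation}")
```

Calling with `aux="upper"`/`aux="lower"` variants would model the alternative of playing distinct upper and lower channels instead of bass.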

With reference to the third aspect, in a possible implementation manner, the terminal further includes a detection unit, where the detection unit is configured to acquire use state information of the terminal and output the acquired use state information to the processing unit; and the processing unit is configured to determine the playing mode of the sound output units according to the use state information.
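
The orientation-driven routing described above can be sketched in code. This is only an illustrative model, not the terminal's actual firmware: the names `Orientation` and `route_channels`, and the choice of a bass channel for the non-stereo edges, are assumptions for the sake of the example.

```python
from enum import Enum

class Orientation(Enum):
    PORTRAIT = "portrait"    # vertical-screen use
    LANDSCAPE = "landscape"  # horizontal-screen use

def route_channels(orientation):
    """Map each sound output unit (named by the edge it sits on) to the
    channel it plays in the given orientation."""
    if orientation is Orientation.PORTRAIT:
        # Left/right edge units carry the stereo pair; the upper and
        # lower units carry the other channels (e.g. a bass channel).
        return {"left": "L", "right": "R", "top": "bass", "bottom": "bass"}
    # In landscape, the upper and lower edge units become the stereo
    # pair (which end is "left" depends on the rotation direction).
    return {"top": "L", "bottom": "R", "left": "bass", "right": "bass"}

print(route_channels(Orientation.PORTRAIT)["left"])   # L
print(route_channels(Orientation.LANDSCAPE)["top"])   # L
```

In a real device the detection unit's use state information would select the `Orientation` value, and the processing unit would apply the resulting mapping when driving the sound output units.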

With reference to the third aspect, in a possible implementation manner, the number of the sound output units is four.

In a fourth aspect, a computer-readable storage medium is provided, including instructions that, when run on an electronic device, cause the electronic device to perform the method provided in any implementation of the second aspect.

In a fifth aspect, a computer program product containing instructions is provided, which, when run on an electronic device, causes the electronic device to implement the method provided in any one of the second aspects.

Drawings

Fig. 1 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present disclosure;

fig. 2 is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application;

FIG. 3 is a schematic block diagram of a prior art electronic device;

FIG. 4 is a schematic exploded view of an existing electronic device;

fig. 5 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;

FIG. 6 is a schematic diagram of the sound emission of the sound output unit in a vertical-screen audio/video playing scenario;

FIG. 7 is a schematic diagram of the sound emission of the sound output unit in a horizontal-screen audio/video playing scenario;

FIG. 8 is a schematic diagram of the sound emission of the sound output unit in a vertical-screen call scenario;

FIG. 9 is a schematic diagram of a control assembly provided by an embodiment of the present application;

FIG. 10 is a schematic block diagram of a control assembly provided by an embodiment of the present application;

fig. 11 is a schematic structural diagram of another terminal device provided in an embodiment of the present application;

fig. 12 is a schematic cross-sectional view of a sound emitting unit provided in an embodiment of the present application;

fig. 13 is a schematic diagram of operating logic for controlling playing of a sound output unit according to an embodiment of the present application;

fig. 14 is a schematic structural diagram of another terminal device provided in an embodiment of the present application;

fig. 15 is a schematic diagram of operating logic for controlling playing of a sound output unit according to an embodiment of the present application;

fig. 16 is a schematic block diagram of a terminal device provided in an embodiment of the present application;

fig. 17 is a block diagram of audio-related hardware provided in an embodiment of the present application;

fig. 18 is a block diagram of audio-related software provided in an embodiment of the present application.

Reference numerals:

210-a housing; 211-middle frame; 212-rear shell; 220-a display screen; 230-a handset; 240-a speaker; 310-a first sound output unit; 311-left sound outlet; 312-left leading tone channel; 313-a first upguide channel; 320-a second sound output unit; 321-right sound outlet; 322-right leading tone channel; 323-second upguide channel; 330-third sound output unit; 331-lower sound outlet; 332-lower leading tone channel; 341-upper sound outlet; 342-an up-lead channel; 350-a control component; 351-a signal receiving module; 352-a movable member; 3521-the mover body; 3522-a movable through hole; 353-a fixing piece; 3531-a fastener body; 3532-fixed through holes; 301-a magnet; 302-a voice coil; 303-a diaphragm; 304-diaphragm edge ring; 305-a magnetic bowl; 306-a basin stand; 307-a housing; 309-a rear sound cavity; 610-a detection unit; 620-a processing unit; 630-channel open/close control unit; 640-a sound output unit.

Detailed Description

The technical solution in the present application will be described below with reference to the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments.

It should be noted that in the description of the embodiments of the present application, "/" indicates an "or" relationship unless otherwise stated; for example, A/B may indicate A or B. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.

In the following, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature.

Furthermore, in the present application, directional terms such as "upper", "lower", "left", "right", "horizontal", and "vertical" are defined with respect to the orientation or position of a component as schematically placed in the drawings. It should be understood that these directional terms are relative concepts used for description and clarity; they do not indicate or imply that the referenced device or component must have a particular orientation or be constructed and operated in a particular orientation, and they may change accordingly as the orientation of the component in the drawings changes. Therefore, they should not be construed as limiting the present application.

It should be noted that the same reference numerals are used to designate the same components or the same parts in the embodiments of the present application.

For convenience of understanding, technical terms referred to in the embodiments of the present application are explained and described below.

Stereo (stereo sound) refers to sound having a stereoscopic impression. A sound source has a definite spatial position, its sound arrives from a definite direction, and human hearing can distinguish the orientation of a sound source. In particular, when a plurality of sound sources sound simultaneously, people can perceive by hearing how the sound sources are distributed in space. When a person directly hears sounds in such a stereo space, the person can perceive not only the loudness, pitch, and timbre of the sounds but also their orientation and layering. Sound that a person hears directly and that has such spatial distribution characteristics as orientation and layering is called natural stereo sound.

All sounds emitted in nature are stereophonic. However, if a stereophonic sound is recorded, amplified, and then reproduced such that all sounds are emitted from a single loudspeaker, the reproduced sound (compared with the original sound source) is no longer stereophonic; such reproduced sound is called monophonic sound. In this case, because all sounds are emitted from the same loudspeaker, the original sense of space (particularly the spatial distribution of the sound sources) is lost. If the entire system, from recording to reproduction, can restore the original sense of space to some extent, reproduced sound that retains spatial distribution characteristics such as orientation and layering is called stereo sound in audio technology. "Stereo" in the embodiments of the present application refers to sound with a stereoscopic effect reproduced by such a sound reproduction system.

A person can distinguish the direction of sound because the person has two ears: the direction of a sound source is determined from the path difference, time difference, intensity difference, frequency difference, and the like of the sound reaching the two ears. If the spatial positions of different sound sources can be captured during recording and then replayed through at least two independent channels (two loudspeakers), a listener hears the recording as if present at the scene, directly perceiving the sound sources from all directions.
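
The time difference mentioned above can be illustrated numerically. The following is a minimal sketch assuming the common Woodworth spherical-head approximation ITD ≈ (r/c)·(sin θ + θ); the head radius and the formula itself are textbook assumptions, not values from this disclosure.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference for a distant source at the
    given azimuth, using the spherical-head (Woodworth) model."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (math.sin(theta) + theta)

# A source directly ahead produces no delay; a source 90 degrees to one
# side produces roughly 0.66 ms, which the brain uses for localization.
print(itd_seconds(0))
print(round(itd_seconds(90) * 1e3, 2))  # 0.66
```

This is the scale of cue a stereo playback system tries to recreate by feeding the two loudspeakers independent channels.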

A channel (sound channel) refers to an audio signal collected or played back at a particular spatial position when sound is recorded or played, so the number of channels equals the number of sound sources during recording or the number of corresponding loudspeakers during playback. Generally, voice uses only one channel and is mainly used for communication such as telephone calls and human-machine voice interaction. Audio in music or video can be monophonic (mono), two-channel (i.e., left and right channels), or multi-channel (more than two channels).

Stereo consists of two or more channels; the output of two or more channels is stereo, also called surround sound. Taking two-channel (left and right) reproduction as an example, two loudspeakers at an angle to each other are placed in space, and each loudspeaker is fed the signal of a single channel. The signal of each channel is processed at recording time: the left channel is the sound output produced by the electronic device simulating the auditory range of a person's left ear, and the right channel is the sound output produced by simulating the auditory range of the right ear. The left and right channels are emitted through the left and right loudspeakers respectively: the left loudspeaker plays the left-channel content on the same side as the listener's left ear, and the right loudspeaker plays the right-channel content on the same side as the right ear, producing stereo effects such as sound moving from left to right or from right to left. In multi-channel sound field modes such as 3.1, 4.1, 5.1, 6.1, and 7.1, the left and right channels can be further divided into a front left channel, a middle left channel, a rear left channel, a surround left channel, and the like, where the ".1" channel is a subwoofer channel.
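
The "X.1" naming convention above can be made concrete with a small sketch: X full-range channels plus one subwoofer channel. The helper name below is illustrative, not from any audio API.

```python
def channel_count(layout):
    """Total number of loudspeaker feeds for a layout string like "5.1":
    the main count plus the subwoofer (".1") channel if present."""
    main, _, lfe = layout.partition(".")
    return int(main) + (int(lfe) if lfe else 0)

for layout in ("2.0", "3.1", "5.1", "7.1"):
    print(layout, channel_count(layout))  # e.g. "5.1" -> 6 feeds
```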

The electronic devices referred to in the embodiments of the present application may include handheld devices, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem. The electronic device may be, for example, a cellular phone, a smart phone, a personal digital assistant (PDA), a tablet computer, a portable computer, a laptop computer, a smart watch, a smart bracelet, or a vehicle-mounted computer. The embodiments of the present application do not specifically limit the specific form of the electronic device; in some embodiments, the electronic device may be a terminal or a terminal device.

Fig. 1 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.

It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.

Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.

The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.

A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.

In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.

The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.

The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.

The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.

The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.

MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.

The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.

The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It may further be used to connect an earphone and play audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices.

It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.

The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.

The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.

The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.

The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.

The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.

The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.

The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.

In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160, so that electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).

The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.

The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.

The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.

The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.

The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.

The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
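
The frequency-bin analysis mentioned above can be illustrated with a direct discrete Fourier transform. This is a sketch of the general technique only, not the device's DSP code; `bin_energy` is a name introduced here for the example.

```python
import cmath
import math

def bin_energy(samples, k):
    """Energy of DFT bin k of a sample frame, computed as a direct
    (unoptimized) DFT for clarity."""
    n = len(samples)
    coeff = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
    return abs(coeff) ** 2

# A pure tone with two cycles per 8-sample frame concentrates its
# energy in bin 2 rather than bin 1.
tone = [math.cos(2 * math.pi * 2 * t / 8) for t in range(8)]
print(bin_energy(tone, 2) > bin_energy(tone, 1))  # True
```

A real DSP would use an FFT and fixed-point arithmetic, but the quantity computed per bin is the same.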

Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.

The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.

The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.

The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.

The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.

The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.

The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic device 100 may listen to music, audio in voice or video, or listen to a hands-free call, etc. through the speaker 170A.

The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person. The speaker 170A and the receiver 170B may be collectively referred to as an "output unit".

The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal to the microphone 170C by speaking with the mouth close to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.

The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.

The pressure sensor 180A is used for sensing a pressure signal and converting it into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. Pressure sensors 180A come in a wide variety, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation via the pressure sensor 180A, and may also calculate the touched position from its detection signal. In some embodiments, touch operations applied to the same position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the short-message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short-message application icon, an instruction for creating a new short message is executed.
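The threshold logic above can be sketched as follows. This is an illustrative sketch only: the threshold value and the instruction names (`view_sms`, `new_sms`) are hypothetical placeholders for the "first pressure threshold" and the two instructions described in the text, not values from any real device.

```python
def instruction_for_touch(intensity: float, first_pressure_threshold: float = 0.5) -> str:
    """Map a touch on the short-message app icon to an instruction by intensity.

    Hypothetical sketch: the threshold (0.5, normalized) and instruction
    names are assumptions for illustration.
    """
    if intensity < first_pressure_threshold:
        return "view_sms"  # light press: execute the view-message instruction
    return "new_sms"       # press at or above the threshold: create a new message
```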

The gyro sensor 180B, also called an angular velocity sensor, may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake: for example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate according to the shake angle, and lets the lens counteract the shake of the electronic device 100 through reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation and somatosensory game scenes; for example, the gyroscope can fully track the displacement of the player's hand to achieve various game operation effects, such as switching between landscape and portrait screens, racing games, and so on.

The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
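As one way such an altitude calculation could work, the standard international barometric formula converts a pressure reading into an altitude estimate. This is a sketch of the principle, not the device's actual implementation; the sea-level reference pressure of 1013.25 hPa is the standard-atmosphere assumption.

```python
def altitude_from_pressure(p_hpa: float, p0_hpa: float = 1013.25) -> float:
    """Estimate altitude in meters from barometric pressure in hPa.

    Uses the international barometric formula with standard-atmosphere
    constants; a sketch of how readings from a sensor like 180C could
    aid positioning, not the device's actual algorithm.
    """
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))
```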

The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. Features such as automatic unlocking upon flipping open can then be set according to the detected open or closed state of the holster or flip cover.

The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. It can also be used to recognize the attitude of the electronic device, for applications such as landscape/portrait switching and pedometers.
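The landscape/portrait recognition mentioned above can be sketched from a single stationary accelerometer reading: whichever device axis the gravity vector dominates indicates the orientation. This is a minimal sketch assuming a common axis convention (y along the device's length, x along its width); real systems add thresholds and hysteresis to avoid flapping near 45°.

```python
def detect_orientation(ax: float, ay: float, az: float) -> str:
    """Classify portrait vs landscape from a 3-axis accelerometer reading (m/s^2).

    Assumes y runs along the device's length and x along its width;
    both the convention and the simple comparison are illustrative.
    """
    if abs(ay) >= abs(ax):
        return "portrait"   # gravity mostly along the long axis: device upright
    return "landscape"      # gravity mostly along the short axis: device sideways
```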

A distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, such as in a shooting scene, the electronic device 100 may use the distance sensor 180F to range for fast focusing.

The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light to the outside through the light emitting diode and detects infrared light reflected from nearby objects using the photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can utilize the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
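The screen-off-during-call behavior described above reduces to a simple threshold test on the reflected-light reading. A minimal sketch, assuming a hypothetical normalized light reading and threshold; real drivers debounce the signal and work with raw ADC counts:

```python
def object_near(reflected_light: float, sufficient: float = 0.3) -> bool:
    """True if enough reflected infrared light is detected to infer a nearby
    object; the normalized 'sufficient' threshold is a hypothetical value."""
    return reflected_light >= sufficient

def screen_state(in_call: bool, reflected_light: float) -> str:
    """Turn the screen off only when a call is active and the device is held
    close to the ear, as the proximity-sensor description suggests."""
    if in_call and object_near(reflected_light):
        return "off"  # device at the ear: screen off to save power
    return "on"
```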

The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.

The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can use the collected fingerprint characteristics to unlock with the fingerprint, access an application lock, take a photo with the fingerprint, answer an incoming call with the fingerprint, and so on.

The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold, to prevent low temperature from causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
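The three-threshold strategy above can be sketched as a simple decision ladder. The temperature values stand in for the text's unspecified "threshold", "another threshold", and "further threshold" and are hypothetical, as are the action names:

```python
def thermal_action(temp_c: float) -> str:
    """Pick a protection action from the reported temperature in Celsius.

    Thresholds (45, 0, -10) and action names are illustrative placeholders;
    the 'further threshold' must be checked before the milder one so the
    more severe action wins at very low temperatures.
    """
    if temp_c > 45.0:
        return "throttle_cpu"   # reduce performance of the nearby processor
    if temp_c < -10.0:
        return "boost_battery"  # boost battery output voltage to avoid shutdown
    if temp_c < 0.0:
        return "heat_battery"   # warm the battery to avoid abnormal shutdown
    return "normal"
```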

The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.

The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.

The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.

The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.

Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.

The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. The same SIM card interface 195 can hold multiple cards at the same time; the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from it.

The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of the electronic device 100.

Fig. 2 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, which are an application layer, an application framework layer, a system runtime library layer (including Android runtime (Android runtime) and system library), and a kernel layer from top to bottom.

The application layer (applications) may include a series of application packages. As shown in fig. 2, the application packages may include applications such as camera, gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, and short message. Applications are mainly concerned with the user interface and are usually written in the Java language by calling interfaces of the application framework layer.

An application framework layer (application framework) provides an Application Programming Interface (API) and a programming framework for applications of the application layer. The application framework layer includes a number of predefined functions. As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.

A window manager (window manager) is used to manage the window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.

Content providers (content providers) are used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.

A view system (view system) includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.

A telephone manager (telephony manager) is used to provide the communication functionality of the electronic device 100. Such as management of call status (including on, off, etc.).

A resource manager (resource manager) provides various resources, such as localized strings, icons, pictures, layout files, video files, etc., to an application.

The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short dwell without requiring user interaction, such as notifications of download completion or message alerts. The notification manager may also present notifications in the form of a chart or scroll-bar text in the top status bar of the system, such as notifications of applications running in the background, or notifications that appear on the screen as a dialog window. Examples include prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, and flashing an indicator light.

The system runtime library layer (libraries) can be divided into two parts, namely a system library and an Android runtime.

An Android runtime (Android runtime), i.e., the Android runtime environment, comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing the Android system.

The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.

The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.

The system library supports the application framework and is an important link connecting the application framework layer and the kernel layer. It can comprise a plurality of functional modules, for example: surface managers, media libraries, three-dimensional graphics processing libraries (e.g., OpenGL for embedded systems (OpenGL ES)), and 2D graphics engines (e.g., Skia Graphics Library (SGL)).

The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.

The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, and the like.

The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.

The 2D graphics engine is a drawing engine for 2D drawing.

The kernel layer is a layer between hardware and software, providing essential functions of the operating system such as file management, memory management, process management, and the network protocol stack. The kernel layer at least comprises a display driver, a camera driver, an audio driver, a sensor driver, a Bluetooth driver, and the like.

For convenience of understanding, the embodiments of the present application will be described taking an electronic device having the structure shown in fig. 1 and 2 as an example. Fig. 1 is only an example of an electronic device, and the present application may be applied to smart devices such as a mobile phone and a tablet computer, which is not limited in the present application. In the description of the embodiments of the present application, a mobile phone will be taken as an example for introduction.

With the continuous development of terminal technology, electronic devices such as mobile phones are becoming the main platform for streaming media and enjoying contents. In order to match the visual experience of clearer, brighter and larger screen, more and more mobile terminals, such as mobile phones, etc., add stereo output function to the mobile terminals to provide better listening experience to users.

Fig. 3 shows a schematic block diagram of a conventional electronic apparatus. As shown in fig. 3, the electronic device 200 includes a housing 210, a Display Panel (DP) 220, an earpiece (RCV) 230, a Speaker (SPK) 240, and the like.

The housing 210 is formed with an accommodating space for arranging various components of the electronic apparatus 200, and the housing 210 may also function to protect the electronic apparatus 200. The display 220, the earpiece 230, and the speaker 240 are disposed in the receiving space of the housing 210 and connected to the housing 210. The housing 210 may be made of metal, plastic, ceramic, or glass.

The display screen 220 is an example of the display screen 194 shown in fig. 1. The display screen 220 may be a liquid crystal display (LCD) screen, an organic light-emitting diode (OLED) display screen, or the like, wherein the OLED display screen may be a flexible or a rigid display screen. The display screen 220 may be a regular screen, a special-shaped screen, a folding screen, etc.; for example, the display screen 220 may rotate and fold freely to form an arc, a sphere, a cylinder, and so on. The display screen 220 may be disposed on the front surface of the electronic device 200, on the back surface, or on both the front and back surfaces. The front surface of the electronic device 200 may be understood as the side facing the user during use, and the back surface as the side facing away from the user.

The display screen 220 has a certain aspect ratio; for example, the ratio of length (L) to width (W) may be 4:3, 16:9, 16:10, 18.5:9, 20:9, 21:9, or 15:9, etc. The display screen 220 may display a video frame or a user-interaction interface in landscape or portrait mode. Taking a user in a standing posture as an example, in portrait display the width direction of the display screen 220 is substantially aligned with the line between the user's eyes (or ears), and in landscape display the length direction of the display screen 220 is substantially aligned with that line. Generally, the shape of the display screen 220 is similar to that of the entire electronic device 200, e.g., rectangular, with the length and width directions of the display screen 220 matching those of the electronic device 200. Therefore, in some embodiments, when the electronic device 200 is used vertically, i.e., the length direction of the electronic device 200 is the up-down direction of the page when viewed as in fig. 3, the display screen 220 is in portrait display, with the top of the electronic device 200 facing upward and the bottom facing downward; when the electronic device 200 is used in a landscape orientation, i.e., the length direction of the electronic device 200 is the left-right direction of the page, the display screen 220 is in landscape display, with the top of the electronic device 200 facing left and the bottom facing right, or the top facing right and the bottom facing left.
The landscape and portrait display modes of the display screen 220 can be adjusted automatically according to the usage posture and placement direction of the electronic device 200, so that the content displayed on the display screen 220 is convenient for the user to watch or operate no matter how the electronic device 200 is used. When the display screen 220 changes from portrait to landscape, the electronic device 200 may rotate left or right; when the electronic device 200 switches between its two landscape postures, the display screen 220 may also directly rotate 180°, so that the picture of the display screen 220 stays upright at all times according to the movement posture of the electronic device 200.

The audiovisual content displayed on the display screen 220 comes in landscape and portrait formats: landscape-format content is mainly suited to landscape display, its picture size adapted to the landscape ratio of the display screen 220, while portrait-format content is mainly suited to portrait display, its picture size adapted to the portrait ratio of the display screen 220. If portrait-format video is displayed alone on a landscape screen, or landscape-format video is displayed alone on a portrait screen, black bars may appear on the screen. In a split-screen multitasking scene, however, a user can watch a video in an upper window while chatting in a lower window with the device in the portrait state, and no black bars appear even though a landscape-format video is played on a portrait screen.

The earpiece 230 is an example of the receiver 170B shown in fig. 1. The earpiece 230 is a device that converts an audio electrical signal into a sound signal without sound leakage, and is generally used for receiving a call or a voice message. When the earpiece 230 is used to play sound, typically only the user himself can hear it; the terminal device may then be said to be in earpiece mode. The earpiece 230 is generally disposed on an upper portion, e.g., the front top, of the electronic device 200, and is positioned adjacent to the ear during use. For ease of understanding, in the embodiments of the present application, the two ends of the electronic device in the length direction are referred to as upper (top) and lower (bottom), the two ends in the width direction as left and right, and the surface in the thickness direction as the side surface.

The speaker 240 is an example of the speaker 170A shown in fig. 1. The speaker 240 is a transducer that converts an electrical signal into an acoustic signal and can transmit sound over a distance; it is generally used for hands-free calling, playing music, playing audio in videos, and the like. When the speaker 240 is used to play sound, the sound can be transmitted far enough to be heard by others; the terminal device may then be said to be in loudspeaker (play-out) mode. The speaker 240 is generally disposed at the bottom of the electronic device 200 with its sound outlet on the side of the electronic device 200, or in the middle of the electronic device 200 with its sound outlet on the back.

The specific structures of the earpiece 230 and the speaker 240 are substantially the same, and both generate vibration by the acting force of the voice coil and the magnet, so as to drive the diaphragm to vibrate and generate sound. The earpiece 230 and the speaker 240 may be collectively referred to as an output unit in the embodiments of the present application.

Fig. 4 shows a schematic exploded view of the electronic device 200 in fig. 3. As shown in fig. 4, the housing 210 includes a middle frame 211 and a rear case 212, and the middle frame 211 is located between the display screen 220 and the rear case 212. The earpiece 230 and speaker 240 may be secured to the center frame 211. The middle frame 211 and/or the rear case 212 are provided with a sound guide channel (not shown in the figure) for guiding sound out of the electronic device 200. The middle frame 211 may also be provided with other components such as a Printed Circuit Board (PCB) and a Central Processing Unit (CPU), which are not shown and described herein.

As shown in figs. 3 and 4, the speaker mode adopted by the electronic device 200 is a top-bottom type, that is, one sound emitting unit is disposed on each of the upper and lower portions of the electronic device 200 (i.e., its top and bottom). Thus, when the electronic device 200 is used in landscape orientation, the earpiece 230 can play the left-channel content on the same side as the listener's left ear, the speaker 240 can play the right-channel content on the same side as the listener's right ear, and the electronic device 200 can realize left-right stereo output. However, in the top-bottom speaker mode, stereo playback can only be achieved in the landscape playback scenario; in the portrait playback scenario, because the receiver 230 and the speaker 240 do not distinguish between left and right channels, the played sound content is the same, or only the speaker 240 sounds, so stereo playback cannot be achieved. At present, more and more users use short-video and live-broadcast applications, and these platforms adopt a portrait format to increase traffic and immersion. Accordingly, portrait playback scenes are becoming more numerous, and users demand improved sound quality in portrait playback.

The embodiment of the application provides a terminal device, which can realize stereo output in both a horizontal screen playing scene and a vertical screen playing scene, solves the problem that the vertical screen playing scene does not have stereo at present, and improves user experience.

Fig. 5 shows a schematic structural diagram of a terminal device provided in an embodiment of the present application, and fig. 5 is a front plan view of the terminal device. As shown in fig. 5, the terminal device 300 includes a first sound emitting unit 310, a second sound emitting unit 320, and a third sound emitting unit 330, the first sound emitting unit 310 and the second sound emitting unit 320 are arranged in the width direction of the terminal device 300 and are located at one end of the terminal device 300 in the length direction, and the third sound emitting unit 330 is located at the other end of the terminal device 300 in the length direction. Illustratively, in fig. 5, the terminal device 300 is vertically placed, the length direction of the terminal device 300 is the up-down direction of the paper, and the width direction is the left-right direction of the paper. The first sound emitting unit 310 and the second sound emitting unit 320 are respectively disposed on the left and right sides of the terminal device 300, and are close to the top of the terminal device 300, for example, the first sound emitting unit 310 is located at the upper left corner of the terminal device 300, and the second sound emitting unit 320 is located at the upper right corner of the terminal device 300. The third sound emitting unit 330 is disposed at the bottom of the terminal device 300, for example, at the lower left corner, the lower right corner, the middle bottom, etc. of the terminal device 300.

The first sound emitting unit 310, the second sound emitting unit 320, and the third sound emitting unit 330 may be the same or different, and the embodiment of the present application is not particularly limited. For example, the first sound emitting unit 310, the second sound emitting unit 320, and the third sound emitting unit 330 are all speakers, or the first sound emitting unit 310 and the second sound emitting unit 320 are earphones, and the third sound emitting unit 330 is a speaker. It should be understood that the positions of the first sound emitting unit 310, the second sound emitting unit 320 and the third sound emitting unit 330 in the drawing are merely exemplary. Preferably, the first sound emitting unit 310 and the second sound emitting unit 320 are at the same distance from the top end of the terminal device 300, that is, the first sound emitting unit 310 and the second sound emitting unit 320 are flush in the width direction of the terminal device 300.

Four sides of the terminal device 300 are respectively provided with a left sound emitting hole 311, a right sound emitting hole 321, a lower sound emitting hole 331 and an upper sound emitting hole 341. The left sound emitting hole 311 and the right sound emitting hole 321 are respectively arranged on the two sides of the terminal device 300 that run along the length direction, with the left sound emitting hole 311 on the left side of the terminal device 300 and the right sound emitting hole 321 on the right side. The lower sound emitting hole 331 and the upper sound emitting hole 341 are respectively provided at the two ends of the terminal device 300 in the length direction, with the lower sound emitting hole 331 at the lower end of the terminal device 300 and the upper sound emitting hole 341 at the upper end. Illustratively, in fig. 5, the terminal device 300 is vertically placed, the left sound emitting hole 311 and the right sound emitting hole 321 are respectively disposed at the left and right sides of the terminal device 300, and the upper sound emitting hole 341 and the lower sound emitting hole 331 are respectively disposed at the upper and lower ends. It should be understood that the left sound emitting hole in the embodiment of the present application is not limited to one: the sound emitting holes disposed on the left side of the terminal device are all referred to as left sound emitting holes, and the right, lower and upper sound emitting holes are similar and will not be described in detail.

The sound outlet of the terminal 300 may be disposed at the front edge of the terminal 300, may be disposed at the side of the terminal 300, or may be disposed at the back edge of the terminal 300. The position, size, etc. of the sound holes in the drawings are merely exemplary. Preferably, the left sound emitting hole 311 and the right sound emitting hole 321 are the same distance from the top end of the terminal device 300, i.e., the left sound emitting hole 311 and the right sound emitting hole 321 are level in the width direction of the terminal device 300.

A left sound guiding channel 312 is disposed between the first sound emitting unit 310 and the left sound emitting hole 311, and a first upper sound guiding channel 313 is disposed between the first sound emitting unit 310 and the upper sound emitting hole 341. A right sound guiding channel 322 is disposed between the second sound emitting unit 320 and the right sound emitting hole 321, and a second upper sound guiding channel 323 is disposed between the second sound emitting unit 320 and the upper sound emitting hole 341. A lower sound guiding channel 332 is disposed between the third sound emitting unit 330 and the lower sound emitting hole 331. As seen in fig. 5, the first sound emitting unit 310 is respectively connected to the sound emitting holes at the left side and the upper end of the terminal 300, namely, a left sound emitting hole 311 and an upper sound emitting hole 341, through a left sound guiding channel 312 and a first upper sound guiding channel 313; the second sound emitting unit 320 is respectively communicated to the sound emitting holes at the right side and the upper end of the terminal device 300, namely, a right sound emitting hole 321 and an upper sound emitting hole 341, through a right sound guiding channel 322 and a second upper sound guiding channel 323; the third sound emitting unit 330 is connected to a sound emitting hole at the lower end of the terminal device 300, i.e., a lower sound emitting hole 331, through a lower sound guiding passage 332. In other words, the first sound emitting unit 310 may emit sound from the left side and/or the upper end of the terminal device 300, and the second sound emitting unit 320 may emit sound from the right side and/or the upper end of the terminal device 300. 
In the embodiment of the present application, the first upper sound guiding channel 313 and the second upper sound guiding channel 323 both guide sound to the upper sound emitting hole 341; therefore, in some embodiments, the first upper sound guiding channel 313 and the second upper sound guiding channel 323 may be collectively referred to as upper sound guiding channels. In short, in the embodiment of the present application an upper sound guiding channel guides sound out in the upward direction of the terminal device, a lower sound guiding channel in the downward direction, a left sound guiding channel in the leftward direction, and a right sound guiding channel in the rightward direction. It should be understood that the left sound guiding channel in the embodiment of the present application is not limited to one: all sound guiding channels that lead sound out in the leftward direction of the terminal device are referred to as left sound guiding channels, and the upper, right and lower sound guiding channels are similar and will not be described again.

It should be understood that the shape and position of the sound guiding channels in the drawings are merely exemplary. In practical applications, a sound guiding channel may be formed by a sound guiding element itself, or may be enclosed by a sound guiding element together with other components in the terminal device; the embodiment of the present application is not specifically limited in this regard.

In the embodiment of the present application, each of the first sound emitting unit 310 and the second sound emitting unit 320 communicates with two sound emitting holes through two sound guiding channels, and either of the two sound guiding channels can be in an open or closed state, so that when the first sound emitting unit 310 or the second sound emitting unit 320 works, the sound is controlled to be emitted from at least one of the two sound emitting holes. This is described below in conjunction with usage scenarios of the terminal device.

It should be noted that the case in which both sound guiding channels of the first sound emitting unit 310/second sound emitting unit 320 are closed while the unit works, so that neither sound emitting hole emits sound, has no practical significance and is not discussed further in the present application, because the same effect is obtained by simply not driving the first sound emitting unit 310/second sound emitting unit 320: no sound is then emitted from either sound emitting hole regardless of whether the sound guiding channels are open or closed. It should be understood that whether a sound guiding channel guides sound, and whether a sound emitting hole outputs sound, depends on whether the corresponding sound emitting unit is working; when the opening or closing of a sound guiding channel is discussed in the embodiments of the present application, the description is given under the condition that the corresponding sound emitting unit is working. In addition, unless specifically described, a sound emitting unit whose playback is not mentioned can be understood as not working. When a sound emitting unit has only one sound guiding channel, that channel is opened when the unit works and closed when it does not.

Fig. 6 shows a sound emission diagram of the sound emission unit in a vertical screen audio/video playing scene. In the embodiment of the application, the vertical screen audio/video playing scene can be regarded as that the terminal device is in a vertical screen playing mode. As shown in fig. 6, when a user vertically uses the terminal device for audio/video playback, the first sound emitting unit 310 plays a left channel, and the second sound emitting unit 320 plays a right channel.

In one implementation, the left sound guiding channel 312 and the right sound guiding channel 322 are in an open state, and the first upper sound guiding channel 313 and the second upper sound guiding channel 323 are in a closed state. The first sound emitting unit 310 and the second sound emitting unit 320 play the left and right channels respectively and emit sound from the left sound emitting hole 311 and the right sound emitting hole 321 through the left and right sound guiding channels. Because the left and right channels do not emit sound from the upper sound emitting hole 341, crosstalk between the left channel and the right channel can be prevented. Stereo output can thus be realized when the user vertically uses the terminal device to play audio/video, improving user experience.

In another implementation, the first upper sound guiding channel 313 and the second upper sound guiding channel 323 are also in an open state. In addition to the sound emitted from the left and right sound emitting holes, the first upper sound guiding channel 313 and the second upper sound guiding channel 323 respectively guide the left channel and the right channel, which converge at the upper sound emitting hole 341, achieving the effect of mixed output of the left and right channels.

In short, when the terminal device is used for playing stereo on a vertical screen, the terminal device (specifically, the processing unit of the terminal device) may control the first sound emitting unit 310 and the second sound emitting unit 320 to play different side channels (for example, a left channel and a right channel) and emit sound from the left sound emitting hole 311 and the right sound emitting hole 321.

Optionally, when the first sound emitting unit 310 plays the left channel and the second sound emitting unit 320 plays the right channel, the third sound emitting unit 330 may or may not play sound. When the third sound emitting unit 330 plays sound, it can perform sound field and bass enhancement: for example, it can play a bass channel to increase the sense of impact, or play a center channel to enhance the energy intensity and saturation of the human voice so that the left and right channels are joined smoothly. When the third sound emitting unit 330 does not play sound, no sound is output from the lower sound guiding channel 332, which can reduce the power consumption of the terminal.
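The portrait-mode routing described above can be sketched as a small state table. The function name, dictionary layout, and the `enhance_bass` flag below are illustrative assumptions for this sketch, not part of the application:

```python
# Sketch of the portrait (vertical screen) stereo routing described above.
# All names are hypothetical; reference signs in comments match the figures.

def portrait_stereo_routing(enhance_bass=False):
    """Return, per sound guiding channel, whether it is open, and, per
    sound emitting unit, which channel it plays (None = not working)."""
    channels = {
        "left_guide": True,          # 312: open, leads left channel out of hole 311
        "right_guide": True,         # 322: open, leads right channel out of hole 321
        "first_upper_guide": False,  # 313: closed, prevents L/R crosstalk at hole 341
        "second_upper_guide": False, # 323: closed
        "lower_guide": enhance_bass, # 332: carries sound only if unit 330 works
    }
    units = {
        "first_unit": "left",    # 310 plays the left channel
        "second_unit": "right",  # 320 plays the right channel
        # 330 optionally plays a bass/center channel for sound-field enhancement
        "third_unit": "bass" if enhance_bass else None,
    }
    return channels, units

channels, units = portrait_stereo_routing()
assert units["first_unit"] == "left" and units["second_unit"] == "right"
assert not channels["first_upper_guide"]  # no sound from the upper hole 341
```

Setting `enhance_bass=True` corresponds to the optional case where the third sound emitting unit 330 also plays, at the cost of extra power consumption.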

Fig. 7 shows a sound emission diagram of the sound emitting unit in a landscape audio/video playing scene. In the embodiment of the present application, the landscape audio/video playing scene can be regarded as the terminal device being in a landscape playing mode. As shown in fig. 7, when the user uses the terminal device transversely for audio/video playback, the first sound emitting unit 310 and the second sound emitting unit 320 play one side channel of the two channels, and the third sound emitting unit 330 plays the other side channel. As shown in fig. 7 (a), when the top of the terminal device is rotated to the left so that the device lies horizontally, the first sound emitting unit 310 and the second sound emitting unit 320 play the left channel, and the third sound emitting unit 330 plays the right channel. Alternatively, as shown in fig. 7 (b), when the top of the terminal device is rotated to the right, the first sound emitting unit 310 and the second sound emitting unit 320 play the right channel, and the third sound emitting unit 330 plays the left channel. Therefore, whichever direction the top of the terminal device is rotated, the sound emitting units on the same side as the user's left ear play the left channel and the sound emitting unit on the same side as the user's right ear plays the right channel, so that audio/video played on a landscape screen is output in stereo. The sound emission of the units is described below taking the case shown in fig. 7 (a) as an example.

In one possible implementation manner, either one of the first sound emitting unit 310 and the second sound emitting unit 320 plays the left channel, and at least one of the two sound guiding channels connected to that unit is in an open state to lead out the left channel content. The third sound emitting unit 330 plays the right channel, and the lower sound guiding channel 332 leads out the right channel content. Taking the first sound emitting unit 310 playing the left channel as an example, at least one of the left sound guiding channel 312 and the first upper sound guiding channel 313 is in an open state to lead out the left channel content. By selecting only one of the first sound emitting unit 310 and the second sound emitting unit 320 to play the channel, stereo can be realized while the power consumption of the terminal device is reduced.

In another possible implementation manner, the first sound emitting unit 310 and the second sound emitting unit 320 are both configured to play the left channel, and at least one of the two sound guiding channels connected to each unit is in an open state; that is, at least one of the left sound guiding channel 312 and the first upper sound guiding channel 313 leads the left channel content out of the terminal device, and at least one of the right sound guiding channel 322 and the second upper sound guiding channel 323 leads the left channel content out of the terminal device. Specifically, how many sound guiding channels are opened can be selected according to the required volume. Illustratively, the left sound guiding channel 312 and the right sound guiding channel 322 are in an open state, and the first upper sound guiding channel 313 and the second upper sound guiding channel 323 are also in an open state. The first sound emitting unit 310 and the second sound emitting unit 320 play the left channel and emit its content from the left sound emitting hole 311, the right sound emitting hole 321, and the upper sound emitting hole 341. The third sound emitting unit 330 plays the right channel and emits its content from the lower sound emitting hole 331. When the user uses the terminal device transversely to play audio/video, all three sound emitting units work and all four sound emitting holes emit sound: the left, right and upper sound emitting holes simultaneously play one side channel, such as the left channel, while the sound emitting hole at the bottom of the terminal device plays the other side channel, such as the right channel, so that stereo output is realized and user experience is improved.

Preferably, when the terminal device is used for playing stereo on a landscape screen, the sound played by the first sound emitting unit 310 and/or the second sound emitting unit 320 is emitted only through the upper sound emitting hole, and the sound played by the third sound emitting unit 330 is emitted through the lower sound emitting hole, which gives a better stereo effect.

In short, when the terminal device is used for playing stereo on a landscape screen, the terminal device (specifically, the processing unit of the terminal device) may control the first sound emitting unit 310 and/or the second sound emitting unit 320 to play one side channel and emit sound from at least one of the left sound emitting hole 311, the right sound emitting hole 321, and the upper sound emitting hole 341, and the terminal device controls the third sound emitting unit 330 to play the other side channel.
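The landscape-mode channel assignment above can be sketched as follows; the function name and the orientation strings are assumptions for illustration only:

```python
# Sketch of the landscape (horizontal screen) stereo assignment described
# above: the three top-side holes play one side channel, the bottom hole
# plays the other. Names are hypothetical; reference signs match the figures.

def landscape_stereo_routing(top_rotated_to="left"):
    """Map each sound emitting unit to the channel it plays, depending on
    which direction the top of the device was rotated (fig. 7 (a)/(b))."""
    if top_rotated_to == "left":     # fig. 7 (a): top at the user's left
        top_channel, bottom_channel = "left", "right"
    elif top_rotated_to == "right":  # fig. 7 (b): top at the user's right
        top_channel, bottom_channel = "right", "left"
    else:
        raise ValueError("landscape orientation must be 'left' or 'right'")
    return {
        "first_unit": top_channel,    # 310, via hole 311 and/or 341
        "second_unit": top_channel,   # 320, via hole 321 and/or 341
        "third_unit": bottom_channel, # 330, via the lower hole 331
    }

r = landscape_stereo_routing("left")
assert r["third_unit"] == "right"  # the bottom hole plays the opposite side
```

Whichever way the device is rotated, the units nearest the user's left ear end up with the left channel, which is the point of the assignment above.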

Because the sound emitting units may differ from one another, a balanced stereo effect can be achieved during stereo playback through volume compensation, audio parameter compensation, and the like.

Besides audio/video playback, the most important use of a terminal device is communication. When the terminal device is used for a call, the voice in the call is monaural, so the sound played by the sound emitting units is monaural and the voice content played by each sound emitting unit is the same. In this embodiment, when the terminal device is used for a call, the terminal device (specifically, the processing unit of the terminal device) may control the playback mode of the sound emitting units according to the call mode. The call mode includes an earpiece mode and a play-out mode. Fig. 8 shows a sound emission diagram of the sound emitting unit in a vertical screen call scene.

Fig. 8 (a) shows a vertical screen voice call scene, in which only the upper sound emitting hole 341 emits sound and the other sound emitting holes do not; the terminal device can be considered to be in the earpiece mode, which ensures the privacy of the call content.

In one possible implementation manner, either one of the first sound emitting unit 310 and the second sound emitting unit 320 works, and the third sound emitting unit 330 does not work. Taking the first sound emitting unit 310 as an example, the left sound guiding channel 312 is in a closed state and the first upper sound guiding channel 313 is in an open state, so that only the upper sound emitting hole 341 emits sound. Since the second sound emitting unit 320 does not work, the right sound guiding channel 322 and the second upper sound guiding channel 323 may be in either an open or a closed state.

In another possible implementation manner, the first sound emitting unit 310 and the second sound emitting unit 320 are both operated, and the third sound emitting unit 330 is not operated. The left and right sound guiding passages 312 and 322 are in a closed state, and the first and second upper sound guiding passages 313 and 323 are in an open state, so that the sounds generated from the first and second sound outputting units 310 and 320 are transmitted through the upper sound guiding passages and then output only from the upper sound outputting holes 341.

In short, when the terminal device is in the earpiece mode, the terminal device (specifically, the processing unit of the terminal device) may control the first sound emitting unit 310 and/or the second sound emitting unit 320 to play a monaural voice, and only emit a sound from the upper sound emitting hole 341.

Fig. 8 (b) shows a vertical screen video call scene, which is similar to the vertical screen audio/video playing scene: at least one of the three sound emitting units works to play the voice content of the video call. As shown, for example, the first sound emitting unit 310 and the second sound emitting unit 320 may work, and accordingly at least one of the left sound guiding channel 312, the right sound guiding channel 322, the first upper sound guiding channel 313 and the second upper sound guiding channel 323 is in an open state. As another example, only the third sound emitting unit 330 may work, and accordingly the lower sound guiding channel 332 guides the sound out of the terminal device, so that only the lower sound emitting hole 331 emits sound and the other sound emitting holes do not. When the terminal device is in a play-out mode such as hands-free calling or a speaker voice call, the sound emission of the sound emitting units is the same as in the vertical screen video call scene and is not described again.

In short, when the terminal device is in the play-out mode, the terminal device (specifically, the processing unit of the terminal device) may control at least one of the first sound emitting unit 310, the second sound emitting unit 320, and the third sound emitting unit 330 to play the monaural voice.

Optionally, when the terminal is used for a call in the play-out mode, several of the first sound emitting unit 310, the second sound emitting unit 320, and the third sound emitting unit 330 may play sound, so that when one sound emitting unit is blocked by an obstruction, the other sound emitting units can still emit sound.

Optionally, the sound emitting unit for emitting sound in the play mode can be customized or selected by a user, so that when a certain sound emitting unit is damaged, sound can be emitted through other sound emitting units.
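The call-mode behaviour of the preceding paragraphs can be sketched as a small dispatcher; the mode strings, unit names, and guide names below are illustrative assumptions, not identifiers defined in this application:

```python
# Sketch of the call-mode routing described above: in earpiece mode only the
# upper hole 341 sounds; in play-out (speaker) mode several units may play
# the same monaural voice for redundancy. All names are hypothetical.

def call_routing(mode):
    """Return which units play the monaural voice and which sound guiding
    channels are open for a given call mode."""
    if mode == "earpiece":
        return {
            "units": ["first_unit"],               # 310 (and/or 320) works
            "open_guides": ["first_upper_guide"],  # only hole 341 emits sound
        }
    if mode == "speaker":
        # several units play so that sound survives one hole being covered
        return {
            "units": ["first_unit", "second_unit", "third_unit"],
            "open_guides": ["left_guide", "right_guide",
                            "first_upper_guide", "second_upper_guide",
                            "lower_guide"],
        }
    raise ValueError(f"unknown call mode: {mode}")

assert call_routing("earpiece")["open_guides"] == ["first_upper_guide"]
```

A real implementation would also consult the user's customized unit selection mentioned above before choosing which units to drive.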

Each of the left sound guiding channel 312, the right sound guiding channel 322, the first upper sound guiding channel 313 and the second upper sound guiding channel 323 described above has two states: open and closed. In other embodiments, the first upper sound guiding channel 313 and the second upper sound guiding channel 323 may be in a normally open state. In this way, whatever usage state the terminal device is in, the upper sound emitting hole 341 will emit sound as long as the first sound emitting unit 310 and/or the second sound emitting unit 320 emits sound. The opening and closing of the other sound guiding channels are the same as described above and, for brevity, are not described again.

In order to control the open or closed state of the sound guiding channels, the terminal device in the embodiment of the present application further comprises a control component for controlling the open or closed state of the sound guiding channel between a sound emitting unit and a sound emitting hole. Fig. 9 shows a schematic diagram of a control assembly provided by an embodiment of the present application. As shown in fig. 9 (a), a control component 350 may be disposed on every sound guiding channel through which the first sound emitting unit 310 and the second sound emitting unit 320 communicate with the outside of the terminal; that is, each of the left sound guiding channel 312, the right sound guiding channel 322, the first upper sound guiding channel 313 and the second upper sound guiding channel 323 is configured with a control component 350. In some embodiments, as shown in fig. 9 (b), the control component 350 may instead be disposed only on the sound guiding channel between the first sound emitting unit 310 and the left sound emitting hole 311 and the sound guiding channel between the second sound emitting unit 320 and the right sound emitting hole 321; that is, only the left sound guiding channel 312 and the right sound guiding channel 322 are configured with the control component 350, and the first upper sound guiding channel 313 and the second upper sound guiding channel 323 may be in a normally open state. The control assembly 350 includes a signal receiving module 351 and a movable element 352. The signal receiving module 351 can receive an instruction issued by a central processing unit or the like and control the movement of the movable element 352 so as to open or close the sound guiding channel. The signal receiving module 351 can receive commands in the form of power-on or power-off, high or low level, the binary commands "1" or "0", etc., so as to control the corresponding movement of the movable element.
Optionally, the control component 350 may include an electromagnetic (magnetic attraction) module. The energized or de-energized state of the module can control the movable element 352 to close or open the sound guiding channel, and the current in the module can control the degree to which the movable element 352 opens the sound guiding channel, thereby controlling the volume of the sound emitted from the sound emitting hole.

Fig. 10 shows a schematic structural diagram of a control assembly provided in an embodiment of the present application. As shown in fig. 10, the control component 350 includes a signal receiving module 351, a movable element 352 and a fixed element 353; here the control component 350 is an electromagnetic (magnetic attraction) module. The signal receiving module 351 is used for receiving a control signal (e.g., an electrical signal) and includes a coil that generates a magnetic field when energized. The movable element 352 may be a magnet, so that when the coil is energized the movable element 352 can move relative to the magnetic field. The movable element 352 includes a movable element main body 3521 and a movable through hole 3522 formed in the movable element main body 3521. The fixed element 353 is fixed relative to the movable element 352 and includes a fixed element main body 3531 and a fixed through hole 3532 formed in the fixed element main body 3531. When the coil is energized, a repulsive or attractive force arises between the coil and the movable element 352, so that the movable element 352 moves relative to the fixed element 353 and the fixed through hole 3532 either communicates with or is staggered from the movable through hole 3522, thereby opening or closing the sound guiding channel. For example, as shown in fig. 10 (a), when the coil is not energized, the movable element 352 is at an initial position and the movable through hole 3522 communicates with the fixed through hole 3532, so that the sound emitted from the sound emitting unit can be guided out through the movable through hole 3522 and the fixed through hole 3532. As shown in fig. 10 (b), when the coil is energized, the movable element 352 moves relative to the fixed element 353, the movable through hole 3522 is staggered from the fixed through hole 3532, the movable through hole 3522 is blocked by the fixed element main body 3531, and the fixed through hole 3532 is blocked by the movable element main body 3521, so that the sound emitted from the sound emitting unit cannot pass through the through holes and thus cannot be guided out of the sound guiding channel. In addition, the current through the coil controls the magnetic field it generates and hence the relative displacement between the movable element 352 and the fixed element 353, which determines the overlapping area of the movable and fixed through holes; the control component 350 can therefore also adjust the sound volume at the sound emitting hole.
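The relationship between coil-driven displacement and the open area of the valve in fig. 10 can be illustrated with a toy model. The function name and dimensions below are assumptions chosen purely for illustration:

```python
# Toy model of the fig. 10 valve: the coil current displaces the movable
# element 352 relative to the fixed element 353; the through holes 3522/3532
# overlap fully at zero displacement and are fully staggered at one hole
# width. The 2 mm hole width is an illustrative assumption.

def overlap_ratio(displacement_mm, hole_width_mm=2.0):
    """Fraction of the through-hole area left open (1.0 = channel fully
    open, 0.0 = channel closed) for a given relative displacement."""
    d = min(max(displacement_mm, 0.0), hole_width_mm)  # clamp to [0, width]
    return 1.0 - d / hole_width_mm

assert overlap_ratio(0.0) == 1.0  # de-energized initial position: open
assert overlap_ratio(2.0) == 0.0  # fully displaced: channel closed
```

Intermediate currents yield intermediate overlap ratios, which is how the same mechanism doubles as a volume control for the sound emitting hole.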

The energized state of the electromagnetic module can be controlled by means of diodes, triodes, high and low levels, and the like. In practical applications, those skilled in the art can design accordingly as required; the embodiment of the present application is not specifically limited in this regard.

It should be understood that in some other embodiments the control assembly 350 can also control the opening and closing of the sound guiding channel by other means. For example, a bendable element may be provided on the sound guiding channel: when the bendable element is bent at an angle, such as 90°, it closes the sound guiding channel, and when it is straightened the sound guiding channel is opened. Alternatively, a lifting element may be provided on the sound guiding channel: the sound guiding channel is opened when the lifting element is raised and closed when it falls. Those skilled in the art can design corresponding control components according to specific requirements, which will not be described in detail herein.

The terminal device provided in the embodiment of the present application includes three sound emitting units. The two sound emitting units at the top of the terminal can replace the original earpiece while also providing vertical screen stereo, and, together with the sound emitting unit at the bottom, realize horizontal screen stereo when the terminal device is used in landscape orientation. Compared with existing terminal devices, the terminal device provided in the embodiment of the present application adds only one sound emitting unit, so that on the premise of using space efficiently it solves as far as possible the problem that vertical screen playback has no stereo, and realizes stereo output for both horizontal and vertical screen playback. Furthermore, the control assembly controls the opening and closing of the sound guiding channels, providing different sound emission schemes for different usage scenarios of the user and meeting audio playback requirements in various scenarios.

Fig. 11 shows a schematic structural diagram of another terminal device provided in an embodiment of the present application. Unlike the terminal device 300 shown in fig. 5, in the terminal device 400 shown in fig. 11 the first sound emitting unit 310 and the second sound emitting unit 320 are both disposed at the top middle of the terminal device. As in the terminal device 300, the first sound emitting unit 310 communicates with the outside through the left sound guiding channel 312 and the left sound emitting hole 311 and can thus emit sound from the left side of the terminal device, and the second sound emitting unit 320 communicates with the outside through the right sound guiding channel 322 and the right sound emitting hole 321 and can thus emit sound from the right side of the terminal device. The first sound emitting unit 310 and the second sound emitting unit 320 can emit sound from the upper side of the terminal device through the first upper sound guiding channel 313 and the second upper sound guiding channel 323 respectively. There may or may not be a space between the first upper sound guiding channel 313 and the second upper sound guiding channel 323. The third sound emitting unit 330 may be disposed at any position at the bottom of the terminal device, such as the bottom corner shown in fig. 5 or the bottom middle shown in fig. 11. For other details, refer to the description of the terminal device 300, which is not repeated herein. It should be understood that the first sound emitting unit 310, the second sound emitting unit 320 and the third sound emitting unit 330 in the embodiment of the present application may be disposed at any position in the terminal as long as each sound emitting unit can communicate with its corresponding sound emitting hole.

The sound emitting hole can be formed in the front face of the terminal device, or, in order to increase the screen-to-body ratio, in the side face of the terminal device. This is briefly introduced below taking the third sound emitting unit 330 in fig. 11 as an example.

Fig. 12 shows a schematic cross-sectional view of the third sound emitting unit 330 in fig. 11 taken along the section line A-A. Illustratively, the third sound emitting unit 330 is a moving-coil speaker. As shown in fig. 12, the third sound emitting unit 330 includes a magnet 301, a voice coil 302, a diaphragm 303, a diaphragm flange 304, a magnetic bowl 305, a frame 306, and a housing 307. The magnet 301 is disposed in the magnetic bowl 305 with a gap between the magnet 301 and the inner wall of the magnetic bowl 305, and the voice coil 302 is inserted into this gap. The voice coil 302 is connected to the frame 306 via the diaphragm flange 304, and the edge of the diaphragm 303 is connected to the diaphragm flange 304. When the voice coil 302 is energized, it generates a magnetic field that interacts with the magnetic field of the magnet 301, so that the voice coil 302 drives the diaphragm 303 to vibrate; the diaphragm flange 304 ensures that the voice coil 302 moves along the axial direction of the third sound emitting unit 330 and restricts transverse movement. The magnetic bowl 305 may act as a magnetic shield. The housing 307 can be used to fix the frame 306; the housing 307 and the magnetic bowl 305 form a rear sound cavity 309, which is sealed and used for correcting low-frequency sound signals. The third sound emitting unit 330 may be disposed on the middle frame 211, and the middle frame 211 is slotted to form, together with the diaphragm 303 of the third sound emitting unit 330, a front sound cavity, i.e., the lower sound guiding channel 332. The middle frame 211 is opened at the side of the terminal device to form the lower sound emitting hole 331, which is the outlet of the lower sound guiding channel 332 extending to the side of the terminal.

It should be understood that fig. 12 only illustrates an exemplary structure of the sound emitting unit and a position of the sound emitting hole, and those skilled in the art may modify the structure of the sound emitting unit according to specific requirements or select a suitable position of the sound emitting hole, and the embodiment of the present application is not limited in particular.

Fig. 13 is a schematic diagram illustrating the operation logic of controlling playback of the sound emitting units according to an embodiment of the present application. The following is described in detail with reference to fig. 6 to 8 and fig. 13.

First, when the sound emitting units of the terminal device are not working, each sound guiding channel on the terminal device can be in an initial state. For example, the left sound guiding channel 312 and the first upper sound guiding channel 313 communicating with the first sound emitting unit 310 are both in an open state; the right sound guiding channel 322 and the second upper sound guiding channel 323 communicating with the second sound emitting unit 320 are both in an open state; and the lower sound guiding channel 332 communicating with the third sound emitting unit 330 is not provided with a control component and is therefore always open, so that whether it transmits sound is synchronized with whether the third sound emitting unit 330 works and emits sound.

In step S410, the terminal device determines a user usage scenario.

In the embodiment of the present application, a user usage scene is a scene in which sound needs to be played through a sound output unit, such as an audio/video playback scene or a call scene, where the call scene may include a voice call, a video call, a ringtone prompt, and the like. The way sound is played in a call scene can further be divided into a play-out mode (also called loudspeaker mode), in which the sound is relatively loud, and an earpiece mode, in which the sound is quiet and is generally heard with the device held close to the ear. In an audio/video playback scene, sound is generally played in the play-out mode.

The terminal device can determine the usage scene according to a user operation instruction. For example, when a user taps a play control in an audio/video playback application, the terminal device may determine, from the signal or instruction corresponding to the play control, that the user wants to play audio or video. For another example, when the terminal device receives an incoming call, it may determine that the user is making a call from the communication request or from the signal or instruction generated when the user taps the answer control. For another example, when the user taps a control for initiating or accepting a voice or video call in a real-time communication application, the terminal device may determine from the corresponding signal or instruction that the user is making a voice or video call.

In one case, the terminal device determines that the user usage scene is an audio/video playback scene. When the user plays audio or video content on the terminal device, stereo output can be achieved whether the device is held in landscape or portrait orientation, bringing a better user experience.

To realize stereo output, the landscape/portrait state of the terminal device needs to be detected, and the operation mode of the sound output units is determined according to the posture of the terminal device. Therefore, in step S420, the terminal device detects the landscape/portrait state.

When the terminal device is in the landscape state, the upper edge of the terminal device may be on the same side as either the user's left ear or right ear. Therefore, in step S421, the placement direction of the terminal device is detected. In the embodiment of the present application, usage state information such as the landscape/portrait state and the placement direction of the terminal device may be acquired by a sensor, for example a gravity sensor or an orientation sensor (e.g., a gyroscope). For example, the posture information of the terminal device, including landscape, portrait, or tilt angle, can be obtained through the gravity sensor; the orientation state of the terminal device, such as upright, inverted, left landscape, right landscape, elevation or depression, as well as the azimuth, rotation and tilt angles in the horizontal plane, can be obtained through the orientation sensor.
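The posture classification of steps S420/S421 can be sketched from raw gravity-sensor readings. The axis convention, sign choices, and function name below are illustrative assumptions, not part of the embodiment:

```python
def classify_posture(gx: float, gy: float) -> str:
    """Classify device posture from gravity components (m/s^2).

    Assumed convention: gx is along the device's short axis, gy along its
    long axis; negative gy means the upper edge of the device faces up.
    Returns 'upright', 'inverted', 'left_landscape', or 'right_landscape'.
    """
    if abs(gy) >= abs(gx):
        # Gravity mostly along the long axis -> portrait orientation
        return "upright" if gy < 0 else "inverted"
    # Gravity mostly along the short axis -> landscape; sign tells which
    # edge of the device is on the user's left
    return "left_landscape" if gx < 0 else "right_landscape"

print(classify_posture(0.2, -9.7))   # portrait, upper edge facing up
print(classify_posture(-9.6, 0.3))   # landscape, upper edge on the left
```

In a real implementation the raw samples would be low-pass filtered and hysteresis applied so that the routing does not flip on every small tilt.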

It should be understood that steps S410, S420, and S421 are performed independently of each other, and the embodiment of the present application does not limit their order. For example, the detection of the landscape/portrait state in step S420 and the detection of the placement direction in step S421 may be performed simultaneously, i.e., the landscape/portrait state and the placement direction of the terminal device are acquired at the same time. Likewise, steps S420 and S421 may be performed before, or simultaneously with, the determination of the user usage scene in step S410. For example, the terminal device may obtain the landscape/portrait state information as a by-product of detecting the placement direction, so that steps S420 and S421 can be merged into one step. The embodiment of the present application only exemplarily describes the information that needs to be acquired to determine the operation mode of the sound output units in different usage scenes.

After the posture and placement direction of the terminal device are determined, the operation mode of the sound output units and the open/closed states of the sound guide channels are determined. In step S422, all the sound guide channels are opened; the first sound output unit 310 and the second sound output unit 320 are set to the left or right channel according to the placement direction of the terminal device, and the third sound output unit 330 is set to the channel of the other side. Specifically, if the upper edge of the terminal device is on the same side as the user's left ear, the first sound output unit 310 and the second sound output unit 320 are set to the left channel and the third sound output unit 330 to the right channel; if the upper edge is on the same side as the user's right ear, the first sound output unit 310 and the second sound output unit 320 are set to the right channel and the third sound output unit 330 to the left channel.

Opening all the sound guide channels here means that all of them are able to transmit sound. Specifically, the control component may receive an instruction from, for example, the central processing unit (CPU) and then open the left sound guide channel 312 and the first upper sound guide channel 313, so that the sound emitted by the first sound output unit 310 can leave the terminal device through them. Likewise, the control component may open the right sound guide channel 322 and the second upper sound guide channel 323, so that the sound emitted by the second sound output unit 320 can leave the terminal device through them. The lower sound guide channel 332 is always open, so when the third sound output unit 330 operates, its sound leaves the terminal device through the lower sound guide channel 332.
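The landscape configuration of step S422 can be summarized as a small routing table. The unit and channel identifiers follow the reference numerals in the text; the data structures and function name are illustrative assumptions:

```python
def route_landscape(upper_edge_on_left: bool):
    """Return (channel assignment, sound-guide open states) for landscape play."""
    # Step S422: all sound guide channels open (332 is always open anyway)
    guides = {"left_guide_312": True, "upper_guide_313": True,
              "right_guide_322": True, "upper_guide_323": True,
              "lower_guide_332": True}
    if upper_edge_on_left:
        # Units 310/320 sit by the left ear, unit 330 by the right ear
        assignment = {"unit_310": "L", "unit_320": "L", "unit_330": "R"}
    else:
        assignment = {"unit_310": "R", "unit_320": "R", "unit_330": "L"}
    return assignment, guides
```

Step S423's channel balancing would then scale the per-unit gains so the two sides sound equally loud despite the asymmetric speaker placement.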

Therefore, when the terminal device plays audio/video in landscape orientation, the sound output units on the same side as the user's left ear play the left channel and the sound output unit on the same side as the user's right ear plays the right channel, so that landscape stereo playback is achieved.

Optionally, to improve the user experience, preset audio parameters may be applied in step S423 to balance the left and right channels, so that the volumes of the left and right channels heard by the user are consistent.

It should be understood that in step S422 other combinations of working sound output units and open sound guide channels are possible. For example, only the first upper sound guide channel 313, the second upper sound guide channel 323 and the lower sound guide channel 332 are open; or only the left sound guide channel 312, the right sound guide channel 322 and the lower sound guide channel 332 are open; or only the first sound output unit 310 (or only the second sound output unit 320) plays one side channel while the third sound output unit 330 plays the other side channel. For details, reference may be made to the above description; they are not repeated here for brevity.

When the terminal device is in the portrait state, the upper edge of the terminal device may face upward, in which case the first sound output unit 310 is on the same side as the user's left ear and the second sound output unit 320 on the same side as the user's right ear. The upper edge may also face downward, in which case the first sound output unit 310 is on the same side as the user's right ear and the second sound output unit 320 on the same side as the user's left ear. Therefore, in step S424 the placement direction of the terminal device is detected, i.e., whether the device is upright or inverted.

After the posture and placement direction of the terminal device are determined, the operation mode of the sound output units and the open/closed states of the sound guide channels are determined. In step S425, the left sound guide channel 312 and the right sound guide channel 322 are opened, and the upper sound guide channels are closed; closing the upper sound guide channels here means closing both the first upper sound guide channel 313 and the second upper sound guide channel 323. The first sound output unit 310 plays the left or right channel according to the placement direction, the second sound output unit 320 plays the other channel, and the third sound output unit 330 performs sound field and bass enhancement. Specifically, if the upper edge of the terminal device faces upward, the first sound output unit 310 plays the left channel and the second sound output unit 320 the right channel; if the upper edge faces downward, the first sound output unit 310 plays the right channel and the second sound output unit 320 the left channel; in both cases the third sound output unit 330 performs sound field and bass enhancement. Thus, the sound emitted by the first sound output unit 310 is guided out of the terminal device through the left sound guide channel 312, and the sound of the second sound output unit 320 through the right sound guide channel 322; the lower sound guide channel 332 is always open, so when the third sound output unit 330 operates, its sound leaves the terminal device through the lower sound guide channel 332.
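The portrait configuration of step S425 can be sketched as a routing function; the identifiers mirror the reference numerals in the text, and the data structures are illustrative assumptions:

```python
def route_portrait(upright: bool):
    """Return (channel assignment, sound-guide open states) for portrait play."""
    # Step S425: left/right guides open, both upper guides closed
    guides = {"left_guide_312": True, "upper_guide_313": False,
              "right_guide_322": True, "upper_guide_323": False,
              "lower_guide_332": True}
    if upright:
        assignment = {"unit_310": "L", "unit_320": "R", "unit_330": "bass"}
    else:
        # Device inverted: the left/right roles of units 310/320 swap
        assignment = {"unit_310": "R", "unit_320": "L", "unit_330": "bass"}
    return assignment, guides
```

Here `"bass"` stands for the sound field and bass enhancement role of unit 330 rather than a discrete channel.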

Therefore, when the terminal device plays audio/video in portrait orientation, the sound output unit on the same side as the user's left ear plays the left channel and the sound output unit on the same side as the user's right ear plays the right channel, achieving portrait stereo playback. The third sound output unit 330 performs sound field and bass enhancement, which improves sound quality.

Alternatively, since users generally hold the terminal device upright, step S424 may be omitted: when the terminal device detects that it is in the portrait state, step S425 is executed directly, with the first sound output unit playing the left channel and the second sound output unit playing the right channel.

It should be understood that in step S425 other combinations of working sound output units and open sound guide channels are possible. For example, the first upper sound guide channel 313 and/or the second upper sound guide channel 323 may also be opened, or the third sound output unit 330 may not work. For details, reference may be made to the above description; they are not repeated here for brevity.

In another case, the terminal device determines that the user usage scene is a call scene. When the user makes a call, a single channel is generally played, but two playback modes are distinguished: the earpiece mode and the play-out mode. After determining that the usage scene is a call scene, the call mode, i.e., earpiece mode or play-out mode, is detected in step S430.

When the terminal device is in the earpiece mode, step S431 is executed: the left sound guide channel 312 and the right sound guide channel 322 are closed, the upper sound guide channels are opened (i.e., both the first upper sound guide channel 313 and the second upper sound guide channel 323), and the first sound output unit 310 and the second sound output unit 320 play the monaural voice. Thus, during a call in earpiece mode, the voice played by the first sound output unit 310 and the second sound output unit 320 is output from the terminal device through the first upper sound guide channel 313 and the second upper sound guide channel 323. In the earpiece mode, the volume emitted by the first sound output unit 310 and the second sound output unit 320 may be kept below a volume threshold to protect the privacy of the user's conversation.

In some embodiments, step S430 may be omitted. For example, when the user answers or makes a call, the terminal device may default to the earpiece mode and then switch the working sound output units in response to a user operation such as tapping a loudspeaker control. Or, when the user makes a voice or video call in a specific communication application, the terminal device may default to the play-out mode according to the application being used.

When the terminal device is in the play-out mode, step S432 is executed: the third sound output unit 330 plays the monaural sound, which is output from the terminal device through the lower sound guide channel 332. Since the first sound output unit 310 and the second sound output unit 320 do not emit sound, the sound guide channels communicating with them may be either open or closed. It should be understood that in the play-out mode the first sound output unit 310 and/or the second sound output unit 320 may also play the monaural sound, or all the sound output units may play it; those skilled in the art may design accordingly according to specific requirements, and no limitation is imposed here.
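The call-scene branch (steps S430 to S432) can be sketched as follows. The numeric volume cap and all names are assumptions for illustration; the embodiment only states that the earpiece volume may not exceed a threshold:

```python
EARPIECE_VOLUME_LIMIT = 0.3  # assumed privacy cap, normalized 0..1

def route_call(earpiece_mode: bool, requested_volume: float):
    """Return (working units, open guides, effective volume) for a call."""
    if earpiece_mode:
        # Step S431: units 310/320 play the voice via the upper guides,
        # capped so the conversation stays private
        units = ["unit_310", "unit_320"]
        guides = ["upper_guide_313", "upper_guide_323"]
        volume = min(requested_volume, EARPIECE_VOLUME_LIMIT)
    else:
        # Step S432: unit 330 plays the mono voice via the lower guide 332
        units = ["unit_330"]
        guides = ["lower_guide_332"]
        volume = requested_volume
    return units, guides, volume
```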

The terminal device provided in the embodiment of the present application includes three sound output units: a first sound output unit and a second sound output unit disposed in the upper portion of the terminal device, and a third sound output unit disposed at the bottom of the terminal device. The first sound output unit communicates, through a left sound guide channel and a first upper sound guide channel respectively, with the left sound outlet hole on the left side of the terminal device and the upper sound outlet hole at the upper end of the terminal device. The second sound output unit communicates, through a right sound guide channel and a second upper sound guide channel respectively, with the right sound outlet hole on the right side of the terminal device and the upper sound outlet hole at the upper end of the terminal device. When the terminal device is used in portrait orientation, the first and second sound output units can play the left and right channels respectively, realizing portrait stereo output. When the terminal device is used in landscape orientation, the first and/or second sound output unit can play one of the two channels and the third sound output unit the other, realizing landscape stereo output.
In addition, the terminal device of the embodiment of the present application can provide different sound-production schemes according to the user's usage scene. For example, in an audio/video playback scene it can provide stereo playback, improving the user experience; in a call scene, the first and/or second sound output unit can play the voice through the upper sound guide channels, realizing the function of a receiver and protecting call privacy.

Fig. 14 shows a schematic structural diagram of another terminal device provided in the embodiment of the present application. As shown in fig. 14, the terminal device 500 includes a first sound emitting unit 310, a second sound emitting unit 320, a third sound emitting unit 330, and a fourth sound emitting unit 340, the first sound emitting unit 310 and the second sound emitting unit 320 being arranged in the width direction of the terminal device, the third sound emitting unit 330 and the fourth sound emitting unit 340 being arranged in the length direction of the terminal device.

Illustratively, the first sound output unit 310 and the second sound output unit 320 are disposed at the left and right ends of the terminal device. They may be disposed near the top of the terminal device 500 to avoid the region frequently held by the user and prevent the sound outlet holes from being blocked by the user's hand. Preferably, the first sound output unit 310 and the second sound output unit 320 are aligned in the width direction. The third sound output unit 330 and the fourth sound output unit 340 are disposed at the lower and upper ends of the terminal device, respectively. Preferably, the fourth sound output unit 340 is disposed in the middle of the upper end of the terminal device, so that sound enters the ear directly when the user answers a call, improving the privacy of the call.

The terminal device 500 has a left sound outlet hole 311, a right sound outlet hole 321, a lower sound outlet hole 331, and an upper sound outlet hole 341 formed on its four sides, and the four sound outlet holes emit sound in the up, down, left, and right directions of the terminal device 500. The first sound output unit 310 communicates with the left sound outlet hole 311 through the left sound guide channel 312, the second sound output unit 320 with the right sound outlet hole 321 through the right sound guide channel 322, the third sound output unit 330 with the lower sound outlet hole 331 through the lower sound guide channel 332, and the fourth sound output unit 340 with the upper sound outlet hole 341 through the upper sound guide channel 342.

The positions of the sound output units, sound guide channels and sound outlet holes are shown only by way of example; it should be understood that those skilled in the art can design them according to actual requirements.

Fig. 15 is a schematic diagram illustrating the operation logic for controlling playback by the sound output units according to an embodiment of the present application. This is described in detail below with reference to fig. 14 and 15. The operation logic of fig. 15 is similar to that of fig. 13, with step S5xx corresponding to step S4xx; only the differences are described below, and for the rest reference may be made to the description of fig. 13.

In step S510, a user usage scenario is determined.

In one case, the terminal device determines that the user usage scene is an audio/video playback scene.

In step S520, the terminal device detects the landscape/portrait screen status.

When the terminal device is in the landscape state, in step S521, the placement direction is detected.

In step S522, the third sound output unit is set to the left or right channel and the fourth sound output unit to the channel of the other side, according to the placement direction; the first sound output unit is set to the upper or lower channel, and the second sound output unit to the other. Specifically, if the upper edge of the terminal device is on the same side as the user's left ear, the fourth sound output unit 340 is set to the left channel, the third sound output unit 330 to the right channel, the second sound output unit 320 to the upper channel and the first sound output unit 310 to the lower channel. If the upper edge is on the same side as the user's right ear, the third sound output unit 330 is set to the left channel, the fourth sound output unit 340 to the right channel, the first sound output unit 310 to the upper channel and the second sound output unit 320 to the lower channel. The four sound output units work independently and play the upper, lower, left and right channels through their respective sound guide channels, so that multi-channel stereo can be realized in landscape use.
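The four-speaker landscape mapping of step S522 can be written as a small dispatch function; the dictionary keys follow the reference numerals in the text and are otherwise assumptions:

```python
def route_landscape_quad(upper_edge_on_left: bool):
    """Step S522 mapping for the four-unit terminal of fig. 14."""
    if upper_edge_on_left:
        # Upper edge by the left ear: 340 (top) is leftmost, 330 (bottom) rightmost
        return {"unit_340": "L", "unit_330": "R",
                "unit_320": "top", "unit_310": "bottom"}
    # Upper edge by the right ear: roles mirror
    return {"unit_330": "L", "unit_340": "R",
            "unit_310": "top", "unit_320": "bottom"}
```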

The upper and lower channels in the embodiment of the present application may be a height (sky) channel and a ground channel recorded separately, or may be channels separated by algorithmic processing.

Optionally, in step S523, preset audio parameters are used to balance the playback of the left and right channels and of the upper and lower channels.

Alternatively, when the terminal device is in the landscape state, the first sound output unit 310 and the second sound output unit 320 may perform sound field and bass enhancement instead of playing the upper and lower channels.

When the terminal device is in the portrait screen state, in step S524, the placement direction is detected.

In step S525, the first sound output unit 310 plays the left or right channel according to the placement direction, and the second sound output unit plays the other channel; the third and fourth sound output units perform sound field and bass enhancement. Specifically, if the upper edge of the terminal device faces upward, the first sound output unit 310 plays the left channel and the second sound output unit 320 the right channel; if the upper edge faces downward, the first sound output unit 310 plays the right channel and the second sound output unit 320 the left channel; in both cases the third sound output unit 330 and the fourth sound output unit 340 perform sound field and bass enhancement. The four sound output units work independently: the first sound output unit 310 and the second sound output unit 320 play the left and right channels through their respective sound guide channels, so that stereo can be realized in portrait use, while the third and fourth sound output units perform sound field and bass enhancement, improving sound quality.

Optionally, when the terminal device is in the portrait state, the third and fourth sound output units may play the upper and lower channels instead of performing sound field and bass enhancement.

In another case, the terminal device determines that the user usage scenario is a call scenario.

In step S530, the call mode, i.e., earpiece mode or play-out mode, is detected.

When the terminal device is in the earpiece mode, in step S531 the fourth sound output unit 340 plays the monaural voice and the other sound output units do not work. The fourth sound output unit 340 emits sound through the upper sound outlet hole 341, and its volume may be kept below a volume threshold, protecting the privacy of the user's conversation.

When the terminal device is in the play-out mode, in step S532 the third sound output unit 330 plays the monaural sound, and the other sound output units may or may not play it at the same time.

The terminal device in the embodiment of the present application includes four sound output units arranged on its four sides. Each sound output unit can work independently, so two-channel or four-channel stereo playback can be provided, enhancing the sense of immersion in audio/video playback and improving the user experience. For example, when the terminal device plays audio/video content in portrait orientation, the first sound output unit 310 and the second sound output unit 320 play the left and right channels respectively, achieving stereo playback in the portrait scene. When the terminal device plays audio/video content in landscape orientation, the third sound output unit 330 and the fourth sound output unit 340 play the left and right channels respectively, achieving stereo playback in the landscape scene. When all four sound output units play different channel contents, four-channel stereo playback can be provided.

It should be noted that the terminal device in the embodiment of the present application may include more sound output units and corresponding sound guide channels, for example 5, 6, 7 or more. The multiple sound output units can play different sound contents, achieving stereo playback with more channels in both landscape and portrait use. One of them can play a bass channel, achieving 3.1-, 4.1-, 5.1- or 6.1-channel stereo playback effects.

Fig. 16 shows a schematic block diagram of a terminal device provided in an embodiment of the present application. As shown, the terminal device 600 includes a detection unit 610, a processing unit 620, a channel opening/closing control unit 630, and a sound output unit 640. The sound output unit 640 includes the first sound output unit 310, the second sound output unit 320 and the third sound output unit 330. Since the first sound output unit 310 and the second sound output unit 320 each have two sound guide channels, each of their sound guide channels corresponds to one channel opening/closing control unit 630. The third sound output unit 330 has one sound guide channel, which is not provided with a channel opening/closing control unit 630.

The detection unit 610 is used to detect the use state information of the terminal device, such as the motion posture, the placement direction, the user operation, and the like of the terminal device. The detection unit 610 outputs the acquired use state information to the processing unit 620.

The detection unit 610 may be the gyro sensor 180B, the acceleration sensor 180E, and the like in the hardware framework diagram shown in fig. 1, which can detect posture information of the terminal device such as its rotation direction and tilt angle, used for determining whether the terminal device is in the landscape/portrait state, its placement direction, and so on. The detection unit 610 may also be the pressure sensor 180A, the touch sensor 180K, or the like, which can detect a touch operation of the user and thus determine that the user has set the terminal device to the earpiece mode or the play-out mode.

The input of the processing unit 620 is connected to the detection unit 610, and its outputs are connected to the sound output unit 640 and the channel opening/closing control unit 630. The processing unit 620 receives the terminal device usage state information from the detection unit 610 and determines from it the user usage scene, the landscape/portrait state of the terminal device, its placement direction, and the like. The usage scene includes a landscape/portrait stereo playback scene and a call scene. The processing unit 620 then determines the operation mode of the sound output units and the open/closed states of the sound guide channels according to the acquired information.

The processing unit 620 may include a plurality of independent sub-processing units; for example, some sub-processing units may output control signals while others perform audio processing and output audio signals. The processing unit 620 may also be an integrated unit that outputs both control signals and audio signals.

The processing unit 620 may be the processor 110 described in fig. 1.

The channel opening/closing control unit 630 is configured to receive a first control signal from the processing unit 620 and correspondingly control the open/closed state of the sound guide channel of the corresponding sound output unit in 640. The channel opening/closing control unit 630 may be the control assembly 350 described above.

The sound output unit 640 is configured to receive a second control signal from the processing unit 620 and to operate or stay muted according to it. The sound output unit 640 can also receive the audio signal sent by the processing unit 620 and play it on the working sound output units.
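Putting fig. 16 together: the processing unit 620 consumes the detection unit's state and emits a first control signal (guide-channel states, toward the channel opening/closing control unit 630) and a second control signal (which units play, toward the sound output unit 640). A minimal sketch, with all class and field names assumed:

```python
class ProcessingUnit:
    """Illustrative stand-in for processing unit 620 of fig. 16."""

    def decide(self, scene: str, posture: str):
        """Return (first control signal, second control signal)."""
        if scene == "av_playback" and posture.endswith("landscape"):
            first = {"all_guides": "open"}                      # to unit 630
            second = {"units_310_320": "one side", "unit_330": "other side"}
        elif scene == "av_playback":
            # Portrait playback: upper guides closed, 330 does bass
            first = {"left_right_guides": "open", "upper_guides": "closed"}
            second = {"unit_310": "L", "unit_320": "R", "unit_330": "bass"}
        else:
            # Call scene, earpiece mode: voice through the upper guides
            first = {"upper_guides": "open", "left_right_guides": "closed"}
            second = {"units_310_320": "mono voice"}
        return first, second
```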

The following describes a method for realizing stereo output provided by the embodiments of the present application with reference to specific embodiments. Fig. 17 shows a hardware block diagram of audio correlation provided in an embodiment of the present application, and fig. 18 shows a software framework diagram of audio correlation provided in an embodiment of the present application.

Next, referring to fig. 17 and 18, the audio processing procedure of the terminal device will be briefly described.

In fig. 17, the Application Processor (AP) mainly runs the application programs and an operating system (OS) such as the Android system. The Communication Processor (CP), also called Baseband Processor (BP) or modem, mainly handles communication-related processing; its main functions are to support several communication standards and to provide multimedia functions and related interfaces for multimedia displays, image sensors and audio devices. The audio digital signal processor (audio DSP) is a DSP dedicated to audio processing. The AP, the CP and the audio DSP communicate with each other through inter-processor communication (IPC) to exchange control messages and audio data. The AP, CP and audio DSP, as well as other functional processors, may be integrated on one chip to form a system on chip (SoC).

In addition, the terminal device further includes a hardware codec chip mainly used for audio acquisition and playback. When acquiring audio, it converts the analog audio signal into a digital signal (A/D conversion) and sends the digital signal to the central processing unit (CPU) through an I2S bus (the codec may also be integrated with the CPU in one chip). When audio is to be played, the CPU sends the digital audio signal to the codec through the I2S bus, and the codec converts it into an analog signal (D/A conversion) for playback. The codec may be controlled by the AP; for example, upon receiving a second control signal from the AP, the codec selects a different audio path, since, e.g., music playback and a call travel through different paths inside the codec chip. The codec may also apply processing such as volume control and equalization (EQ) to the audio signal. The codec exchanges audio data with the audio DSP over the I2S bus.
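As an illustration of the digital-domain volume control mentioned above, the following applies a gain to 16-bit PCM samples with clipping; this is a generic sketch, not the codec chip's actual algorithm:

```python
def apply_gain(pcm: list, gain: float) -> list:
    """Scale 16-bit signed PCM samples by `gain`, clamping to the legal range."""
    out = []
    for s in pcm:
        v = int(s * gain)
        out.append(max(-32768, min(32767, v)))  # clamp to 16-bit range
    return out

print(apply_gain([1000, -20000, 30000], 1.5))  # [1500, -30000, 32767]
```

Real codecs typically apply such gains in fixed point with dithering, but the clamping step is what prevents audible wrap-around distortion.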

Various peripheral devices are connected to the hardware codec, such as a microphone (MIC), an earpiece, a speaker, and a wired earphone. The sound output units in the embodiments of the present application also belong to this kind of peripheral. Unlike the aforementioned peripherals, a Bluetooth (BT) headset includes its own audio acquisition and playback functions and is therefore connected directly to the audio DSP via the I2S bus. When the Bluetooth headset is used for listening to music, however, the audio stream is decoded into PCM data on the AP and sent directly to the Bluetooth headset for playback through the UART, rather than through the audio DSP over the I2S bus.

A Power Amplifier (PA) is disposed between the codec and each sound output unit and performs power amplification on the audio signal, so that the sound output unit can play at a larger volume.

The peripherals of the terminal device in the embodiments of the present application further include a channel opening and closing control unit, which controls the open or closed state of the sound guide channels of the sound output units. The channel opening and closing control unit can receive a first control signal from the AP and accordingly open or close the corresponding sound guide channel.
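The channel opening and closing control unit can be sketched as a small state machine that reacts to the first control signal. The command encoding ("open"/"close") and the default closed state are illustrative assumptions.

```python
# Sketch of a channel opening and closing control unit: on receiving a
# first control signal from the AP, it opens or closes the sound guide
# channel it controls. The command encoding is an illustrative assumption.
class ChannelControlUnit:
    def __init__(self, channel_name):
        self.channel_name = channel_name
        self.is_open = False  # assume channels start closed

    def on_first_control_signal(self, command):
        if command == "open":
            self.is_open = True
        elif command == "close":
            self.is_open = False
        else:
            raise ValueError(f"unknown command: {command}")

left = ChannelControlUnit("left sound guide channel")
left.on_first_control_signal("open")
print(left.is_open)  # -> True
```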

Fig. 18 mainly shows a software block diagram of the audio portion running on the AP in fig. 17, taking as an example a terminal device that is an Android mobile phone with the Android system running on the AP. The Android audio software stack is divided into layers, mainly including the kernel layer, the Hardware Abstraction Layer (HAL), the framework layer, and the application layer.

The top layer is the application layer, which includes a series of audio-related applications such as music, telephony, sound recording, video, and games. Below it is the framework layer, which for the audio part corresponds to the application framework layer and the system runtime layer shown in fig. 1. The framework has a number of modules, mainly including the media player (MediaPlayer), media recorder (MediaRecorder), audio track (AudioTrack), audio recorder (AudioRecorder), audio service (AudioService), and audio manager (AudioManager); the specific functions of these modules are not the focus of this application and are not described again here. The framework may determine device/routing policies, volume control, mixing services, and so on. The next layer is the hardware abstraction layer (HAL layer), which mainly includes AudioFlinger and the audio policy service (AudioPolicyService) and is used to configure devices/routes, volume, and the like. The task of the HAL layer is to actually connect AudioFlinger/AudioPolicyService with the hardware devices while ensuring that changes in the bottom layer do not affect the upper layers. The kernel layer contains the audio drivers, for example the codec driver and the MIC driver.

Take the example of the user playing audio with the terminal device in the vertical-screen orientation. The detection unit may report the posture information of the terminal device (for example, that the terminal device is in the vertical-screen state) to the AP in real time, and the framework obtains this posture information at the software level. When the user opens a music-playing application, the framework can determine that the usage scene is the vertical-screen audio playback scene. According to the determined scene, the framework issues the corresponding vertical-screen playback policy (for example, the first sound output unit plays the left channel, the second sound output unit plays the right channel, the third sound output unit does not work, the left sound guide channel and the right sound guide channel are opened, and the first sound guide channel and the second sound guide channel are closed) to the HAL layer so as to drive the peripherals. The HAL layer may send control messages to the peripherals involved in the playback policy. From the hardware point of view, the AP sends a first control signal to each channel opening and closing control unit, so that the unit acts to open or close the corresponding sound guide channel. For example, the AP sends a first control signal to the channel opening and closing control unit corresponding to the left sound guide channel, so that this unit acts to open the left sound guide channel, and the AP sends a first control signal to the channel opening and closing control unit corresponding to the first sound guide channel, so that this unit acts to close the first sound guide channel.
The AP also sends a second control signal to the codec, so that the codec selects the corresponding audio path, for example determining which sound output units work and which do not. A path for vertical-screen audio playback is thus established. Digital audio signals can then be exchanged between the AP and the audio DSP: the audio DSP performs resampling, mixing and other processing on the audio to be played and sends the digital audio signal to the codec, which converts it into an analog audio signal. Following the determined audio path, the analog signal is amplified by the power amplifier and played on the corresponding sound output units, for example the left channel on the first sound output unit and the right channel on the second sound output unit.
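The scene-to-policy mapping that the framework issues to the HAL layer can be sketched as a lookup table. The unit and channel assignments for the vertical-screen stereo scene follow the text above; the key names and the dictionary encoding are illustrative assumptions.

```python
# Sketch of the framework's playback policy lookup. The assignments for
# the vertical-screen stereo scene follow the description above; the
# encoding itself is an illustrative assumption.
PLAYBACK_POLICIES = {
    "portrait_stereo": {
        "first_sound_output_unit": "left_channel",
        "second_sound_output_unit": "right_channel",
        "third_sound_output_unit": None,  # does not work in this scene
        "open_channels": ["left_sound_guide", "right_sound_guide"],
        "closed_channels": ["first_sound_guide", "second_sound_guide"],
    },
}

def policy_for_scene(scene):
    """Return the playback policy the framework would issue to the HAL."""
    return PLAYBACK_POLICIES[scene]

print(policy_for_scene("portrait_stereo")["open_channels"])
# -> ['left_sound_guide', 'right_sound_guide']
```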

Take the example of the user answering a call with the terminal device. After receiving a communication request from another terminal device, the CP can notify the AP of the incoming communication, and the framework obtains this information at the software level. The framework can determine that the usage scene is a call scene, which initially defaults to the earpiece mode. According to the determined scene, the framework issues the corresponding earpiece-mode call policy (for example, the first sound output unit and the second sound output unit play a monaural channel, the third sound output unit does not work, the left sound guide channel and the right sound guide channel are closed, and the first sound guide channel and the second sound guide channel are opened) to the HAL layer so as to drive the peripherals. The HAL layer may send control messages to the peripherals involved in the policy. From the hardware point of view, the AP sends a first control signal to each channel opening and closing control unit, so that the unit acts to open or close the corresponding sound guide channel. For example, the AP sends a first control signal to the channel opening and closing control unit corresponding to the left sound guide channel, so that this unit acts to close the left sound guide channel, and the AP sends a first control signal to the channel opening and closing control unit corresponding to the first sound guide channel, so that this unit acts to open the first sound guide channel.
The AP also sends a second control signal to the codec, so that the codec selects the corresponding audio path, for example determining which sound output units work and which do not. This establishes the audio path for the call scene, and the AP may inform the CP that the path is established. In the downlink direction, the terminal device receives the voice data sent by the other party over the air interface and performs network-side processing, then sends the voice data to the audio DSP, which decodes it and performs post-processing, resampling and the like before sending it to the codec chip. The codec converts the digital audio signal into an analog audio signal, which, following the determined audio path, is amplified by the power amplifier and played on the corresponding sound output units, for example a monaural voice played by the first sound output unit and the second sound output unit.
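The downlink voice processing in the audio DSP can be sketched as a chain of stages applied in order. Each stage here is a stub that only records the ordering, since the actual decoding, post-processing and resampling algorithms are outside the scope of this application.

```python
# Sketch of the audio DSP downlink chain for a call: decode, then
# post-process, then resample, as described above. The stages are stubs
# that tag each frame to make the processing order visible.
def audio_dsp_downlink(frames):
    stages = ["decode", "postprocess", "resample"]
    out = []
    for frame in frames:
        for stage in stages:
            frame = f"{stage}({frame})"
        out.append(frame)
    return out

print(audio_dsp_downlink(["frame0"]))
# -> ['resample(postprocess(decode(frame0)))']
```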

It should be understood that when the terminal device is in other scenes that require the sound output units to operate, the process is similar to the above and is not described in detail here.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.

The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute over the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

In the description of the present application, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; or as a direct connection, an indirect connection through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.

The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
