Regional voice communication method, device, storage medium and electronic equipment

Document No.: 427733    Publication date: 2021-12-24

Description: This technology, "Regional voice communication method, device, storage medium and electronic equipment", was designed and created by 李峰, 林婷婷 and 陈金霞 on 2021-09-24. Its main content is as follows: The present disclosure relates to the field of voice communications, and in particular, to a regional voice communication method, a regional voice communication apparatus, a storage medium, and an electronic device. The regional voice communication method comprises the following steps: acquiring voice information sent by a client; determining a target virtual character located in a listening area of the controlled virtual character based on the position information, in a game scene, of the controlled virtual character corresponding to the client; and forwarding the voice information to a target client corresponding to the target virtual character. The regional voice communication method provided by the present disclosure can solve the problem of synchronizing the voice information of a client to some of the other clients within the listening area.

1. A regional voice communication method, comprising:

acquiring voice information sent by a client;

determining a target virtual character located in a listening area of the controlled virtual character based on the position information of the controlled virtual character corresponding to the client in a game scene;

and forwarding the voice information to a target client corresponding to the target virtual role.

2. The regional voice communication method according to claim 1, wherein, before the acquiring of the voice information sent by the client, the method further comprises:

responding to a voice communication request with a voice area identifier sent by the client, and identifying a voice area corresponding to the voice area identifier;

and establishing voice communication connection between the voice area and the client according to a preset mode so as to acquire voice information sent by the client based on the voice communication connection.

3. The regional voice communication method according to claim 1, wherein the determining a target virtual character located in a listening area of the controlled virtual character based on the position information of the controlled virtual character corresponding to the client in the game scene comprises:

calculating the listening area of the controlled virtual character based on position information of the controlled virtual character in a game scene, hearing area parameters of the controlled virtual character, and map information of the game scene;

and marking the virtual character in the listening area as the target virtual character.

4. The regional voice communication method according to claim 1, wherein before forwarding the voice information to the target client corresponding to the target virtual character, the method further comprises:

acquiring a listening character upper limit value of the target virtual character; and

calculating the number of pre-received controlled virtual characters corresponding to the voice information pre-received by the target client;

when the number of the pre-received controlled virtual characters is greater than the listening character upper limit value, determining whether the voice information needs to be forwarded;

and when the voice information needs to be forwarded, forwarding the voice information to the target client corresponding to the target virtual character.

5. The regional voice communication method according to claim 4, wherein the determining whether the voice information needs to be forwarded comprises:

calculating a weight value of each pre-received controlled virtual character relative to the target virtual character; wherein the weight value comprises any one or a combination of a volume weight of the voice information, a continuity weight of the voice information, and a distance weight of the pre-received controlled virtual character relative to the target virtual character;

sorting the pre-received controlled virtual characters based on the weight values, and selecting, according to the sorting result, a number of pre-received controlled virtual characters equal to the listening character upper limit value as controlled virtual characters to be received;

and when the pre-received controlled virtual character is one of the controlled virtual characters to be received, determining that the voice information needs to be forwarded.

6. The regional voice communication method according to claim 1, further comprising:

receiving a position synchronization request of the controlled virtual role sent by the client;

and updating the position information of the controlled virtual character in the game scene based on the position synchronization request.

7. The regional voice communication method according to claim 1, further comprising:

receiving a hearing area parameter of the pre-configured controlled virtual character sent by the client to determine the target virtual character based on the hearing area parameter.

8. The regional voice communication method according to claim 2, wherein, before the responding to the voice communication request with the voice area identifier sent by the client, the method further comprises:

responding to a login request which is sent by the client and is based on the controlled virtual character, and identifying a voice scene corresponding to the controlled virtual character;

and returning the voice area identifier corresponding to the voice scene to the client so that the client generates the voice communication request based on the voice area identifier.

9. The regional voice communication method according to claim 2, wherein, before the responding to the voice communication request with the voice area identifier sent by the client, the method further comprises:

responding to a login request which is sent by the client through a game server acting as a proxy and is based on the controlled virtual character, and identifying a voice scene corresponding to the controlled virtual character;

and returning the voice area identifier corresponding to the voice scene to the game server, so that the game server synchronizes the voice area identifier to the client and the client generates the voice communication request based on the voice area identifier.

10. The regional voice communication method according to claim 8 or 9, wherein the returning the voice zone identifier corresponding to the voice scene comprises:

querying whether a target scene matched with the voice scene exists in a voice scene instance database;

when the target scene exists, configuring the voice area identifier corresponding to the target scene as the voice area identifier corresponding to the voice scene for returning;

and when the target scene does not exist, allocating a voice area for the voice scene, and configuring a voice area identifier corresponding to the allocated voice area as a voice area identifier corresponding to the voice scene for returning.

11. The regional voice communication method according to claim 10, wherein, when a plurality of voice areas are included, the allocating a voice area for the voice scene comprises:

acquiring load information of each voice area;

allocating the voice scene to the voice area with the lightest load based on the load information.

12. A regional voice communication method, comprising:

sending first voice information to a voice server, so that the voice server determines a first target virtual character located in a listening area of a controlled virtual character based on position information of the controlled virtual character in a game scene, and forwards the first voice information to a first client corresponding to the first target virtual character; and

receiving second voice information of a second client corresponding to a second target virtual role forwarded by the voice server; wherein the second target avatar is determined based on location information of the controlled avatar of the second client in a game scene.

13. The regional voice communication method of claim 12, further comprising:

sending a voice communication request with a voice area identifier to the voice server so that the voice server establishes voice communication connection between a voice area corresponding to the voice area identifier and the client in a preset mode;

and sending the first voice information and receiving the second voice information by utilizing the voice communication request.

14. The regional voice communication method of claim 12, further comprising:

generating a location synchronization request based on the location information of the controlled virtual character;

and sending the position synchronization request to the voice server so that the voice server updates the position information of the controlled virtual character in a game scene, and further determining the first target virtual character according to the position information.

15. The regional voice communication method of claim 12, further comprising:

configuring hearing zone parameters of the controlled virtual character;

sending the hearing area parameters to the voice server to cause the voice server to determine the first target avatar based on the hearing area parameters.

16. The regional voice communication method of claim 12, further comprising:

configuring a listening character upper limit value of the controlled virtual character;

and sending the listening character upper limit value to the voice server, so that the voice server determines, according to the listening character upper limit value, whether second voice information of a second client corresponding to the second target virtual character needs to be received.

17. The regional voice communication method according to claim 13, wherein before sending the voice communication request with the voice zone identification to the voice server, the method further comprises:

sending a login request based on the controlled virtual role to the voice server to acquire a voice area identifier of a voice scene corresponding to the controlled virtual role;

generating the voice communication request based on the voice zone identification.

18. The regional voice communication method according to claim 13, wherein before sending the voice communication request with the voice zone identification to the voice server, the method further comprises:

sending a login request based on the controlled virtual character to a game server to acquire a voice zone identifier of a voice scene corresponding to the controlled virtual character returned by the game server through the voice server;

generating the voice communication request based on the voice zone identification.

19. A regional voice communication apparatus, comprising:

the acquisition module is used for acquiring voice information sent by a client;

the computing module is used for determining a target virtual character in a listening area of the controlled virtual character based on the position information of the controlled virtual character corresponding to the client in a game scene;

and the forwarding module is used for forwarding the voice information to a target client corresponding to the target virtual role.

20. A regional voice communication apparatus, comprising:

a sending module, configured to send first voice information to a voice server, so that the voice server determines a first target virtual character located in a listening area of a controlled virtual character based on position information of the controlled virtual character in a game scene, and forwards the first voice information to a first client corresponding to the first target virtual character; and

a receiving module, configured to receive second voice information of a second client corresponding to a second target virtual character, forwarded by the voice server; wherein the second target virtual character is determined based on position information of the controlled virtual character of the second client in a game scene.

21. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, implements the regional voice communication method according to any one of claims 1 to 18.

22. An electronic device, comprising:

one or more processors;

storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the regional voice communication method of any of claims 1 to 18.

Technical Field

The present disclosure relates to the field of voice communications, and in particular, to a regional voice communication method, a regional voice communication apparatus, a storage medium, and an electronic device.

Background

The common voice service at present is real-time voice, in which everyone in a group of multiple persons can be heard by the other persons in the group. This is suitable for scenarios in which only a few people speak at the same time, such as team voice, or radio-style voice in which a few persons speak and many persons listen.

In some cases, however, a user is only interested in the sounds in his vicinity, and, relative to the entire scene, that vicinity is often only a small part of the whole. A new kind of service is therefore needed to provide scene voice, that is, a group of multiple persons in which only those nearby can hear one another. The real-time voice solution is clearly not suitable for scene voice.

It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.

Disclosure of Invention

The present disclosure is directed to providing a regional voice communication method, a regional voice communication apparatus, a storage medium, and an electronic device, and aims to solve the problem of synchronizing voice information of a client to other clients in a listening area.

Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.

According to an aspect of the embodiments of the present disclosure, there is provided a regional voice communication method, including: acquiring voice information sent by a client; determining a target virtual character located in a listening area of the controlled virtual character based on the position information of the controlled virtual character corresponding to the client in a game scene; and forwarding the voice information to a target client corresponding to the target virtual role.

According to some embodiments of the present disclosure, based on the foregoing solution, before the acquiring the voice information sent by the client, the method further includes: responding to a voice communication request with a voice area identification sent by a client, and identifying a voice area corresponding to the voice area identification; and establishing voice communication connection between the voice area and the client according to a preset mode so as to acquire voice information sent by the client based on the voice communication connection.

According to some embodiments of the present disclosure, based on the foregoing solution, the determining, based on the location information of the controlled virtual character corresponding to the client in the game scene, a target virtual character located in a listening area of the controlled virtual character includes: calculating the listening area of the controlled virtual character based on position information of the controlled virtual character in a game scene, hearing area parameters of the controlled virtual character, and map information of the game scene; and marking the virtual character in the listening area as the target virtual character.

According to some embodiments of the present disclosure, based on the foregoing scheme, before forwarding the voice information to the target client corresponding to the target virtual character, the method further includes: acquiring a listening character upper limit value of the target virtual character; calculating the number of pre-received controlled virtual characters corresponding to the voice information pre-received by the target client; when the number of the pre-received controlled virtual characters is greater than the listening character upper limit value, determining whether the voice information needs to be forwarded; and when the voice information needs to be forwarded, forwarding the voice information to the target client corresponding to the target virtual character.

According to some embodiments of the present disclosure, based on the foregoing scheme, the determining whether the voice information needs to be forwarded includes: calculating a weight value of each pre-received controlled virtual character relative to the target virtual character, wherein the weight value includes any one or a combination of a volume weight of the voice information, a continuity weight of the voice information, and a distance weight of the pre-received controlled virtual character relative to the target virtual character; sorting the pre-received controlled virtual characters based on the weight values, and selecting, according to the sorting result, a number of pre-received controlled virtual characters equal to the listening character upper limit value as controlled virtual characters to be received; and when the pre-received controlled virtual character is one of the controlled virtual characters to be received, determining that the voice information needs to be forwarded.

According to some embodiments of the present disclosure, based on the foregoing solution, the method further comprises: receiving a position synchronization request of the controlled virtual role sent by the client; and updating the position information of the controlled virtual character in the game scene based on the position synchronization request.

According to some embodiments of the present disclosure, based on the foregoing solution, the method further comprises: receiving a hearing area parameter of the pre-configured controlled virtual character sent by the client to determine the target virtual character based on the hearing area parameter.

According to some embodiments of the present disclosure, based on the foregoing scheme, before the responding to the voice communication request with the voice area identifier sent by the client, the method further includes: responding to a login request which is sent by the client and is based on the controlled virtual character, and identifying a voice scene corresponding to the controlled virtual character; and returning the voice area identifier corresponding to the voice scene to the client, so that the client generates the voice communication request based on the voice area identifier.

According to some embodiments of the present disclosure, based on the foregoing scheme, before the responding to the voice communication request with the voice area identifier sent by the client, the method further includes: responding to a login request which is sent by the client through a game server acting as a proxy and is based on the controlled virtual character, and identifying a voice scene corresponding to the controlled virtual character; and returning the voice area identifier corresponding to the voice scene to the game server, so that the game server synchronizes the voice area identifier to the client and the client generates the voice communication request based on the voice area identifier.

According to some embodiments of the present disclosure, based on the foregoing solution, the returning of the voice area identifier corresponding to the voice scene includes: querying whether a target scene matched with the voice scene exists in a voice scene instance database; when the target scene exists, using the voice area identifier corresponding to the target scene as the voice area identifier corresponding to the voice scene, to be returned; and when the target scene does not exist, allocating a voice area for the voice scene, and using the voice area identifier corresponding to the allocated voice area as the voice area identifier corresponding to the voice scene, to be returned.

According to some embodiments of the present disclosure, based on the foregoing scheme, when a plurality of the voice zones are included, the allocating a voice zone for the voice scene includes: acquiring load information of each voice area; allocating the voice scene to the voice region with the lightest load based on the load information.

According to a second aspect of the embodiments of the present disclosure, there is provided a regional voice communication method, including: sending first voice information to a voice server, so that the voice server determines a first target virtual character located in a listening area of a controlled virtual character based on position information of the controlled virtual character in a game scene, and forwards the first voice information to a first client corresponding to the first target virtual character; receiving second voice information of a second client corresponding to a second target virtual role forwarded by the voice server; wherein the second target avatar is determined based on location information of the controlled avatar of the second client in a game scene.

According to some embodiments of the present disclosure, based on the foregoing solution, the method further comprises: sending a voice communication request with a voice area identifier to the voice server so that the voice server establishes voice communication connection between a voice area corresponding to the voice area identifier and the client in a preset mode; and sending the first voice information and receiving the second voice information by utilizing the voice communication request.

According to some embodiments of the present disclosure, based on the foregoing solution, the method further comprises: generating a location synchronization request based on the location information of the controlled virtual character; and sending the position synchronization request to the voice server so that the voice server updates the position information of the controlled virtual character in a game scene, and further determining the first target virtual character according to the position information.

According to some embodiments of the present disclosure, based on the foregoing solution, the method further comprises: configuring hearing zone parameters of the controlled virtual character; sending the hearing area parameters to the voice server to cause the voice server to determine the first target avatar based on the hearing area parameters.

According to some embodiments of the present disclosure, based on the foregoing solution, the method further comprises: configuring a listening role upper limit value of the controlled virtual role; and sending the listening role upper limit value to the voice server so that the voice server judges whether second voice information of a second client corresponding to the second target virtual role needs to be received or not according to the listening role upper limit value.

According to some embodiments of the present disclosure, based on the foregoing scheme, before sending the voice communication request with the voice zone identification to the voice server, the method further includes: sending a login request based on the controlled virtual role to the voice server to acquire a voice area identifier of a voice scene corresponding to the controlled virtual role; generating the voice communication request based on the voice zone identification.

According to some embodiments of the present disclosure, based on the foregoing scheme, before sending the voice communication request with the voice zone identification to the voice server, the method further includes: sending a login request based on the controlled virtual character to a game server to acquire a voice zone identifier of a voice scene corresponding to the controlled virtual character returned by the game server through the voice server; generating the voice communication request based on the voice zone identification.

According to a third aspect of the embodiments of the present disclosure, there is provided a regional voice communication apparatus, including: the acquisition module is used for acquiring voice information sent by a client; the computing module is used for determining a target virtual character in a listening area of the controlled virtual character based on the position information of the controlled virtual character corresponding to the client in a game scene; and the forwarding module is used for forwarding the voice information to a target client corresponding to the target virtual role.

According to a fourth aspect of the embodiments of the present disclosure, there is provided a regional voice communication apparatus, including: a sending module, configured to send first voice information to a voice server, so that the voice server determines a first target virtual character located in a listening area of a controlled virtual character based on position information of the controlled virtual character in a game scene, and forwards the first voice information to a first client corresponding to the first target virtual character; and a receiving module, configured to receive second voice information of a second client corresponding to a second target virtual character, forwarded by the voice server; wherein the second target virtual character is determined based on position information of the controlled virtual character of the second client in a game scene.

According to a fifth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the regional voice communication method as in the above embodiments.

According to a sixth aspect of the embodiments of the present disclosure, there is provided an electronic device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the regional voice communication method as in the above embodiments.

Exemplary embodiments of the present disclosure may have some or all of the following benefits:

in the technical solutions provided in some embodiments of the present disclosure, after the voice information sent by a client is acquired, a target virtual character within the listening area of the controlled virtual character is determined according to the position information of the controlled virtual character corresponding to that client, and the voice information is then forwarded to the target client corresponding to the determined target virtual character. With the regional voice communication method of the present disclosure, the voice server only needs to forward the voice information of a client to the target clients corresponding to the other target virtual characters within the listening area of the controlled virtual character, and, correspondingly, a client only receives the voice information of the clients whose virtual characters are within the listening area of its controlled virtual character. At the same time, compared with having the voice server forward voice information according to the team information of the virtual characters, this provides an alternative solution in which voice information is forwarded to nearby characters according to the position information of the virtual characters, giving a game experience closer to real life.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:

FIG. 1 schematically illustrates a diagram of a virtual character listening area in an exemplary embodiment of the present disclosure;

FIG. 2 schematically illustrates a flow chart of a regional voice communication method in an exemplary embodiment of the present disclosure;

FIG. 3 schematically illustrates a flow chart of a regional voice communication method in an exemplary embodiment of the present disclosure;

FIG. 4 is a data interaction diagram schematically illustrating a regional voice communication method in an exemplary embodiment of the present disclosure;

fig. 5 schematically illustrates a composition diagram of a regional voice communication apparatus in an exemplary embodiment of the present disclosure;

fig. 6 schematically illustrates a composition diagram of a regional voice communication apparatus in an exemplary embodiment of the present disclosure;

FIG. 7 schematically illustrates a schematic diagram of a computer-readable storage medium in an exemplary embodiment of the disclosure;

fig. 8 schematically shows a structural diagram of a computer system of an electronic device in an exemplary embodiment of the disclosure.

Detailed Description

Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.

Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.

The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.

The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.

In some battle-royale games, when a user operates the virtual character corresponding to a client, a voice service is usually enabled through a microphone. Relative to the whole scene, the user is only interested in the sounds near the virtual character he operates, not the sounds of all users, and these nearby sounds are often only a small part of the sounds of the whole scene.

Therefore, in order to better provide voice services for games, the concept of an auditory AOI (Area Of Interest), namely the listening area of a virtual character, is introduced: voice messages only need to be synchronized to the virtual characters that can "hear" the voice.

Fig. 1 schematically illustrates a virtual character listening area in an exemplary embodiment of the present disclosure. As shown in fig. 1, a virtual scene is divided into a plurality of grid cells on which virtual characters A, B, C, X, Y and Z are distributed, where 101 is the AOI region of virtual character A, 102 is the AOI region of virtual character X, and 103 is the AOI region of virtual character Z.

For virtual character A, character A can only hear the sounds of characters B and C located within region 101. For virtual character X, character X can only hear the sound of character Y located within region 102. For virtual character Z, no other character exists within region 103, so character Z hears no one.
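Purely as an illustration of the grid-based AOI in fig. 1, the following Python sketch places the characters on grid coordinates and returns, for each listener, the speakers inside a square AOI centred on that listener. The coordinates, class and function names are hypothetical and only approximate the figure.

```python
from dataclasses import dataclass

@dataclass
class Character:
    name: str
    x: int  # grid column
    y: int  # grid row

def characters_in_aoi(listener: Character, others: list[Character], side: int) -> list[Character]:
    """Return the characters inside the listener's square AOI.

    `side` is the AOI side length in grid cells; the AOI is the
    side x side square of cells centred on the listener.
    """
    half = side // 2
    return [c for c in others
            if abs(c.x - listener.x) <= half and abs(c.y - listener.y) <= half]

# Rough stand-in for the fig. 1 layout: B and C near A, Y near X, Z alone.
chars = {n: Character(n, x, y) for n, x, y in [
    ("A", 2, 2), ("B", 3, 2), ("C", 2, 4),
    ("X", 10, 2), ("Y", 11, 3), ("Z", 10, 10),
]}
for name in ("A", "X", "Z"):
    listener = chars[name]
    others = [c for c in chars.values() if c.name != name]
    audible = characters_in_aoi(listener, others, side=5)
    print(name, "hears", [c.name for c in audible])  # A hears B, C; X hears Y; Z hears nobody
```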

Based on this concept, the present disclosure provides a regional voice communication method that forwards the voice information of a client to the clients corresponding to some of the virtual characters within the listening area, so that the voice information of a client can be heard only by the clients whose virtual characters are near the controlled virtual character of that client, thereby reducing voice traffic and reducing the voice broadcast load on the server.

Implementation details of the technical solution of the embodiments of the present disclosure are set forth in detail below.

Fig. 2 schematically illustrates a flow chart of a regional voice communication method in an exemplary embodiment of the present disclosure. As shown in fig. 2, the regional voice communication method includes steps S21 to S23:

step S21, acquiring voice information sent by the client;

step S22, determining a target virtual character in the listening area of the controlled virtual character based on the position information of the controlled virtual character corresponding to the client in the game scene;

step S23, forwarding the voice information to the target client corresponding to the target virtual role.

In the technical solutions provided in some embodiments of the present disclosure, after the voice information sent by a client is acquired, a target virtual character within the listening area of the controlled virtual character is determined according to the position information of the controlled virtual character corresponding to that client, and the voice information is then forwarded to the target client corresponding to the determined target virtual character. With the regional voice communication method of the present disclosure, the voice server only needs to forward the voice information of a client to the target clients corresponding to the other target virtual characters within the listening area of the controlled virtual character, and, correspondingly, a client only receives the voice information of the clients whose virtual characters are within the listening area of its controlled virtual character. At the same time, compared with having the voice server forward voice information according to the team information of the virtual characters, this provides an alternative solution in which voice information is forwarded to nearby characters according to the position information of the virtual characters, giving a game experience closer to real life.

Hereinafter, each step of the regional voice communication method in the present exemplary embodiment will be described in more detail with reference to the drawings and the examples.

In one embodiment of the present disclosure, the regional voice communication method of the above-described step S21 to step S23 is applied to a voice server.

In step S21, the voice information transmitted by the client is acquired.

Specifically, the voice server undertakes the process of receiving and forwarding the voice information of each virtual character in the game, so the voice server needs to receive the voice information sent by each client. The voice information sent by the client may be real-time voice information sent by a user operating the client through a microphone.

In step S22, a target virtual character located within the listening area of the controlled virtual character is determined based on the position information of the controlled virtual character corresponding to the client in the game scene.

In one embodiment of the present disclosure, a user operates a virtual character through a client, and the virtual character is located in a game scene. For convenience of explanation, the controlled virtual character refers to the virtual character corresponding to the client that sends the voice information, and the target virtual character refers to a virtual character corresponding to a client that receives the voice information. In different situations these roles can change: the virtual character corresponding to the same client may be the controlled virtual character at one moment and a target virtual character at another.

Referring to the virtual scene shown in fig. 1, virtual characters A, B, C, X, Y, Z included in the virtual scene may be controlled by different users through clients A, B, C, X, Y, Z.

A game scene is a virtual scene displayed when an application program runs on a terminal or a server. Optionally, the virtual scene is a simulated environment of the real world, a semi-simulated, semi-fictional virtual environment, or a purely fictional virtual environment. The virtual environment may be the sky, the land, the sea, and the like, where the land includes environment elements such as deserts and cities. The virtual scene may be a two-dimensional virtual scene or a three-dimensional virtual scene. For example, in a sandbox-type 3D shooting game, the virtual scene is a 3D game world in which players control virtual objects to play against each other, and an exemplary virtual scene may include at least one of: a mountain, flat ground, a river, a lake, an ocean, a desert, the sky, a plant, a building, and a vehicle. For a 2D card game, in which the virtual scene is a scene for displaying released cards or the virtual objects corresponding to cards, an exemplary virtual scene may include an arena, a battle field, or other "field" elements capable of displaying the state of a card battle. For a 2D or 3D multiplayer online tactical sports game, the virtual scene is a 2D or 3D terrain scene in which virtual objects fight against one another, and exemplary virtual scenes may include canyon-style elements such as mountains, lanes, rivers, classrooms, tables and chairs, and a podium.

A virtual object refers to a dynamic object that can be controlled in a virtual scene. Optionally, the dynamic object may be a virtual character, a virtual animal, an animation character, or the like. The virtual object may be a character controlled by a player through an input device, an Artificial Intelligence (AI) set up through training for a match in the virtual environment, or a Non-Player Character (NPC) set up in the virtual scene match. Optionally, the virtual object is a virtual character competing in the virtual scene. Optionally, the number of virtual objects in a virtual scene match is preset, or dynamically determined according to the number of clients participating in the match, which is not limited in the embodiments of the present application. In one possible implementation, the user can control the virtual object to move in the virtual scene, e.g., to run, jump, or crawl, and can also control the virtual object to fight against other virtual objects using skills, virtual props, and the like provided by the application.

Further, in an embodiment of the present disclosure, before the voice information sent by the client is acquired, a voice communication connection needs to be established as the basis for communication. Specifically, step S20, establishing the voice communication connection, includes the following steps:

step S201, responding to a voice communication request with a voice area identification sent by a client, and identifying a voice area corresponding to the voice area identification;

step S202, establishing voice communication connection between the voice area and the client according to a preset mode, so as to obtain voice information sent by the client based on the voice communication connection.

Specifically, the voice server is provided with AudioZones (voice areas), which manage the access of virtual characters to a voice scene, the forwarding of voice information, and the like. One voice area corresponds to one voice scene.

A voice scene is composed of a number of virtual characters in game scenes, and these virtual characters communicate by voice according to the principle of scene voice, that is, each character receives only the sound of nearby virtual characters, and its own sound is forwarded only to nearby virtual characters. Examples are the player virtual characters in a particular game scene, or the player virtual characters in a game scene during the same period of activity. Virtual characters in the same virtual scene may belong to different voice scenes.

In step S201, after receiving the voice communication request, it is first required to identify a voice zone identifier corresponding to the voice communication request, where the voice zone identifier specifically refers to an AudioZone address in the voice server.

Then, in step S202, the voice server identifies, according to the AudioZone address, the AudioZone voice area to be connected with the client, and establishes a voice communication connection between the client and the voice area corresponding to that address, so as to give the virtual character access to the voice scene corresponding to the voice area and thereby implement regional voice communication. The preset mode may be, for example, a connection established through a voice SDK.
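A minimal sketch of steps S201 and S202, assuming an in-memory registry keyed by the AudioZone address; the AudioZone class, the ZONES dictionary and the connect method are illustrative placeholders, and an actual deployment would establish the media channel through the voice SDK mentioned above rather than this stub.

```python
class AudioZone:
    """One voice area: manages the clients attached to one voice scene."""
    def __init__(self, zone_id: str):
        self.zone_id = zone_id
        self.clients: dict[str, object] = {}  # client_id -> connection handle

    def connect(self, client_id: str, connection: object) -> None:
        # In a real server this would hand the connection to the voice SDK;
        # here we only record it so that later forwarding knows where to send audio.
        self.clients[client_id] = connection

ZONES: dict[str, AudioZone] = {}  # AudioZone address -> zone instance

def handle_voice_communication_request(zone_id: str, client_id: str, connection: object) -> AudioZone:
    """Steps S201/S202: resolve the voice area identifier and attach the client."""
    zone = ZONES.get(zone_id)
    if zone is None:
        raise KeyError(f"unknown voice area identifier: {zone_id}")
    zone.connect(client_id, connection)
    return zone
```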

After the voice communication connection between the voice area of the voice server and the client is established, the target virtual character capable of receiving the voice information can be determined.

Further, in step S22, the determining, based on the location information of the controlled virtual character corresponding to the client in the game scene, a target virtual character located in the listening area of the controlled virtual character includes:

step S221, calculating the listening area of the controlled virtual character based on the position information of the controlled virtual character in a game scene, the hearing area parameter of the controlled virtual character and the map information of the game scene;

step S222, mark the virtual character in the listening area as the target virtual character.

Specifically, in step S221, the listening area of the controlled virtual character is first calculated. The calculation uses the position information of the controlled virtual character, the hearing area parameters of the controlled virtual character, and the map information of the game scene.

The position information of the controlled virtual character in the game scene specifically refers to the coordinate values of the virtual character in the virtual scene. For example, if the virtual scene is a 3D scene, the position is expressed by three-dimensional coordinates; if the virtual scene is a 2D scene, it is expressed by two-dimensional coordinates.

Further, in one embodiment of the present disclosure, the position information of the controlled virtual character may be updated by the client sending it to the voice server in real time after the voice communication connection is established. Accordingly, the method further comprises: receiving a position synchronization request, sent by the client, for the controlled virtual character; and updating the position information of the controlled virtual character in the game scene based on the position synchronization request.

After the client establishes the voice communication connection with the voice server, it can report the position information of its virtual character to the voice server at regular intervals, and the voice server updates the position information of each virtual character according to the position synchronization requests.
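Continuing the same illustrative server state, a position synchronization request might be handled roughly as follows; the request layout and the POSITIONS store are assumptions made for illustration, not the actual wire format.

```python
POSITIONS: dict[str, tuple[float, float]] = {}  # character id -> (x, y) in scene coordinates

def handle_position_sync(request: dict) -> None:
    """Update the stored scene position of the controlled virtual character.

    `request` is assumed to look like {"character_id": "...", "x": 12.0, "y": 34.5},
    reported periodically by the client once the voice connection is up.
    """
    POSITIONS[request["character_id"]] = (float(request["x"]), float(request["y"]))
```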

In addition, the hearing area parameters of a virtual character are parameters that characterize the hearing region of the virtual character. For example, as shown in fig. 1, the hearing area parameters may include a grid size and a hearing side length: if the parameter corresponding to region 101 of virtual character A is a 0.5 × 0.5 grid and the hearing side length is 5, region 101 is a 5 × 5 square of grid cells centered on the cell of virtual character A. Of course, the hearing area parameter may instead be a hearing radius, for example defining a circular area of that radius around the coordinates of virtual character A. The hearing area parameters can be customized as required, and the hearing area parameters of different virtual characters can be the same or different, set according to the specific scene and situation.

Further, in one embodiment of the present disclosure, the hearing zone parameters of the virtual character may be transmitted by the client after establishing the voice communication connection. Thus, the method further comprises: receiving a hearing area parameter of the pre-configured controlled virtual character sent by the client to determine the target virtual character based on the hearing area parameter.

Specifically, the hearing area parameters may be provided by the client; different clients correspond to different virtual characters, and the hearing area parameters may be set by the user according to requirements. The hearing area parameters may also be carried in the voice communication request of the client, that is, the hearing area parameters are extracted from the voice communication request sent by the client, and the listening area is then calculated and the target virtual characters within it are determined based on those parameters.

It should be noted that the description of the hearing region parameters in this disclosure is an exemplary explanation only and is not limiting of the disclosure. And the listening area determined based on the hearing area parameters is not limited to a two-dimensional scene but may be a three-dimensional scene.

Meanwhile, map information of a virtual scene where the virtual character is located, that is, environment content of the virtual scene, needs to be acquired. The map information may be transmitted by the client or acquired from the game server.

Therefore, after the position information of the controlled virtual character, the hearing area parameters, and the map information of the game scene are obtained, the controlled virtual character is marked in the virtual scene map according to its position information, its hearing range is delimited by the hearing area parameters, and its listening area in the virtual scene map is finally determined.
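As an exemplary, non-limiting sketch of steps S221 and S222 described above, the following Python code computes a square listening area from the character's coordinates, a grid size and a hearing side length, clips it to the map bounds, and marks every other character inside it as a target virtual character. The class names, the rectangle representation and the flat (x, y) coordinates are illustrative assumptions rather than the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class HearingParams:
    grid_size: float   # side length of one grid cell, e.g. 0.5
    hearing_side: int  # listening area side length in cells, e.g. 5

@dataclass
class Rect:
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def listening_area(pos: tuple[float, float], params: HearingParams, map_rect: Rect) -> Rect:
    """Square listening area centred on `pos`, clipped to the scene map (step S221)."""
    half = params.grid_size * params.hearing_side / 2
    return Rect(
        max(map_rect.x_min, pos[0] - half),
        max(map_rect.y_min, pos[1] - half),
        min(map_rect.x_max, pos[0] + half),
        min(map_rect.y_max, pos[1] + half),
    )

def mark_targets(speaker_id: str, positions: dict[str, tuple[float, float]],
                 params: HearingParams, map_rect: Rect) -> list[str]:
    """Return the ids of the virtual characters inside the speaker's listening area (step S222)."""
    area = listening_area(positions[speaker_id], params, map_rect)
    return [cid for cid, (x, y) in positions.items()
            if cid != speaker_id and area.contains(x, y)]
```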

In step S222, the virtual character in the listening area is marked as the target virtual character.

Specifically, after the listening area is determined, all virtual characters located within the listening area in the virtual scene map may be marked as target virtual characters, that is, virtual characters in the vicinity where the "sound" of the controlled character can be heard.

In other embodiments of the present disclosure, there may be a portion of the virtual characters that have access to the voice scene, and a portion that has no access to the voice scene, so the target virtual character may also be marked according to the voice scene in which the controlled virtual character is located.

Specifically, scene instances are stored in a Redis cluster in the voice server; each AudioZone voice area manages one voice scene, and the corresponding voice scene instance includes all virtual characters in that voice scene. After the listening area is calculated, the virtual characters to be marked as target virtual characters are selected according to the virtual character information in the voice scene.
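If only some of the virtual characters in a game scene have joined the voice scene, the candidates produced by the listening-area check above can additionally be intersected with the member list of the scene instance; the in-memory SCENE_MEMBERS dictionary below stands in for the Redis-stored instance and is purely illustrative.

```python
# Members of each voice scene, as kept in the scene instance (a plain dict stands
# in here for the Redis-backed instance described above).
SCENE_MEMBERS: dict[str, set[str]] = {"scene-castle": {"A", "B", "C", "X"}}

def targets_in_scene(scene_name: str, candidates: list[str]) -> list[str]:
    """Keep only the candidate characters that have joined the given voice scene."""
    members = SCENE_MEMBERS.get(scene_name, set())
    return [cid for cid in candidates if cid in members]
```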

In step S23, the voice information is forwarded to the target client corresponding to the target virtual role.

After the target virtual roles are determined according to step S22, the AudioZone voice zone in the voice server responsible for the client forwards the voice information of the client to the target clients corresponding to the target virtual roles.

Therefore, for each client, the voice server needs to determine the target virtual character of the controlled virtual character listening area corresponding to the client, and then performs voice forwarding to the target client corresponding to the target virtual character.

In an embodiment of the present disclosure, before forwarding the voice information to the target client corresponding to the target virtual role, the method further includes:

step S241, acquiring the listening character upper limit value of the target virtual character; and

step S242, calculating the number of pre-received controlled virtual characters corresponding to the voice information pre-received by the target client;

step S243, when the number of pre-received controlled virtual characters is greater than the listening character upper limit value, determining whether the voice information needs to be forwarded;

step S244, when the voice information needs to be forwarded, forwarding the voice information to the target client corresponding to the target virtual character.

Specifically, in order to give the user a good voice interaction experience during the game and to prevent the user from hearing too much voice information at once, an upper limit needs to be placed on the number of virtual characters the user can hear; that is, one virtual character can "hear" the sound of only a limited number of other virtual characters, so a screening step needs to be added before a client receives voice information, to ensure that the received voice does not exceed this limit. Meanwhile, in order to avoid the resources wasted by unnecessary voice forwarding, the forwarding to a target client can be filtered first: if the filtering is not passed, the forwarding is not performed.

In step S241, the listening character upper limit value "maxuser", that is, the maximum number of virtual characters whose voice information can be listened to within the listening area of a virtual character, may be obtained from the scene information of the voice scene corresponding to the AudioZone that manages the virtual character.

It should be noted that "the sound of a limited number of other virtual characters" refers to the target client that receives the voice information: the controlled virtual character emits real-time voice information, and the target virtual character receives it. It is therefore also necessary to calculate the number of virtual characters whose voice information the target client is about to receive.

In step S242, the number of pre-received controlled virtual characters corresponding to the voice information pre-received by the target client is calculated. Since the voice server is responsible for receiving and forwarding the voice information, it knows the voice information of how many controlled virtual characters needs to be forwarded to the target client, which gives the number of pre-received controlled virtual characters.

In step S243, when the number of pre-received controlled virtual characters is greater than the listening character upper limit value maxuser, it needs to be determined whether the voice information needs to be forwarded.

Further, the determining whether the voice information needs to be forwarded includes:

step S2431, calculating a weight value of each pre-received controlled virtual character relative to the target virtual character; wherein the weight value comprises any one or a combination of a volume weight of the voice information, a continuity weight of the voice information, and a distance weight of the pre-received controlled virtual character relative to the target virtual character;

step S2432, sorting the pre-received controlled virtual characters based on the weight values, and selecting, according to the sorting result, a number of pre-received controlled virtual characters equal to the listening character upper limit value as controlled virtual characters to be received;

step S2433, when the pre-received controlled virtual character is one of the controlled virtual characters to be received, determining that the voice information needs to be forwarded.

In step S2431, parameters such as the speaking volume, the speaking continuity, and the distance between speaker and listener can be combined in the decision; that is, a weight value of each pre-received controlled virtual character that is speaking is calculated with respect to the target virtual character. For example, the volume weight of the voice information: the higher the volume, the larger the volume weight. The continuity weight of the voice information: the more continuous the voice, the larger the weight, and if the voice is intermittent, the longer the intervals, the smaller the corresponding weight. The distance weight between the controlled virtual character and the target virtual character: the shorter the distance, the larger the weight, and the longer the distance, the smaller the weight.

It should be noted that the weight value of a pre-received controlled virtual character may be a combination of one or more of the volume weight, the continuity weight, and the distance weight. In the combination, the weights can simply be added, or different weighting coefficients can be assigned for a weighted calculation. In addition, other factors that influence voice interaction, such as the level of a virtual character, may also be considered when designing the weight value, which is not specifically limited in this disclosure.

In step S2432, the pre-received controlled virtual characters are sorted by weight value from large to small, and, starting from the largest weight value, a number of virtual characters equal to the listening character upper limit value maxuser are selected as the controlled virtual characters to be received, that is, those whose voice information will finally be received.

In step S2433, whether the voice information needs to be forwarded is determined based on the controlled virtual characters to be received. If a pre-received controlled virtual character is one of the controlled virtual characters to be received, its voice information needs to be forwarded; if it is not among the controlled virtual characters to be received for the target client, its voice information does not need to be forwarded.

Referring to fig. 1, if virtual character A is the target virtual character that is to receive voice information, the voices of both virtual characters B and C are audible to A. If the maxuser of the scene is 2, the voice server needs to forward the voice information of both B and C to A; if the maxuser of the scene is 1, it is necessary to calculate the weight values of B and C from their voice volume, speaking continuity, and distance from A, and then select the one of B and C with the higher weight value and forward only that character's voice information.
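The B and C example above can be written out as follows; the particular weight formula (volume plus continuity plus an inverse-distance term) and the field names are only one plausible instantiation of the weight combination described here, chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class PendingSpeaker:
    character_id: str
    volume: float        # e.g. normalised 0..1
    continuity: float    # e.g. fraction of recent frames that contain speech
    distance: float      # distance to the target character in scene units

def weight(s: PendingSpeaker) -> float:
    # Louder, more continuous and closer speakers get larger weights.
    return s.volume + s.continuity + 1.0 / (1.0 + s.distance)

def speakers_to_forward(pending: list[PendingSpeaker], max_user: int) -> set[str]:
    """Pick at most `max_user` pre-received speakers for one target character (steps S2432/S2433)."""
    if len(pending) <= max_user:
        return {s.character_id for s in pending}
    ranked = sorted(pending, key=weight, reverse=True)
    return {s.character_id for s in ranked[:max_user]}

# Fig. 1 style example: B and C both speak, but the target A may only hear one of them.
pending = [
    PendingSpeaker("B", volume=0.9, continuity=0.8, distance=1.0),
    PendingSpeaker("C", volume=0.5, continuity=0.6, distance=2.0),
]
print(speakers_to_forward(pending, max_user=1))  # {'B'} - B has the higher weight
```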

In one embodiment of the present disclosure, before the responding to the voice communication request with voice zone identification sent by the client, the method further includes: responding to a login request which is sent by the client and based on the controlled virtual role, and identifying a voice scene corresponding to the controlled virtual role; and returning the voice area identifier corresponding to the voice scene to the client so that the client generates the voice communication request based on the voice area identifier.

Specifically, the voice communication request sent by the client should include a voice area identifier to establish a communication connection with the voice area of the voice server, and the voice area identifier is allocated by the voice server, so that before the client establishes a voice communication connection with the voice server, the client needs to acquire the voice area identifier first, and then generate the voice communication request according to the voice area identifier.

When a user operates a client to enable a virtual character to enter a game scene, the client sends a login request of the controlled virtual character to a voice server. The login request may include a scene identifier of the voice scene in which the controlled virtual character is located, such as a scene name or a scene ID.

The voice server extracts the scene identifier from the login request of the controlled virtual character and identifies the voice scene corresponding to that scene identifier. The voice server is provided with AudioZone voice areas for managing voice scenes, so the voice area identifier corresponding to the voice scene, namely the AudioZone address, is determined according to the recognized voice scene and returned to the client.

In one embodiment of the present disclosure, before the responding to the voice communication request with the voice zone identification sent by the client, the method further includes: responding to a login request based on the controlled virtual character sent by the client through a game server agent, and identifying a voice scene corresponding to the controlled virtual character; and returning the voice area identifier corresponding to the voice scene to the game server, so that the game server synchronizes it to the client and the client generates the voice communication request based on the voice area identifier.

During actual game operation, a request sent directly by the client is not necessarily trusted by the voice server for information security reasons, so the process of sending the request and receiving the result can be completed through the game server acting as an agent.

Specifically, when the client initiates a login request for the controlled virtual character, it sends the login request to the game server. After verifying the client's information, the game server forwards the login request to the voice server as an agent, that is, the game server sends an http request to the voice server; the voice server responds to the login request and returns a voice area identifier to the game server, and the game server synchronizes the voice area identifier to the client.
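A minimal sketch of this agent step in Python is shown below, using the requests library; the endpoint path, field names, and response format are assumptions made only for illustration.

import requests  # third-party HTTP client used by the game server for the agent request

VOICE_SERVER_URL = "http://voice-server.example.com"  # placeholder address

def proxy_character_login(user_id: str, scene_name: str, hearing_radius: float) -> str:
    """Game server forwards the verified client's login request to the voice server
    and returns the voice area identifier (AudioZone address) to relay to the client."""
    resp = requests.post(
        f"{VOICE_SERVER_URL}/login",           # hypothetical endpoint
        json={
            "user_id": user_id,
            "zonename": scene_name,
            "hearing_radius": hearing_radius,  # hearing area parameter (assumed form)
        },
        timeout=3,
    )
    resp.raise_for_status()
    return resp.json()["audiozone_address"]    # hypothetical response field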

Further, the returning of the voice region identifier corresponding to the voice scene includes: querying whether a target scene matching the voice scene exists in a voice scene instance database; when the target scene exists, configuring the voice area identifier corresponding to the target scene as the voice area identifier to be returned for the voice scene; and when the target scene does not exist, allocating a voice area for the voice scene, and configuring the voice area identifier corresponding to the allocated voice area as the voice area identifier to be returned for the voice scene.

Specifically, a storage layer may be provided in the voice server for storing created voice scenes in a voice scene instance database. The voice scene instance database contains the related information of each voice scene, for example: the voice scene name "zonename", which needs to be unique and can be represented by a character string; the voice scene type "zonetype", which can be customized; the listening role upper limit value "maxuser", that is, the maximum number of virtual characters whose voice information can be listened to within a virtual character's listening area; the map information "maprange", in a format such as "0, 6400", consisting of the minimum map coordinate (x, y) and the maximum map coordinate (x, y) of the virtual scene where the voice scene is located; the client list "clients", that is, the list of clients corresponding to the virtual characters that have joined the voice scene through the join interface; and the server area "region", that is, the area where the game server is located, which is particularly important for servers deployed overseas.

After receiving a login request of a controlled virtual character, the voice server first queries the voice scene instance database, a Redis cluster in the storage layer, to determine whether the voice scene exists. If it exists, that is, the voice scene has already been created for a previous request, the virtual character is directly connected to the voice scene and the voice area identifier, namely the AudioZone address, is returned; if it does not exist, the information of the voice scene is acquired to create the voice scene, an AudioZone is allocated to it, and the address of the allocated AudioZone is finally returned.
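The lookup-or-create flow against the storage layer could look like the following Python sketch using the redis-py client. The key layout, hash fields, and helper names are assumptions; the fields mirror those described above.

import json
import redis  # redis-py client for the storage-layer Redis cluster

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_or_create_scene(zonename: str, zonetype: str, maxuser: int,
                        maprange: str, region: str) -> str:
    """Return the AudioZone address of a voice scene, creating the scene
    instance record first if it does not exist yet."""
    key = f"voicescene:{zonename}"  # hypothetical key layout
    scene = r.hgetall(key)
    if scene:
        # The scene was created for an earlier request; reuse its AudioZone address.
        return scene["audiozone"]

    # The scene does not exist: allocate an AudioZone (see the ZoneDir sketch below)
    # and store the new voice scene instance.
    audiozone = allocate_least_loaded_audiozone()
    r.hset(key, mapping={
        "zonename": zonename,
        "zonetype": zonetype,
        "maxuser": maxuser,
        "maprange": maprange,
        "clients": json.dumps([]),
        "region": region,
        "audiozone": audiozone,
    })
    return audiozone

def allocate_least_loaded_audiozone() -> str:
    """Placeholder; allocation by ZoneDir is sketched further below."""
    return "audiozone-1.example.com:9000"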

In an embodiment of the present disclosure, when a plurality of the voice zones are included, the allocating a voice zone for the voice scene includes: acquiring load information of each voice area; allocating the voice scene to the voice region with the lightest load based on the load information.

Specifically, the voice server may include a plurality of AudioZone voice zones. In order to balance the load pressure among the AudioZones of the voice server, a ZoneDir (voice zone directory) may be set in the voice server as a load balancing layer for monitoring, allocating, and controlling AudioZone load balancing.

Therefore, when a target scene matched with the voice scene does not exist in the voice scene instance database, the ZoneDir creates a scene in the AudioZone with the lightest load for the voice scene where the virtual role is located, acquires scene information of the voice scene, stores the scene information in the voice scene instance database, and simultaneously creates a mapping relationship between the voice scene and the AudioZone.
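The following Python sketch illustrates this allocation step; the load metric (here simply the number of hosted scene instances) and the data structures are assumptions.

from dataclasses import dataclass, field

@dataclass
class AudioZoneInfo:
    address: str
    scene_count: int = 0                    # simplified load metric (assumed)
    scenes: list = field(default_factory=list)

def allocate_scene(audiozones, zonename):
    """Create the new voice scene on the AudioZone with the lightest load
    and record the scene-to-zone mapping."""
    lightest = min(audiozones, key=lambda z: z.scene_count)
    lightest.scenes.append(zonename)
    lightest.scene_count += 1
    return lightest

zones = [AudioZoneInfo("zone-a:9000", 3), AudioZoneInfo("zone-b:9000", 1)]
print(allocate_scene(zones, "castle_town").address)  # prints zone-b:9000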

Based on this method, the virtual characters that can be heard within the AOI area of a virtual character are screened a second time, and only an appropriate subset of the sounds is retained and synchronized to the client corresponding to that virtual character, thereby improving voice quality and further improving the user's voice interaction experience.

Fig. 3 schematically illustrates a flow chart of a regional voice communication method in an exemplary embodiment of the present disclosure. As shown in fig. 3, the regional voice communication method includes steps S31 and S32, and is applied to the client, specifically as follows:

step S31, sending first voice information to a voice server, so that the voice server determines a first target virtual character located in a listening area of a controlled virtual character based on position information of the controlled virtual character in a game scene, and forwards the first voice information to a first client corresponding to the first target virtual character; and

step S32, receiving second voice information of the second client corresponding to the second target virtual role forwarded by the voice server; wherein the second target avatar is determined based on location information of the controlled avatar of the second client in a game scene.

For the client, on one hand, the first voice information of the client needs to be sent to the voice server so that the voice server forwards the first voice information, and on the other hand, the client also needs to receive the second voice information sent by the voice server.

In one embodiment of the present disclosure, the method further comprises: sending a voice communication request with a voice area identifier to the voice server so that the voice server establishes voice communication connection between a voice area corresponding to the voice area identifier and the client in a preset mode; and sending the first voice information and receiving the second voice information by utilizing the voice communication request.

That is, voice information can be transmitted and received over a communication connection only after the client establishes a voice communication connection with the corresponding voice zone of the voice server. Specifically, the preset mode may be a voice SDK connection.

In one embodiment of the present disclosure, the method further comprises: generating a location synchronization request based on the location information of the controlled virtual character; and sending the position synchronization request to the voice server so that the voice server updates the position information of the controlled virtual character in a game scene, and further determining the first target virtual character according to the position information.

Specifically, after the client establishes a voice communication connection with the voice server, the client may interact with the voice server through that connection. Since the voice server needs to determine the listening area of the controlled virtual character, and further the first target virtual character, according to the position information of the controlled virtual character, the client can report the position information of the virtual character to the voice server at regular intervals.
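A simple client-side sketch of such periodic reporting is shown below; the message format and the callback wiring are assumptions, since the actual transport is provided by the voice SDK.

import json
import threading

def start_position_sync(character_id, get_position, send, interval=1.0):
    """Periodically report the controlled virtual character's position to the
    voice server over the established voice communication connection.
    get_position returns the current (x, y); send transmits raw bytes."""
    def tick():
        x, y = get_position()
        message = json.dumps({"type": "position_sync",
                              "character_id": character_id,
                              "x": x, "y": y})
        send(message.encode("utf-8"))
        # Re-arm the timer so the position keeps being reported at a fixed interval.
        threading.Timer(interval, tick).start()
    tick()

# Example wiring with stand-in callbacks (printing instead of sending):
# start_position_sync("player-42", lambda: (120.0, 64.0), print, interval=2.0)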

In one embodiment of the present disclosure, the method further comprises: configuring hearing zone parameters of the controlled virtual character; sending the hearing area parameters to the voice server to cause the voice server to determine the first target avatar based on the hearing area parameters.

Since the voice server needs to determine the listening area of the controlled virtual character according to the hearing area parameter of the controlled virtual character, and further determines the first target virtual character, the client needs to send the hearing area parameter of the controlled virtual character to the voice server. The content of the hearing area parameters has been explained in detail before and is not described in further detail here.

In one embodiment of the present disclosure, the method further comprises: configuring a listening role upper limit value of the controlled virtual role; and sending the listening role upper limit value to the voice server so that the voice server judges whether second voice information of a second client corresponding to the second target virtual role needs to be received or not according to the listening role upper limit value.

Specifically, in order to ensure a good voice interaction experience for the user during the game and prevent the user from hearing too much voice information at once, an upper limit needs to be placed on the number of virtual characters the user can hear, so that the received voice does not exceed that limit. Meanwhile, in order to avoid the resource waste caused by unnecessary voice forwarding, the voice destined for the target client can be filtered before forwarding, and voice that does not pass the filtering is not forwarded.

Therefore, each client needs to configure the listening role upper limit value "maxuser" of its corresponding virtual character, that is, the maximum number of virtual characters whose voice information can be listened to within the virtual character's listening area. After configuration, the value can be sent to the voice server and stored in the AudioZone that manages the virtual character corresponding to that client.
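A minimal sketch of packaging this configuration on the client is given below; the message layout and the assumption that the hearing area parameter is a single radius are illustrative only.

import json

def build_listener_config(hearing_radius: float, maxuser: int) -> bytes:
    """Package the hearing area parameter and the listening role upper limit value
    so they can be sent to the voice server and stored in the AudioZone that
    manages this client's virtual character."""
    config = {
        "type": "listener_config",
        "hearing_radius": hearing_radius,  # hearing area parameter (assumed to be a radius)
        "maxuser": maxuser,                # hear at most this many characters at once
    }
    return json.dumps(config).encode("utf-8")

# Example: hear voices within 50 scene units, from at most 3 characters at a time.
payload = build_listener_config(hearing_radius=50.0, maxuser=3)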

In one embodiment of the present disclosure, before sending the voice communication request with the voice zone identification to the voice server, the method further comprises: sending a login request based on the controlled virtual role to the voice server to acquire a voice area identifier of a voice scene corresponding to the controlled virtual role; generating the voice communication request based on the voice zone identification.

Specifically, before sending a voice communication request to the voice server, the client first needs to obtain a voice area identifier returned by the voice server. Therefore, when the user operates the client to make the virtual character enter the game scene, the client can send the login request of the controlled virtual character to the voice server. The login request may include a scene identifier of the voice scene in which the virtual character is located, such as a scene name or a scene ID.

The client then receives the voice zone identifier returned by the voice server, namely the AudioZone address of the voice scene corresponding to the virtual character, generates the voice communication request based on the AudioZone address, and establishes a communication connection with the AudioZone voice zone corresponding to that address in the voice server.
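Sketched in Python, the step of turning the returned AudioZone address into a connection could look as follows; the wire format is an assumption, since a real client would go through the voice SDK rather than a raw socket.

import json
import socket

def connect_to_audiozone(audiozone_address: str, character_id: str) -> socket.socket:
    """Open a control connection to the returned AudioZone and send a voice
    communication request carrying the voice zone identifier."""
    host, port = audiozone_address.split(":")
    sock = socket.create_connection((host, int(port)), timeout=3)
    request = json.dumps({
        "type": "voice_communication_request",
        "audiozone": audiozone_address,   # the voice zone identifier
        "character_id": character_id,
    })
    sock.sendall(request.encode("utf-8") + b"\n")
    return sock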

In one embodiment of the present disclosure, before sending the voice communication request with the voice zone identification to the voice server, the method further comprises: sending a login request based on the controlled virtual character to a game server to acquire the voice zone identifier of the voice scene corresponding to the controlled virtual character, which the game server obtains from the voice server and returns; generating the voice communication request based on the voice zone identification.

During actual game operation, a request sent directly by the client is not necessarily trusted by the voice server for information security reasons, so the process of sending the request and receiving the result can be completed through the game server acting as an agent.

The process is specifically as follows: when the client needs to make a login request for the controlled virtual character, it first sends the login request to the game server; after verifying the client's information, the game server transmits the login request to the voice server as an agent; the voice server responds to the login request and returns the voice area identifier to the game server; and the game server synchronizes the voice area identifier to the client.

In one embodiment of the present disclosure, a voice server may be structurally composed of three parts: a core layer, a load balancing layer, and a storage layer.

The core layer is composed of a plurality of AudioZone services, and each AudioZone service manages a plurality of voice scene instances and the users in those voice scenes. The AudioZone runs several cooperating threads, including the following (a simplified skeleton is sketched after this list):

(1) main thread: manages the TCP connections of clients, processes client login verification, and receives position synchronization messages;

(2) Qnet thread: mainly responsible for receiving and sending voice packets over UDP (User Datagram Protocol), implemented with a multi-threaded structure;

(3) scene management thread: manages a plurality of voice scene instances within the process, where each voice scene instance maintains the virtual scene map information of its voice scene and the position of each virtual character; it receives voice information from clients through Qnet and, after the AOI filtering and secondary screening of the scene instance, synchronizes the voice information to clients through Qnet;

(4) reporting thread: reports the heartbeats of the virtual characters and the information of the voice scene instances to ZoneDir at regular intervals.
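A simplified Python skeleton of this thread layout is shown below; it only illustrates how the threads are started and what each one listens on, with the login verification, AOI filtering, and ZoneDir protocol reduced to comments.

import socket
import threading
import time

class AudioZoneService:
    """Simplified skeleton of the thread layout described above."""

    def __init__(self, tcp_port=9000, udp_port=9001):
        self.tcp_port = tcp_port
        self.udp_port = udp_port
        self.scenes = {}  # zonename -> {character_id: (x, y)}, kept by scene management

    def main_thread(self):
        """Accepts client TCP connections for login verification and position sync."""
        srv = socket.create_server(("0.0.0.0", self.tcp_port))
        while True:
            conn, _addr = srv.accept()
            conn.close()  # placeholder: a real server would hand the connection off

    def qnet_thread(self):
        """Receives (and would send) voice packets over UDP."""
        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        udp.bind(("0.0.0.0", self.udp_port))
        while True:
            packet, _addr = udp.recvfrom(4096)
            # placeholder: hand the packet to the scene management logic for
            # AOI filtering and secondary screening, then forward via UDP

    def reporting_thread(self):
        """Reports heartbeats and scene instance info to ZoneDir at regular intervals."""
        while True:
            time.sleep(5)
            # placeholder: send {"scene_count": len(self.scenes)} to ZoneDir

    def start(self):
        for target in (self.main_thread, self.qnet_thread, self.reporting_thread):
            threading.Thread(target=target, daemon=True).start()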

The load balancing layer is ZoneDir. It periodically detects the load of each AudioZone and, for an AudioZone whose load exceeds a set threshold, sends a migration instruction so that part of its voice scene instances are migrated to other AudioZone services, ensuring that the server can provide sufficient capacity and achieving dynamic load balancing.

Meanwhile, after detecting that an AudioZone service has gone offline, ZoneDir reallocates the scenes on that AudioZone to normal AudioZones to provide a stable and available scene voice service.

In addition, ZoneDir selects a voice scene server with a lower load for each newly created scene so as to balance the load, and provides a server address query function for existing scene instances.
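The migration decision itself can be sketched as follows in Python; the load metric and threshold are placeholders, and in the real system the migration would be an instruction sent to the overloaded AudioZone rather than an in-memory move.

from dataclasses import dataclass, field

@dataclass
class ZoneLoad:
    address: str
    scenes: list = field(default_factory=list)

    @property
    def load(self):
        return len(self.scenes)  # simplified load metric (assumed)

def rebalance(zones, threshold):
    """Move scene instances off any AudioZone whose load exceeds the threshold
    onto the currently lightest-loaded AudioZone."""
    for zone in zones:
        while zone.load > threshold:
            lightest = min(zones, key=lambda z: z.load)
            if lightest is zone:
                break  # nowhere better to move a scene
            lightest.scenes.append(zone.scenes.pop())

zones = [ZoneLoad("zone-a", ["s1", "s2", "s3", "s4"]), ZoneLoad("zone-b", ["s5"])]
rebalance(zones, threshold=3)
print([(z.address, z.load) for z in zones])  # [('zone-a', 3), ('zone-b', 2)]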

The storage layer is a Redis cluster that stores the information of the voice scene instances in the voice system, for example, the voice scene name "zonename", the voice scene type "zonetype", the listening role upper limit value "maxuser", the map information "maprange", the client list "clients", the server area "region", and the like.

Fig. 4 schematically illustrates a data interaction diagram of a regional voice communication method in an exemplary embodiment of the present disclosure. As shown in fig. 4, the specific process of the regional voice communication method is as follows:

step S401, a client sends a role login request to a game server to request to access a virtual role corresponding to the client to a voice scene;

step S402, the game server sends an http request to ZoneDir, wherein the request comprises information such as hearing area parameters, voice scene names and user IDs of the client;

step S403, the voice server determines an AudioZone address: it searches for the voice scene information according to the voice scene name in the request, creates the scene in the AudioZone with the lightest load if the scene does not exist, and retrieves the AudioZone information of the scene if it exists;

step S404, the voice server returns the AudioZone address to the game server;

step S405, the game server synchronizes the AudioZone address to the client;

step S406, the client sends a voice communication request generated based on the AudioZone address to the voice server;

step S407, the voice server responds to the voice communication request, and the client is connected with the AudioZone by using the voice SDK;

step S408, when the client obtains voice information, it sends the voice information to the voice server;

step S409, when the voice server receives the voice information, it determines, through AOI filtering and secondary screening, the target virtual characters that need to hear the voice;

step S410, the voice server forwards the voice information to the target client corresponding to the target virtual role.

With this regional voice communication method, an auditory AOI is introduced and the sound is further filtered by combining parameters such as volume, distance, and speaking continuity before being synchronized to the client, which reduces the pressure on the server, reduces voice traffic, and improves voice quality. In addition, under high service load, scene instances can be migrated among servers in real time, achieving load balancing and providing a more stable voice service.

Fig. 5 schematically illustrates a composition diagram of a regional voice communication apparatus in an exemplary embodiment of the present disclosure, and as shown in fig. 5, the regional voice communication apparatus 500 may include an obtaining module 501, a calculating module 502, and a forwarding module 503. Wherein:

an obtaining module 501, configured to obtain voice information sent by a client;

a calculating module 502, configured to determine, based on location information of a controlled virtual character corresponding to the client in a game scene, a target virtual character located in a listening area of the controlled virtual character;

a forwarding module 503, configured to forward the voice information to a target client corresponding to the target virtual role.

According to an exemplary embodiment of the present disclosure, the regional voice communication apparatus 500 further includes a communication module (not shown in the figure) that responds to a voice communication request with a voice zone identifier sent by a client and identifies a voice zone corresponding to the voice zone identifier; and establishes a voice communication connection between the voice zone and the client according to a preset mode, so as to acquire voice information sent by the client based on the voice communication connection.

According to an exemplary embodiment of the present disclosure, the calculation module 502 is further configured to calculate the listening area of the controlled virtual character based on the position information of the controlled virtual character in a game scene, the hearing area parameters of the controlled virtual character, and the map information of the game scene; and marking the virtual character in the listening area as the target virtual character.

According to an exemplary embodiment of the present disclosure, the forwarding module 503 further includes a screening unit (not shown in the figure) for obtaining a listening role upper limit value of the target virtual role; calculating the number of pre-received controlled virtual roles corresponding to the voice information pre-received by the target client; when the number of the pre-received controlled virtual roles is larger than the upper limit value of the listening role, judging whether the voice information needs to be forwarded or not; and when the voice information needs to be forwarded, forwarding the voice information to the target client corresponding to the target virtual role.

According to an exemplary embodiment of the present disclosure, the screening unit is further configured to calculate a weight value of the pre-received controlled avatar relative to the target avatar; wherein the weight value comprises any one or a combination of volume weight, continuity weight and distance weight of the pre-receiving controlled virtual character relative to the target virtual character of the voice information; sorting the pre-received controlled virtual roles based on the weight values, and selecting the pre-received controlled virtual roles with the same number as the upper limit value of the listening role as the controlled virtual roles to be received according to the sorting result; and when the pre-received controlled virtual role is the controlled virtual role to be received, judging that the voice information needs to be forwarded.

According to an exemplary embodiment of the present disclosure, the regional voice communication apparatus 500 further includes a first receiving module (not shown in the figure) for receiving a location synchronization request of the controlled virtual character sent by the client; and updating the position information of the controlled virtual character in the game scene based on the position synchronization request.

According to an exemplary embodiment of the present disclosure, the regional voice communication apparatus 500 further includes a second receiving module (not shown in the figure) for receiving the hearing region parameters of the pre-configured controlled virtual character sent by the client, so as to determine the target virtual character based on the hearing region parameters.

According to an exemplary embodiment of the disclosure, the communication module is configured to identify a voice scene corresponding to the controlled virtual role in response to a login request sent by the client based on the controlled virtual role; and returning the voice area identifier corresponding to the voice scene to the client so that the client generates the voice communication request based on the voice area identifier.

According to an exemplary embodiment of the disclosure, the communication module is further configured to identify a voice scene corresponding to the controlled virtual character in response to a login request based on the controlled virtual character, which is sent by the client through a game server agent; and returning the voice area identifier corresponding to the voice scene to the game server so that the game server is synchronized to the client and the client generates the voice communication request based on the voice area identifier.

According to an exemplary embodiment of the present disclosure, the communication module is further configured to query a voice scene instance database whether a target scene matching the voice scene exists; when the target scene exists, configuring the voice area identifier corresponding to the target scene as the voice area identifier corresponding to the voice scene for returning; and when the target scene does not exist, allocating a voice area for the voice scene, and configuring a voice area identifier corresponding to the allocated voice area as a voice area identifier corresponding to the voice scene for returning.

According to an exemplary embodiment of the present disclosure, the communication module is further configured to obtain load information of each of the speech regions; allocating the voice scene to the voice region with the lightest load based on the load information.

The specific details of each module in the regional voice communication apparatus 500 are already described in detail in the corresponding regional voice communication method, and therefore are not described herein again.

Fig. 6 schematically illustrates a composition diagram of a regional voice communication apparatus in an exemplary embodiment of the present disclosure, and as shown in fig. 6, the regional voice communication apparatus 600 may include a transmitting module 601 and a receiving module 602. Wherein:

a sending module 601, configured to send first voice information to a voice server, so that the voice server determines a first target virtual character located in a listening area of a controlled virtual character based on position information of the controlled virtual character in a game scene, and forwards the first voice information to a first client corresponding to the first target virtual character; and

a receiving module 602, configured to receive second voice information of a second client corresponding to a second target virtual role forwarded by the voice server; wherein the second target avatar is determined based on location information of the controlled avatar of the second client in a game scene.

According to an exemplary embodiment of the present disclosure, the regional voice communication apparatus 600 further includes a first transmitting module (not shown in the figure) for generating a location synchronization request based on the location information of the controlled virtual character; and sending the position synchronization request to the voice server so that the voice server updates the position information of the controlled virtual character in a game scene, and further determining the first target virtual character according to the position information.

According to an exemplary embodiment of the present disclosure, the regional voice communication apparatus 600 further includes a second transmitting module (not shown in the figure) for configuring hearing region parameters of the controlled virtual character; sending the hearing area parameters to the voice server to cause the voice server to determine the first target avatar based on the hearing area parameters.

According to an exemplary embodiment of the present disclosure, the regional voice communication apparatus 600 further includes a third transmitting module (not shown in the figure) for configuring a listening character upper limit value of the controlled virtual character; and sending the listening role upper limit value to the voice server so that the voice server judges whether second voice information of a second client corresponding to the second target virtual role needs to be received or not according to the listening role upper limit value.

According to an exemplary embodiment of the present disclosure, the regional voice communication apparatus 600 further includes a first generating module (not shown in the figure) configured to send a login request based on the controlled virtual character to a voice server before sending a voice communication request with a voice zone identifier to the voice server to obtain a voice zone identifier of a voice scene corresponding to the controlled virtual character; generating the voice communication request based on the voice zone identification.

According to an exemplary embodiment of the present disclosure, the regional voice communication apparatus 600 further includes a second generating module (not shown in the figure), configured to send a login request based on the controlled virtual character to a game server before sending a voice communication request with a voice zone identifier to the voice server, so as to obtain the voice zone identifier of the voice scene corresponding to the controlled virtual character returned by the game server through the voice server; generating the voice communication request based on the voice zone identification.

The specific details of each module in the regional voice communication apparatus 600 have been described in detail in the corresponding regional voice communication method, and therefore are not described herein again.

It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.

In an exemplary embodiment of the present disclosure, there is also provided a storage medium capable of implementing the above-described method. Fig. 7 schematically illustrates a schematic diagram of a computer-readable storage medium in an exemplary embodiment of the disclosure. As shown in fig. 7, a program product 700 for implementing the above method according to an embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a cell phone. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided. Fig. 8 schematically shows a structural diagram of a computer system of an electronic device in an exemplary embodiment of the disclosure.

It should be noted that the computer system 800 of the electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiments of the present disclosure.

As shown in fig. 8, a computer system 800 includes a Central Processing Unit (CPU)801 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for system operation are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An Input/Output (I/O) interface 805 is also connected to bus 804.

The following components are connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, and the like; an output section 807 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 808 including a hard disk and the like; and a communication section 809 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as necessary, so that a computer program read out therefrom is mounted on the storage section 808 as necessary.

In particular, the processes described below with reference to the flowcharts may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. When the computer program is executed by a Central Processing Unit (CPU)801, various functions defined in the system of the present disclosure are executed.

It should be noted that the computer readable medium shown in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.

As another aspect, the present disclosure also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.

Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains.

It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
