Method and device for processing view mode in game, storage medium and terminal equipment

Document No.: 866371  Publication date: 2021-03-19  Views: 10  Original language: Chinese

Reading note: this technique, "Method and device for processing view mode in game, storage medium and terminal equipment", was designed and created by 张耀中 (Zhang Yaozhong) and 曹伟刚 (Cao Weigang) on 2020-12-15. Its main content is as follows. The embodiment of the application discloses a method and a device for processing a view mode in a game, a storage medium, and a terminal device. The method includes: in response to a view mode configuration trigger request, displaying on the graphical user interface a view mode configuration interface corresponding to a game work, the view mode configuration interface containing a plurality of map view options; in response to configuration editing information input on the game setting interface, configuring the view mode of the game work to obtain a configured target view mode; sending the game work configured with the target view mode to a server, so that the server publishes the game work; and, when the game work configured with the target view mode is run, acquiring view data of virtual characters in the game work and generating and displaying a target view according to the view data and the target view mode. Multiple view modes can be configured flexibly and freely during game creation, and the related configuration can be changed dynamically while the game is running.

1. A method for processing a view mode in a game, applied to a terminal device, wherein an application for game creation is deployed on the terminal device and the terminal device provides a graphical user interface when running the application, the method comprising:

in response to a view mode configuration trigger request, triggering display, on the graphical user interface, of a view mode configuration interface corresponding to a game work, wherein the view mode configuration interface contains a plurality of map view options;

in response to configuration editing information input on the game setting interface, configuring the view mode of the game work to obtain a configured target view mode;

sending the game work configured with the target view mode to a server, so as to publish the game work configured with the target view mode through the server; and

when the game work configured with the target view mode is run, acquiring view data of virtual characters in the game work, and generating and displaying a target view according to the view data and the target view mode.

2. The in-game view mode processing method according to claim 1, wherein the configuration editing information includes a selection operation, and configuring the view mode of the game work in response to the configuration editing information input on the game setting interface to obtain the configured target view mode comprises:

in response to a selection operation on one or more map view options on the game setting interface, configuring the view mode of the game work to obtain the configured target view mode.

3. The in-game view mode processing method according to claim 2, wherein the configuration editing information further includes an attribute editing operation, and configuring the view mode of the game work in response to the configuration editing information input on the game setting interface to obtain the configured target view mode further comprises:

in response to an attribute editing operation performed on the game setting interface, configuring attribute information of the target view mode.

4. The in-game view mode processing method according to claim 3, wherein the attribute information of the target view mode includes at least one of:

view blocking by terrain;

view blocking by decorations;

visibility of decorations;

field-of-view range of units;

fog density; and

fog density of the minimap.

5. The in-game view mode processing method according to claim 2, wherein the configuration editing information further includes an editing operation of view change trigger information, and configuring the view mode of the game work in response to the configuration editing information input on the game setting interface to obtain the configured target view mode further comprises:

in response to an editing operation of view change trigger information performed on the game setting interface, configuring the view change trigger information of the target view mode.

6. The in-game view mode processing method according to claim 2, wherein the configuration editing information includes configuration information of a view mode switching instruction, and configuring the view mode of the game work in response to the configuration editing information input on the game setting interface to obtain the configured target view mode further comprises:

in response to the configuration information of the view mode switching instruction input on the game setting interface, configuring the view mode switching instruction of the game work.

7. The in-game view mode processing method according to claim 1, wherein acquiring the view data of the virtual characters in the game work, and generating and displaying the target view according to the view data and the target view mode, further comprises:

when the target view mode is the terrain fog mode, generating and displaying a target view according to acquired historical view data of the virtual characters of the player's own camp in the game work, wherein the target view has terrain fog.

8. The in-game view mode processing method according to claim 1, wherein acquiring the view data of the virtual characters in the game work, and generating and displaying the target view according to the view data and the target view mode, further comprises:

when the target view mode is the view fog mode, generating and displaying a target view according to acquired historical view data of the virtual characters of the player's own camp in the game work and current view data of the virtual character controlled by the user, wherein the target view has view fog.

9. The in-game view mode processing method according to claim 1, wherein acquiring the view data of the virtual characters in the game work, and generating and displaying the target view according to the view data and the target view mode, further comprises:

when the target view mode is the double-layer fog mode, generating and displaying a target view according to acquired historical view data of the virtual characters of the player's own camp in the game work and current view data of the virtual character controlled by the user, wherein the target view has both view fog and terrain fog.

10. The in-game view mode processing method according to claim 8 or 9, wherein, when generating and displaying the target view according to the acquired historical view data of the virtual characters of the player's own camp in the game work and the current view data of the virtual character controlled by the user, the method further comprises:

performing speckle filtering on the target view before displaying the target view.

11. The in-game view mode processing method according to claim 10, wherein performing speckle filtering on the target view comprises:

acquiring a fog mask texture color value of a previous frame and a fog mask texture color value of a current frame of the target view; and

performing speckle filtering on the target view according to the previous-frame fog mask texture color value, the current-frame fog mask texture color value, and a preset target difference value.
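The two-step speckle filtering described in claim 11 can be sketched as follows. This is a minimal illustration only: the per-cell fog mask color values in [0, 1], the grid representation, and the threshold value are all assumptions, not details from the source.

```python
def speckle_filter(prev_mask, curr_mask, target_diff=0.5):
    """Filter speckles by comparing fog mask values across frames.

    prev_mask / curr_mask: 2D lists of per-cell fog mask color values
    in [0, 1]. target_diff: preset target difference value (hypothetical
    default). A cell whose value jumps by more than target_diff in a
    single frame is treated as a speckle and kept at its previous value.
    """
    filtered = []
    for prev_row, curr_row in zip(prev_mask, curr_mask):
        row = []
        for p, c in zip(prev_row, curr_row):
            # Suppress abrupt single-frame changes; pass gradual ones through.
            row.append(p if abs(c - p) > target_diff else c)
        filtered.append(row)
    return filtered
```

Under these assumptions, an abrupt jump from 0.0 to 1.0 is held back while a small change from 0.0 to 0.2 passes through unchanged.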

12. The in-game visual field pattern processing method according to any one of claims 7 to 9, the method further comprising:

when the game works configured with the target view mode are operated, the current view data of the virtual character of the local camp in the game works are acquired, and the historical view data of the virtual character of the local camp is updated according to the current view data of the virtual character of the local camp.

13. The in-game view mode processing method according to any one of claims 7 to 9, further comprising: when the target map is generated, filtering the fog density of the target map according to a preset filtering rule, wherein the preset filtering rule includes: if the fog density in the target view is smaller than a first threshold, reducing the fog density of the target map to a first density value; and if the fog density in the target view is larger than a second threshold, increasing the fog density of the target map to a second density value.
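The preset filtering rule of claim 13 amounts to snapping out-of-range fog densities to fixed values. A minimal sketch follows; the concrete threshold and density values are illustrative assumptions, since the source does not specify them:

```python
def filter_fog_density(density, first_threshold=0.1, second_threshold=0.9,
                       first_value=0.0, second_value=1.0):
    """Apply the preset filtering rule: densities below the first
    threshold are reduced to the first density value, densities above
    the second threshold are increased to the second density value,
    and densities in between are left unchanged. All numeric defaults
    are hypothetical."""
    if density < first_threshold:
        return first_value
    if density > second_threshold:
        return second_value
    return density
```

This keeps nearly clear cells fully clear and nearly opaque cells fully fogged, avoiding faint residual fog artifacts.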

14. The in-game view mode processing method according to claim 1, further comprising:

when the game work configured with the target view mode is run, acquiring current view data of the virtual characters of the player's own camp in the game work, and updating the historical view data of the virtual characters of the player's own camp according to the current view data.

15. The in-game view mode processing method according to claim 1, further comprising:

when the game work configured with the target view mode is run and matching view change trigger information is acquired, triggering a dynamic change of the view mode of the game work, so as to generate and display a target view whose view changes dynamically.

16. The in-game view mode processing method according to claim 1, further comprising:

when the game work configured with the target view mode is run and a view mode switching instruction input by a user is acquired, switching the view mode of the game work according to the view mode switching instruction.

17. A device for processing a view mode in a game, applied to a terminal device, wherein an application for game creation is deployed on the terminal device and the terminal device provides a graphical user interface when running the application, the device comprising:

a display module, configured to trigger, in response to a view mode configuration trigger request, the graphical user interface to display a view mode configuration interface corresponding to a game work, wherein the view mode configuration interface contains a plurality of map view options;

a configuration module, configured to configure the view mode of the game work in response to configuration editing information input on the game setting interface, so as to obtain a configured target view mode;

a sending module, configured to send the game work configured with the target view mode to a server, so as to publish the game work configured with the target view mode through the server; and

a processing module, configured to acquire view data of virtual characters in the game work when the game work configured with the target view mode is run, and to generate and display a target view according to the view data and the target view mode.

18. A computer-readable storage medium, storing a computer program adapted to be loaded by a processor to perform the steps of the in-game view mode processing method according to any one of claims 1 to 16.

19. A terminal device, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor performs the steps of the in-game view mode processing method according to any one of claims 1 to 16 by calling the computer program stored in the memory.

Technical Field

The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for processing a view mode in a game, a storage medium, and a terminal device.

Background

Fog of war is a mechanism used in strategy games to create tactical unpredictability between the two sides: because neither side has full intelligence about the battlefield, the distribution and activity of the enemy cannot be confirmed in most areas outside one's own. Fog of war has become an indispensable part of such games and can greatly enhance their exploratory and strategic nature. Common fog of war falls into two categories: terrain fog and view fog. Terrain fog covers a designated area of the game scene with a layer of black fog, making the terrain in that area invisible and indicating that it has not yet been explored. View fog covers the area outside the field of view of the player's own units with a layer of light fog, indicating that there is currently no view of that area. In the current game market, the fog configuration mode is limited, and the view mode cannot be configured flexibly and freely.
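The two fog categories above can be illustrated with a small sketch that classifies map cells by exploration and visibility state. The grid-cell/set representation and the function name are assumptions for illustration only:

```python
def fog_layers(all_cells, explored, visible):
    """Classify map grid cells under the two common fog types.

    Cells never explored are covered by terrain fog (black fog);
    explored cells outside the current field of view carry view fog
    (light fog); currently visible cells have no fog at all.
    Each argument is a set of (x, y) grid cells.
    """
    terrain_fog = all_cells - explored   # unexplored: black fog
    view_fog = explored - visible        # explored but out of view: light fog
    clear = visible & all_cells          # in view: no fog
    return terrain_fog, view_fog, clear
```

A cell thus moves from terrain fog to view fog the first time a friendly unit sees it, and between view fog and clear as units move.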

Disclosure of Invention

The embodiments of the present application provide a method, a device, a storage medium, and a terminal device for processing a view mode in a game, which can flexibly and freely configure multiple view modes during game creation and can dynamically change the related configuration of the view mode while the game is running.

An embodiment of the present application provides a method for processing a view mode in a game, applied to a terminal device, wherein an application for game creation is deployed on the terminal device and the terminal device provides a graphical user interface when running the application, the method including the following steps:

in response to a view mode configuration trigger request, triggering display, on the graphical user interface, of a view mode configuration interface corresponding to a game work, wherein the view mode configuration interface contains a plurality of map view options;

in response to configuration editing information input on the game setting interface, configuring the view mode of the game work to obtain a configured target view mode;

sending the game work configured with the target view mode to a server, so as to publish the game work configured with the target view mode through the server; and

when the game work configured with the target view mode is run, acquiring view data of virtual characters in the game work, and generating and displaying a target view according to the view data and the target view mode.

An embodiment of the present application further provides a device for processing a view mode in a game, applied to a terminal device, wherein an application for game creation is deployed on the terminal device and the terminal device provides a graphical user interface when running the application, the device including:

a display module, configured to trigger, in response to a view mode configuration trigger request, the graphical user interface to display a view mode configuration interface corresponding to a game work, wherein the view mode configuration interface contains a plurality of map view options;

a configuration module, configured to configure the view mode of the game work in response to configuration editing information input on the game setting interface, so as to obtain a configured target view mode;

a sending module, configured to send the game work configured with the target view mode to a server, so as to publish the game work configured with the target view mode through the server; and

a processing module, configured to acquire view data of virtual characters in the game work when the game work configured with the target view mode is run, and to generate and display a target view according to the view data and the target view mode.

The present application further provides a computer-readable storage medium, where a computer program is stored, where the computer program is suitable for being loaded by a processor to execute the steps in the method for processing the view mode in the game according to any of the above embodiments.

The embodiment of the present application further provides a terminal device, where the terminal device includes a memory and a processor, where the memory stores a computer program, and the processor executes the steps in the method for processing the in-game view mode according to any of the above embodiments by calling the computer program stored in the memory.

According to the method, device, storage medium, and terminal device for processing a view mode in a game provided by the embodiments of the present application, a graphical user interface is provided through an application deployed on the terminal device for game creation. First, in response to a view mode configuration trigger request, a view mode configuration interface corresponding to the game work is displayed on the graphical user interface, the view mode configuration interface containing a plurality of map view options. Then, in response to configuration editing information input on the game setting interface, the view mode of the game work is configured to obtain a configured target view mode. The game work configured with the target view mode is sent to a server, so that the server publishes it. When the game work configured with the target view mode is run, view data of virtual characters in the game work is acquired, and a target view is generated and displayed according to the view data and the target view mode. The embodiments of the present application can flexibly and freely configure multiple view modes during game creation and can dynamically change the related configuration of the view mode while the game is running.

Drawings

In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.

Fig. 1 is a schematic structural diagram of a system for processing a view mode in a game according to an embodiment of the present application.

Fig. 2 is a flowchart illustrating a method for processing a view mode in a game according to an embodiment of the present application.

Fig. 3 is a schematic view of a first application scenario of a method for processing a view mode in a game according to an embodiment of the present application.

Fig. 4 is a schematic view of a second application scenario of the in-game view mode processing method according to the embodiment of the present application.

Fig. 5 is a schematic view of a third application scenario of a method for processing a view mode in a game according to an embodiment of the present application.

Fig. 6 is a schematic view of a fourth application scenario of a method for processing a view mode in a game according to an embodiment of the present application.

Fig. 7 is a schematic view of a fifth application scenario of a method for processing a view mode in a game according to an embodiment of the present application.

Fig. 8 is a schematic view of a sixth application scenario of a method for processing a view mode in a game according to an embodiment of the present application.

Fig. 9 is a schematic view of a seventh application scenario of a method for processing a view mode in a game according to an embodiment of the present application.

Fig. 10 is a schematic view of an eighth application scenario of a method for processing a view mode in a game according to an embodiment of the present application.

Fig. 11 is a schematic view of a ninth application scenario of a method for processing a view mode in a game according to an embodiment of the present application.

Fig. 12 is a schematic view of a tenth application scenario of a method for processing a view mode in a game according to an embodiment of the present application.

Fig. 13 is a schematic structural diagram of a view mode processing device in a game according to an embodiment of the present application.

Fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

Currently available applications can provide game creation services. In these applications, a player can act as an author, designing and publishing games using the editing tools provided by the application. An editing tool is a tool set with which players can exercise their imagination, modify an original game design, reconstruct the game environment, and create new gameplay. Other players can then play the published games in the application. For example, the application may be a battle platform, and the editing tool an editor built into the battle platform. In the editor, a player without any programming background can freely edit the dungeons, gameplay, levels, plot, and so on of a designated mobile game through interface-based tools. In the embodiments of the present application, an author can flexibly and freely configure a view mode for a created game work, and the related configuration of the view mode can be changed dynamically during the game.

The editor may further include a trigger. A trigger is a game logic editing tool used to manage game rules, game logic, game flow, and the like; with triggers, a user can write against interfaces pre-built into the game, following certain program logic, so as to customize the game logic. A trigger consists of three parts: events, conditions, and actions. When any event of the trigger fires and its conditions are satisfied, the corresponding actions are executed in sequence according to a preset order. Events determine when a trigger executes. Conditions select whether an execution condition is attached to the trigger; the trigger's actions are executed on the event only when the conditions are satisfied. Actions represent the list of operations executed when an event fires and the conditions are met.
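The event/condition/action structure described above can be sketched as follows. The class, method, event, and action names are illustrative assumptions, not the editor's actual API:

```python
class Trigger:
    """Minimal sketch of an event/condition/action trigger."""

    def __init__(self, event, conditions, actions):
        self.event = event            # event name that fires the trigger
        self.conditions = conditions  # predicates that must all hold
        self.actions = actions        # actions executed in preset order

    def fire(self, event, context):
        """Run the actions in order if the event matches and all
        conditions are satisfied; otherwise do nothing."""
        if event != self.event:
            return []
        if not all(cond(context) for cond in self.conditions):
            return []
        return [action(context) for action in self.actions]


# Hypothetical usage: switch on terrain fog when a unit of the player's
# own camp enters a region.
fog_trigger = Trigger(
    event="unit_enters_region",
    conditions=[lambda ctx: ctx["camp"] == "own"],
    actions=[lambda ctx: "enable_terrain_fog"],
)
```

This mirrors the description: the event decides when the trigger runs, the conditions gate it, and the actions are executed in their preset order.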

Referring to fig. 1, fig. 1 is a schematic diagram illustrating the architecture of a system for processing a view mode in a game according to an embodiment of the present application. The system includes a terminal device 100 and a server 200. The terminal device 100 may be a smartphone, a tablet computer, a notebook computer, a touch screen, a game machine, a Personal Computer (PC), a Personal Digital Assistant (PDA), a smart wearable device, or the like, which is not limited here. The server 200 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network services, and big data and artificial intelligence platforms. An application for game creation is deployed on the terminal device 100. A user can open the application, create a game work through the interface editing tools in the application, and send the created game work to the server 200 over the network; the server 200 publishes the game work to the game hall of the application, so that other players can log in to the application and play the published game work.

In addition, a view mode configuration interface is also provided in the application. A user can enter the view mode configuration interface to freely configure a view mode for a game work; the terminal device 100 sends the configured game work to the server 200, and the server 200 publishes the game work to the game hall for the user and other players to view, monitors the in-game player data, updates the historical view data, and refreshes the view map.
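The update of historical view data mentioned above can be sketched as a simple accumulation of the cells currently seen by the player's own camp. The set-of-cells representation and the function name are assumptions for illustration:

```python
def update_historical_view(historical, current):
    """Merge the currently visible cells of the player's own camp into
    the accumulated historical view data. Both arguments are sets of
    (x, y) grid cells; the result is the updated historical view."""
    return historical | current
```

Cells accumulate monotonically, which is why terrain fog, once lifted from an explored area, stays lifted.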

Referring to figs. 2 to 12, fig. 2 is a schematic flowchart of a method for processing a view mode in a game according to an embodiment of the present application, and figs. 3 to 12 show application scenarios of the method. The method is applied to a terminal device on which an application for game creation is deployed; the terminal device provides a graphical user interface when running the application. The specific flow of the method may be as follows:

Step 101: in response to a view mode configuration trigger request, triggering the graphical user interface to display a view mode configuration interface corresponding to the game work, wherein the view mode configuration interface contains a plurality of map view options.

For example, the terminal device provides a graphical user interface when running the application. An editor for game creation may be provided on the graphical user interface for the user to carry out game creation. First, in response to a work creation request input by the user, a game work is created on the graphical user interface and a setting interface for the newly created game work is displayed; the user can perform related settings for the game work on this setting interface. For example, the setting interface may include a scene editing interface, a trigger logic editing interface, and other setting interfaces. The scene editing interface may include editing interfaces for terrain, decorations, combat units (combat characters), functional areas, camera shots, and the like. Other setting interfaces may include corresponding interfaces for custom properties, running, testing, game settings, skill settings, and the like.

A corresponding setting interface is provided in the editor for the user to select a view mode, set related attributes of the view mode, and so on, so that the user can configure the view mode quickly; this configuration approach is static configuration. Specifically, in response to a view mode configuration trigger request, a view mode configuration interface corresponding to the game work is displayed on the graphical user interface, and the view mode configuration interface contains a plurality of map view options, so that view mode selection and configuration are provided on the map view configuration interface of the editor.

For example, the view mode configuration trigger request may be triggered by voice, a key press, a control click, or text input, which is not limited here. For example, a trigger control for view mode configuration may be provided on the graphical user interface; when the user clicks the trigger control, the terminal device receives the view mode configuration trigger request corresponding to the trigger control and displays the view mode configuration interface according to the request.

Step 102: in response to configuration editing information input on the game setting interface, configuring the view mode of the game work to obtain a configured target view mode.

The configuration editing information includes a selection operation, and configuring the view mode of the game work in response to the configuration editing information input on the game setting interface to obtain the configured target view mode includes:

in response to a selection operation on one or more map view options on the game setting interface, configuring the view mode of the game work to obtain the configured target view mode.

For example, as shown in fig. 3, the map view configuration interface includes a plurality of map view options, such as a view-all-open option, a view fog option, and a terrain fog option. If the view-all-open option is switched on, players of other teams can be seen directly on the minimap. If the view fog option is switched on, a view fog effect appears outside the player's own field of view, and the field of view is refreshed dynamically. If the terrain fog option is switched on, the terrain is covered by a layer of black fog, and only the area explored by the player's own team shows the real view. By selecting map view options on the map view configuration interface, the user can configure four view modes, to which the terminal device responds according to the user's selection operations: a view fog mode, a terrain fog mode, a double-layer fog mode, and a fog-free mode. For example, when the user checks only the view fog option, the terminal device, in response to the selection operation on the view fog option, configures the view mode of the game work as the view fog mode, that is, view fog only. When the user checks only the terrain fog option, the view mode of the game work is configured as the terrain fog mode, that is, terrain fog only.
When the user checks both the view fog option and the terrain fog option, the terminal device, in response to the selection operations on the two options, configures the view mode of the game work as the double-layer fog mode, that is, with both terrain fog and view fog. When the user checks only the view-all-open option, the terminal device, in response to the selection operation on the view-all-open option, configures the view mode of the game work as the fog-free mode. If the view-all-open option has been checked and the user then checks either the view fog option or the terrain fog option, the previously checked view-all-open option is automatically unchecked. Similarly, after either the view fog option or the terrain fog option has been checked, when the user checks the view-all-open option, the checked view fog or terrain fog option is automatically unchecked.
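The mapping from the three map view options to the four view modes described above can be sketched as follows. The function and mode names are illustrative assumptions; the mutual-exclusion behavior (checking a fog option unchecks view-all-open, and vice versa) means the first branch never coexists with the fog branches in practice:

```python
def resolve_view_mode(view_all_open, view_fog, terrain_fog):
    """Resolve the checked map view options into one of the four
    view modes: fog-free, view fog, terrain fog, or double-layer fog."""
    if view_all_open:
        return "fog_free"          # full map visible, no fog at all
    if view_fog and terrain_fog:
        return "double_layer_fog"  # both fog layers active
    if view_fog:
        return "view_fog"          # view fog only
    if terrain_fog:
        return "terrain_fog"       # terrain fog only
    return "fog_free"              # nothing checked
```

This makes the four-way configuration explicit: each combination of checkboxes yields exactly one target view mode.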

The field-of-view fog mode, the terrain fog mode, and the double-layer fog mode among the view modes may be collectively referred to as fog modes.
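As a minimal sketch (all names are illustrative, not from the source), the mapping from the three option states to the four view modes described above could look like:

```python
# Hypothetical helper: derive the view mode from the three map view options.
# The full-view option is mutually exclusive with the two fog options, so a
# true full_view flag always yields the fog-free mode.
def resolve_view_mode(full_view: bool, view_fog: bool, terrain_fog: bool) -> str:
    if full_view:
        return "fog_free"
    if view_fog and terrain_fog:
        return "double_layer_fog"
    if view_fog:
        return "view_fog"
    if terrain_fog:
        return "terrain_fog"
    return "fog_free"  # nothing checked: no fog is applied
```
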

Optionally, the configuration editing information further includes an attribute editing operation, and the step of configuring the view mode of the game work in response to the configuration editing information input on the game setting interface, to obtain the configured target view mode, further includes:

configuring attribute information of the target view mode in response to the attribute editing operation performed on the game setting interface.

An editing interface for fog-related attribute information may be provided on the graphical user interface. The player may set the field-of-view range for a certain unit type, set whether a specific decoration in the scene blocks the field of view, set the fog density, and the like.

Optionally, the attribute information of the target view mode includes at least one of:

view blocking by terrain;

view blocking by decorations;

visibility of decorations;

field-of-view range of units;

fog density;

fog density of the minimap.

For example, the attribute information of the target view mode may include information such as view blocking by terrain, view blocking by decorations, visibility of decorations, the field-of-view range of units, the fog density, and the fog density of the minimap.

For example, taking the attribute editing operation for view blocking by a decoration as an example: when the target view mode is a fog mode, that is, any one of the field-of-view fog mode, the terrain fog mode, and the double-layer fog mode, whether a decoration blocks the field of view in the fog mode can be set. A decoration editing interface may be provided on the graphical user interface, as shown in fig. 4; the decoration editing interface provides an editing area related to view blocking, and the editing area provides a view blocking option for the user to select whether the decoration, such as the tree (decoration) shown in fig. 3, blocks the field of view in the fog mode. When the user checks "block the field of view in fog mode", the terminal device receives an attribute editing operation about the decoration's view blocking and records the attribute information of the decoration as blocking the field of view in the fog mode. If the user does not check this option, the terminal device records that the decoration has no view-blocking attribute and does not block the field of view in the fog mode.

For example, taking the attribute editing operation for the field-of-view range of a unit as an example: when the target view mode is a fog mode, that is, any one of the field-of-view fog mode, the terrain fog mode, and the double-layer fog mode, the field-of-view range of a unit in the fog mode may be set. A combat-unit editing interface may be provided on the graphical user interface, as shown in fig. 5; the interface provides a unit field-of-view editing control, which may be an input box, an option box, or the like, for the user to set the field-of-view range of the unit. For example, when the user inputs the value 60 for the field-of-view range, the terminal device receives an attribute editing operation on the unit's field-of-view range and records the unit's field-of-view range in the fog mode as 60. Likewise, when the user inputs the value 25, the terminal device records the unit's field-of-view range in the fog mode as 25. Fig. 6 shows the target view of a unit with a field-of-view range of 60 when the game work configured with the fog mode is run, and fig. 7 shows the target view of a unit with a field-of-view range of 25.

For example, taking the attribute editing operation for the fog density as an example: the fog density may be set when the target view mode is the field-of-view fog mode or the double-layer fog mode. Specifically, a fog attribute editing interface may be provided on the graphical user interface; for example, it may be displayed when the field-of-view fog option is selected. The interface lets the user edit the fog density to meet the shading requirements of different games: the higher the density, the darker the fog in the scene and on the minimap, and at 100% density the field-of-view fog area is fully black. The control for editing the fog density may be a progress-bar control or an input-box control. With the progress bar, the user adjusts the bar so that its percentage represents the fog density, and the terminal device receives the attribute editing operation about the fog density, records the fog density value it contains, and sets the corresponding density from the adjusted percentage. With the input box, the user inputs characters representing the fog density value, and the terminal device sets the corresponding value from the input characters.
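As a trivial illustrative sketch (hypothetical names, not the editor's actual API), converting the progress-bar percentage into a fog density value could be:

```python
# Map the progress-bar percentage (0-100) to a fog density in [0.0, 1.0];
# at 100% the field-of-view fog area is fully black.
def density_from_progress(percent: int) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("progress percentage must be between 0 and 100")
    return percent / 100.0
```
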

Optionally, the configuration editing information further includes an editing operation of view change trigger information, and the step of configuring the view mode of the game work in response to the configuration editing information input on the game setting interface, to obtain the configured target view mode, further includes:

configuring the view change trigger information of the target view mode in response to the editing operation of the view change trigger information performed on the game setting interface.

For example, the view mode may change dynamically while the game work runs. For instance, only terrain fog is present when the game work first runs, and after players have explored the map, field-of-view fog is generated for the confrontation; or a player uses a certain prop to keep the map fully lit for 2 seconds, after which the fog state is suddenly restored. In such cases the view mode is not fixed: it changes dynamically from terrain fog to field-of-view fog, then to no fog, and finally back to field-of-view fog. Therefore, the view change trigger information of the target view mode needs to be configured through triggers, which provide the execution nodes and parameters of the game logic related to the view mode, so that the view mode can be configured dynamically while the game work runs.

The editing operation of the view change trigger information may be implemented by the user editing a trigger; a trigger-logic editing interface may be provided on the graphical user interface. A trigger may comprise three parts: events, conditions, and actions. When any event in the trigger fires and its conditions are satisfied, the corresponding actions are executed in the preset order. The events, conditions, and actions constitute the execution nodes of the trigger. For example, one trigger sets the execution nodes for terrain fog: the user inputs the execution nodes and parameters for terrain fog through the trigger, and the terrain fog is turned on or off when the event fires and the condition is met. As shown in fig. 8, the configured execution nodes are: within the first 10 minutes of the game, every 1 minute, the map is fully lit for 1 second. Specifically, the set event fires once every 60 seconds of game time, 10 times in total; the set actions are to turn off the terrain fog, wait 1 second, and then turn the terrain fog back on.
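The event/condition/action trigger of fig. 8 can be sketched as follows (a simulation with hypothetical names, not the editor's actual trigger API):

```python
# Trigger sketch: event = every 60 s of game time; condition = at most 10
# firings; actions = turn terrain fog off, wait 1 s, turn terrain fog on.
class TerrainFogTrigger:
    def __init__(self) -> None:
        self.fires = 0
        self.actions = []

    def on_game_time(self, elapsed_seconds: int) -> None:
        if elapsed_seconds % 60 == 0 and self.fires < 10:  # event + condition
            self.fires += 1
            # Actions are executed in the preset order.
            self.actions += ["terrain_fog_off", "wait_1s", "terrain_fog_on"]

trigger = TerrainFogTrigger()
for t in range(60, 720, 60):  # simulate the first 11 minutes of game time
    trigger.on_game_time(t)
print(trigger.fires)  # the trigger fires exactly 10 times, then stops
```
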

Wherein, the parameter related to the terrain fog can be historical visual field data.

For example, during the running of a game work, the user may input corresponding view change trigger information on the graphical user interface as needed, to implement game logic such as: turning terrain fog on/off, turning field-of-view fog on/off, setting the fog density, clearing the historical field of view, using the full field of view, and setting the field-of-view range of units. By setting view change trigger information through triggers, a highly configurable view mode can be realized in the built-in editor of the mobile game. After the user releases the game work, the user or other players can experience various fog-of-war gameplay in the game hall.

The configuration editing information may further include configuration information of a view mode switching instruction, and the step of configuring the view mode of the game work in response to the configuration editing information input on the game setting interface, to obtain the configured target view mode, further includes:

configuring the view mode switching instruction of the game work in response to the configuration information of the view mode switching instruction input on the game setting interface.

For example, when the user creating a game work allows other players and the user himself/herself to modify the effective view mode, corresponding view mode switching instructions are set in the editor, so that the view mode of the game work can be changed by inputting the corresponding instruction while the game work runs. For example, the numeral 1 is set as the switching instruction for the terrain fog mode; the numeral 2 for the field-of-view fog mode; the numeral 3 for the double-layer fog mode; and the numeral 4 for the fog-free mode.

Step 103, sending the game composition configured with the target view mode to a server, so as to distribute the game composition configured with the target view mode through the server.

Specifically, the user sends the game work with the configured view mode to the server, and the server publishes it to the game hall of the application, so that other players can log in to the application to play the published game work. The user who created the game work can also start and play the published game work from the game hall.

And 104, when the game works configured with the target view mode are operated, obtaining view data of virtual characters in the game works, and generating and displaying a target view according to the view data and the target view mode.

The terminal device may start and run the game work configured with the target view mode from the game hall. The running game work can be played by a plurality of players in a match, and the view data of all virtual characters in the game work can be acquired during the game. The view data may include the historical view data and the current view data of all virtual characters in the game work. The view data of each virtual character carries identification information indicating the camp to which the character belongs, so as to distinguish the view data of the own camp from that of other camps.

Optionally, the obtaining view data of the virtual character in the game composition, generating a target view according to the view data and the target view mode, and displaying the target view, further includes:

and when the target view mode is the terrain fog mode, generating and displaying a target view according to the acquired historical view data of the virtual characters of the own camp in the game work, wherein the target view has terrain fog.

Terrain fog is well suited to puzzle-solving and exploration gameplay maps. It refers to a process in which the map is initially completely black and unexplored, covered by a layer of black fog, and as units (virtual characters) with vision explore the map, the corresponding explored areas gradually brighten.

For example, an additional target storage area may be created in the terminal device or the server for recording the historical view data; places in the scene without historical view data are fog areas. As shown in fig. 9, the black portion represents areas that have not been explored (without historical view data), and the white portion represents explored areas (with historical view data). The terrain fog changes as follows: the initial state is completely black; the field of view of areas explored by the own camp is fully open; the white portion gradually enlarges as the explored area grows; and when all areas have been explored, the whole map is fully bright. The process is reversible: the map can change from fully bright back to fully black by running the trigger's logic node that deletes the historical view data of the designated camp, which resets the explored areas to the terrain fog state.
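A minimal sketch of this terrain-fog bookkeeping (the grid size and helper names are illustrative assumptions):

```python
# Cells without historical view data are fog (black); exploring marks them.
W = H = 8
explored = [[False] * W for _ in range(H)]  # 8x8 map, initially fully black

def explore(x: int, y: int, radius: int) -> None:
    # A unit with vision at (x, y) reveals the cells within a square radius.
    for j in range(max(0, y - radius), min(H, y + radius + 1)):
        for i in range(max(0, x - radius), min(W, x + radius + 1)):
            explored[j][i] = True

def reset_terrain_fog() -> None:
    # Trigger logic node: deleting the historical view data returns the
    # explored areas to the terrain fog state.
    for row in explored:
        for i in range(W):
            row[i] = False

explore(3, 3, 1)
print(sum(c for row in explored for c in row))  # 9 cells are now bright
```
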

Terrain fog differs from field-of-view fog in that explored areas remain fully bright at all times, and units of other camps within those areas are visible.

For example, when the game work configured with the target view mode is run and the target view mode is the terrain fog mode, the terminal device generates a target view with terrain fog according to the acquired historical view data of the virtual characters of the own camp in the game work, and displays the target view on the graphical user interface.

Optionally, the obtaining view data of the virtual character in the game composition, generating a target view according to the view data and the target view mode, and displaying the target view, further includes:

and when the target view mode is the field-of-view fog mode, generating and displaying a target view according to the acquired historical view data of the virtual characters of the own camp in the game work and the current view data of the virtual character controlled by the user, wherein the target view has field-of-view fog.

Field-of-view fog means that a layer of light fog covers everything outside the field-of-view range of the own camp's units; the fog indicates that an area currently has no view. For example, when the game work configured with the target view mode is run and the target view mode is the field-of-view fog mode, the terminal device generates a target view with field-of-view fog according to the acquired historical view data of the own camp's virtual characters in the game work and the current view data of the virtual character controlled by the user, and displays the target view on the graphical user interface.

Optionally, the obtaining view data of the virtual character in the game composition, generating a target view according to the view data and the target view mode, and displaying the target view, further includes:

and when the target view mode is the double-layer fog mode, generating and displaying a target view according to the acquired historical view data of the virtual characters of the own camp in the game work and the current view data of the virtual character controlled by the user, wherein the target view has both field-of-view fog and terrain fog.

For example, when both field-of-view fog and terrain fog are implemented, a double-layer fog, i.e., a fog mode in which both kinds of fog exist, still needs to be implemented. As shown in fig. 10, the double-layer fog is realized on the following principle:

M1: the historical view data of the virtual characters of the own camp; M1 consists of two parts of data, M0 and (M1 - M0);

M0: the current view data of the virtual character controlled by the user; M0 is the real field of view;

M1 - M0: view data already explored by the own camp; field-of-view fog is present, and the terrain is visible;

Full map - M1: view data not yet explored by the own camp; both field-of-view fog and terrain fog are present, and the area is fully black with nothing visible.

The texture formed from the M0 data is a normal texture, i.e., the real field of view. The texture formed from M1 - M0 is a semi-transparent texture, i.e., a pseudo field of view: the terrain is visible but units (virtual characters) are not. The areas other than the M1 data are fully black. After M1 - M0 is processed, M1 is updated to M1 | M0, i.e., a union is taken.

In the target view shown in fig. 11, there is a clearly bright circle around the player's unit, corresponding to the real field of view M0; around it is a darker patch of grass, corresponding to (M1 - M0), covered with a thin field-of-view fog; and there are large black areas, corresponding to (full map - M1), representing terrain fog areas that have not yet been explored. In this way, four fog states are achieved in the target view: field-of-view fog only, terrain fog, double-layer fog, and no fog.
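The M0/M1 decomposition above can be illustrated with cell sets (the toy 4x4 map and the sample cells are assumptions for demonstration):

```python
# Double-layer fog composition: real view (M0), pseudo view (M1 - M0),
# and fully black unexplored area (full map - M1).
full_map = {(x, y) for x in range(4) for y in range(4)}
M0 = {(0, 0), (0, 1)}          # current view of the user-controlled character
M1 = {(0, 0), (0, 1), (1, 0)}  # historical view data of the own camp

real_view = M0                 # normal texture: terrain and units visible
pseudo_view = M1 - M0          # semi-transparent texture: terrain only
black_area = full_map - M1     # terrain fog: fully black, nothing visible

M1 = M1 | M0                   # union: fold the current view into history
print(len(black_area))         # 13 unexplored cells remain black
```
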

In particular, because the real field of view is involved, field-of-view calculations are required. A copy of a memory area may additionally be created in the terminal device or the server to record the historical view data; its size is nx_ * ny_ bytes, and only a flag needs to be kept: if the historical view data is retained, the historical view data in the memory-area copy is not overwritten. Here, nx_ is the number of columns of the grid recording the historical view data, and ny_ is the number of rows, so the grid has nx_ * ny_ cells in total; since the historical view data of each view cell requires one byte, a total of nx_ * ny_ bytes is needed.

If historical view data is not recorded, the view data of the whole map is updated with the currently calculated static view data; if it is recorded, the data in the memory-area copy is not updated, i.e., the historical view data is retained.

When the static view data changes, it needs to be recalculated and written into the current view data with an OR operation. If historical view data is recorded, each cell of the current map's view data is traversed and updated with the historical view data.

In the synchronized version, the client of the terminal device does not calculate the view data of other camps; specifically, if the synchronized version has no terrain fog, the view data of other camps does not need to be calculated. The synchronized version is the build that needs to be uploaded to the server from the terminal device; for example, it is the game work configured with the target view mode according to the embodiment of the present application. If historical view data is added and the client's trigger supports switching camps, the synchronized client must also calculate the view data of other camps: the script is modified to add a judgment step, and if historical view data is calculated, camp shielding is not performed. Specifically, the data of all camps is taken out, view shielding is added for camps different from that of the client's user, and the view data of those camps is not calculated at the engine layer.

Optionally, the method further includes: when the game work configured with the target view mode is run, acquiring the current view data of the virtual characters of the own camp in the game work, and updating the historical view data of the own camp's virtual characters according to their current view data.

For example, when updating the historical view data of an editor map, the historical view data is updated with the current view data after the current view data has been calculated for each frame.

For example, as shown in fig. 12, when the target view mode is the double-layer fog mode, areas that the player's camp has explored but that are outside the current field of view are displayed as light fog (field-of-view fog) and denoted area A; the player's current field of view is denoted area B; and the remaining areas are denoted area C, which is displayed fully black and has terrain fog. The configured fog density only takes effect in area A; terrain and decorations in areas A and B are visible, and area C is fully black. A | B is the explored area.

Specifically, when realizing the fog effect of area A shown in fig. 11, since no real view calculation is involved and the effect is purely client-side presentation, only the fog texture is changed: the historical view area is calculated and written with the corresponding fog density, where the fog density of area A may range from 0 to 255, 0 representing fully bright and 255 fully black. When rendering the fog of area A, a fog texture color value is first calculated, which may be a preset fog density value; it is then judged whether the fog mode is enabled. If the fog mode is enabled and the current cell of area A is detected to have no historical view data, the color value of the cell is set to 0; if the current cell of area A is detected to have historical view data, its color value is set to the fog texture color value.
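The per-cell branch logic for area A described above might be sketched as follows (the function name and shape are hypothetical):

```python
# Fog value of an area-A cell: 0 means fully bright, 255 fully black.
def area_a_fog_value(fog_mode_on: bool, has_history: bool, fog_color: int) -> int:
    if not 0 <= fog_color <= 255:
        raise ValueError("fog color value must be in 0-255")
    if not fog_mode_on:
        return 0  # fog mode disabled: no fog is written
    # Cells with historical view data get the fog texture color value;
    # cells without historical view data are set to 0.
    return fog_color if has_history else 0
```
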

Optionally, when generating and displaying a target view according to the acquired historical view data of the virtual characters of the own camp in the game work and the current view data of the virtual character controlled by the user, the method further includes:

performing speckle filtering on the target view before displaying the target view.

Optionally, the performing speckle filtering on the target view includes:

acquiring the fog texture color value of the previous frame and the fog texture color value of the current frame of the target view;

and performing speckle filtering on the target view according to the previous frame's fog texture color value, the current frame's fog texture color value, and a preset target value difference.

In processing the fog texture, shader code (fow_blend) is generally used to blend the fog textures of the previous and current frames; for example, with a blend factor (blend_factor) of 0.05, the color value of the current frame is finally reached after iterating over n frames. However, precision issues in this process may produce fog-texture speckles, which cause the field-of-view fog to have inconsistent density and degrade the fog rendering. For example, a frame-capture tool may show, for the R channel, a color value of 0.961 in the field-of-view area and 0.035 in the speckle area (the historical view area). The speckle color value is approximately 1.0f minus the field-of-view color value, with an error of about 0.0039.

To eliminate the fog-texture speckles, speckle filtering is performed according to a pre-derived target value difference, for example 0.03921. Specifically, whether speckle filtering is enabled may first be detected. If speckle filtering is enabled, the fog texture color values of the previous and current frames are blended to obtain a first blended value, and the difference between the two frames' fog texture color values is calculated: if the difference is smaller than the target value difference, the output value is 0; if it is greater than or equal to the target value difference, the output value is the first blended value, and the fog texture color value is output. If speckle filtering is not enabled, the fog texture color values of the previous and current frames are blended according to the blend factor to obtain a second blended value, which is output as the fog texture color value.
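A sketch of the speckle-filtered blend (the blend factor 0.05 and the target value difference 0.03921 come from the text; the function shape and names are assumptions):

```python
BLEND_FACTOR = 0.05    # per-frame blend factor for the fog textures
TARGET_DIFF = 0.03921  # pre-derived target value difference

def blend_fog(prev: float, curr: float, speckle_filter: bool) -> float:
    mixed = prev + (curr - prev) * BLEND_FACTOR  # blend of the two frames
    if not speckle_filter:
        return mixed  # second blended value, output directly
    # With speckle filtering: frame-to-frame differences below the
    # threshold are treated as precision speckles and output as 0.
    return 0.0 if abs(curr - prev) < TARGET_DIFF else mixed
```
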

Optionally, the method further includes: when the target map is generated, filtering the fog density of the target map according to a preset filtering rule, the preset filtering rule including: if the fog density in the target view is smaller than a first threshold, reducing the fog density of the target map to a first density value; and if the fog density in the target view is larger than a second threshold, increasing the fog density of the target map to a second density value.

The editor further includes a shader, through which the fog density parameters are added. For example, parameters controlling the density filtering may be added by modifying the shader code (fow_blend). For a better effect, the minimap of the national-server version may be given some filtering so that its fog density does not look inconsistent with the scene density. Specifically, parameters need to be set separately for national-server matches and editor matches. For example, the acquired fog density is filtered according to the preset filtering rule with the following effect: if the density is less than 0.1f it is taken as 0.0f, and if it is greater than 0.6f it is expanded to 1.0f; that is, values below 0.1f and above 0.6f are discarded, and values between 0.1f and 0.6f are stretched to the range 0.0f to 1.0f. The final effect is that if the fog density is too low (less than 0.1f), the fog is simply treated as absent; if the density is greater than 0.6f, it is treated as fully dark, i.e., the density is full. For example, the preset filtering rule may be implemented in code.

for example, the parameter values can be found in the following table:

Parameter          National-server value    Editor value
fow_clamp_para0    0.1f                     0.0f
fow_clamp_para1    2.0f                     1.0f

Here, f denotes a floating-point number; fow_clamp_para0 and fow_clamp_para1 are the density-filtering values: with the values 0.1f and 2.0f respectively, the density filtering in the code is performed; with the values 0.0f and 1.0f, no density filtering is performed.
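The filtering code itself is elided in the text; a hypothetical reconstruction that is consistent with the described effect and the tabled parameters (output = clamp((density - fow_clamp_para0) * fow_clamp_para1) to [0, 1]) would be:

```python
def filter_fog_density(d: float, para0: float = 0.1, para1: float = 2.0) -> float:
    # National-server values: para0 = 0.1f, para1 = 2.0f (filtering applied).
    # Editor values: para0 = 0.0f, para1 = 1.0f (identity on [0, 1], no filtering).
    return min(1.0, max(0.0, (d - para0) * para1))

print(filter_fog_density(0.05))  # 0.0: density under 0.1f is treated as no fog
print(filter_fog_density(0.8))   # 1.0: density over 0.6f is treated as fully dark
```

Note that (0.6 - 0.1) * 2.0 = 1.0, so a density of 0.6f maps exactly to full darkness, matching the table.
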

Optionally, the method further includes: when the game work configured with the target view mode is run and matching view change trigger information is acquired, triggering the dynamic change of the view mode of the game work, to generate and display a target view whose field of view changes dynamically.

That is, the view change trigger information of the target view mode is configured in advance by editing triggers; then, while the game work runs, when information generated in the game work matches the pre-configured view change trigger information, the dynamic view change of the view mode is triggered.

Optionally, the method further includes: when the game work configured with the target view mode is run and a view mode switching instruction input by a user is acquired, switching the view mode of the game work according to the view mode switching instruction.

In the stage of creating the game work, corresponding view mode switching instructions are set in the editor, so that a user can change the view mode of the game work by inputting the corresponding instruction while the game work runs. For example, the numeral 1 is set as the switching instruction for the terrain fog mode; the numeral 2 for the field-of-view fog mode; the numeral 3 for the double-layer fog mode; and the numeral 4 for the fog-free mode. For example, a game work initially runs in the double-layer fog mode; during its running, when the view mode switching instruction 4 input by the user is acquired, the view mode of the game work is switched from the double-layer fog mode to the fog-free mode according to the instruction.
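The instruction-to-mode mapping above can be sketched as follows (names are illustrative; the behavior for unrecognized instructions is an assumption):

```python
# View mode switching instructions as described: 1-4 map to the four modes.
SWITCH_TABLE = {
    1: "terrain_fog",
    2: "view_fog",
    3: "double_layer_fog",
    4: "fog_free",
}

def switch_view_mode(current_mode: str, instruction: int) -> str:
    # An unrecognized instruction leaves the view mode unchanged.
    return SWITCH_TABLE.get(instruction, current_mode)

print(switch_view_mode("double_layer_fog", 4))  # fog_free
```
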

The embodiments of the present application can realize four view modes and fog-related functions in a game work. The four view modes include the field-of-view fog mode, the terrain fog mode, the double-layer fog mode, and the fog-free mode; the fog-related functions may include the overall fog density, the fog density of the minimap, the field-of-view range of units, the visibility and view blocking of scene decorations, view blocking by terrain, and the like. The editor provides an interface for selecting a view mode and setting the related functions, realizing static configuration of the fog mode, and the triggers in the editor provide the execution nodes and parameters of fog-related logic, so that the view mode can be configured dynamically while the game runs. According to the embodiments of the present application, a user can configure a high-degree-of-freedom view mode in the built-in editor of a mobile game, including freely selecting a view mode, configuring terrain fog, configuring the field of view of units, configuring view blocking by scene decorations, and dynamically changing the related configuration during the game.

All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.

According to the method for processing the visual field mode in the game, a graphical user interface is provided through an application which is deployed on terminal equipment and used for game creation, firstly, a visual field mode configuration triggering request is responded, a visual field mode configuration interface corresponding to the game work is triggered to be displayed on the graphical user interface, and the visual field mode configuration interface comprises a plurality of map visual field options; then responding to configuration editing information input on the game setting interface, and configuring the visual field mode of the game works to acquire a configured target visual field mode; sending the game work configured with the target view mode to a server so as to release the game work configured with the target view mode through the server; and when the game work configured with the target view mode is operated, obtaining view data of virtual characters in the game work, and generating and displaying a target view according to the view data and the target view mode. The embodiment of the application can flexibly and freely configure various visual field modes in the game creation process, and can dynamically change the relevant configuration of the visual field modes in the game running process.

In order to better implement the method for processing a view mode in a game according to the embodiments of the present application, the embodiments of the present application further provide a device for processing a view mode in a game. Referring to fig. 13, fig. 13 is a schematic structural diagram of a device for processing a view mode in a game according to an embodiment of the present application. The device is applied to a terminal device, the terminal device is deployed with an application for game creation, and the terminal device provides a graphical user interface when running the application. The in-game view mode processing device 300 includes:

the display module 301 is configured to trigger, in response to a view mode configuration trigger request, a view mode configuration interface corresponding to the game composition to be displayed on the graphical user interface, where the view mode configuration interface includes a plurality of map view options;

a configuration module 302, configured to configure a view mode of the game composition in response to configuration editing information input on the game setting interface, so as to obtain a configured target view mode;

a sending module 303, configured to send the game composition configured with the target view mode to a server, so as to publish, through the server, the game composition configured with the target view mode;

the processing module 304 is configured to, when the game composition configured with the target view mode is operated, acquire view data of a virtual character in the game composition, and generate and display a target view according to the view data and the target view mode.

Optionally, the configuration editing information includes a selection operation, and the configuration module 302 is configured to configure the view mode of the game composition in response to the selection operation of one or more view options acting on the game setting interface, so as to obtain a configured target view mode.

Optionally, the configuration editing information further includes a property editing operation, and the configuration module 302 is further configured to configure the property information of the target view mode in response to the property editing operation acting on the game setting interface.

Optionally, the attribute information of the target view mode includes at least one of:

view blocking of terrain;

view blocking of scene decorations;

visibility of scene decorations;

view range of units;

fog density;

fog density of the minimap.

Optionally, the configuration editing information further includes an editing operation of view change triggering information, and the configuration module 302 is further configured to configure the view change triggering information of the target view mode in response to the editing operation of the view change triggering information acting on the game setting interface.

Optionally, the configuration editing information includes configuration information of a view mode switching instruction, and the configuration module 302 is further configured to configure the view mode switching instruction of the game composition in response to the configuration information of the view mode switching instruction input on the game setting interface.

Optionally, the processing module 304 is further configured to, when the target view mode is the terrain fog mode, generate and display a target view according to the acquired historical view data of the virtual characters of the local camp in the game work, where the target view has terrain fog.

Optionally, the processing module 304 is further configured to, when the target view mode is the view fog mode, generate and display a target view according to the acquired historical view data of the virtual characters of the local camp in the game work and the current view data of the virtual character controlled by the user, where the target view has view fog.

Optionally, the processing module 304 is further configured to, when the target view mode is the double-layer fog mode, generate and display a target view according to the acquired historical view data of the virtual characters of the local camp in the game work and the current view data of the virtual character controlled by the user, where the target view has both view fog and terrain fog.
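One plausible semantics for how these modes combine the camp's historical view data (explored cells) with the controlled character's current view data (visible cells) into a per-cell fog state is sketched below; the function name and the exact rules are assumptions for illustration, not taken from the patent.

```python
def fog_state(mode, cell, explored, visible):
    """Return the fog state of one map cell: 'clear',
    'terrain_fog' (explored but not currently seen),
    or 'view_fog' (not currently seen / never explored)."""
    if mode == "fog_free":
        return "clear"
    if mode == "terrain_fog":
        # Only historical view data matters: explored cells stay clear.
        return "clear" if cell in explored else "terrain_fog"
    if mode == "view_fog":
        # Cells outside the current view are covered by view fog again.
        return "clear" if cell in visible else "view_fog"
    # Double-layer fog: both layers at once.
    if cell in visible:
        return "clear"
    return "terrain_fog" if cell in explored else "view_fog"
```

The target view is then generated by evaluating this per cell and rendering the corresponding fog layer over the map.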

Optionally, the processing module 304 is further configured to perform speckle filtering processing on the target view before displaying the target view.

Optionally, the processing module 304 is configured to perform speckle filtering processing on the target view, and specifically includes:

acquiring the fog mask texture color value of the previous frame and the fog mask texture color value of the current frame of the target view;

and performing speckle filtering processing on the target view according to the fog mask texture color value of the previous frame, the fog mask texture color value of the current frame, and a preset target difference value.
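The speckle filtering step can be read as limiting how far each fog-mask texel may change between the previous frame and the current frame, so that isolated flickering speckles are smoothed out. The sketch below assumes this interpretation; the function name and the threshold parameter are hypothetical.

```python
def filter_fog_mask(prev, curr, max_delta=0.1):
    """Clamp each texel's frame-to-frame change to +/- max_delta.

    prev, curr: per-texel fog mask color values of the previous and
    current frame; max_delta: the preset target difference value.
    """
    out = []
    for p, c in zip(prev, curr):
        delta = max(-max_delta, min(max_delta, c - p))
        out.append(p + delta)
    return out
```

A texel that suddenly jumps from fully fogged to fully clear is thus eased over several frames instead of popping in a single frame.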

Optionally, the processing module 304 is further configured to, when the game work configured with the target view mode is operated, obtain current view data of a virtual character of the local camp in the game work, and update historical view data of the virtual character of the local camp according to the current view data of the virtual character of the local camp.
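The update described above amounts to keeping the camp's historical view data as the running union of every frame's current view data. A minimal sketch, with hypothetical names:

```python
def update_history(historical, current):
    """Merge the cells currently visible to the local camp's virtual
    characters into the camp's set of explored cells."""
    historical |= set(current)
    return historical
```

Each frame, the cells visible to any of the camp's characters are added to the explored set, which the terrain fog and double-layer fog modes then read from.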

Optionally, the processing module 304 is further configured to, when the target map is generated, perform filtering processing on the target map according to a preset filtering rule, where the preset filtering rule includes that if the concentration of the fog in the target view is smaller than a first threshold, the concentration of the fog in the target map is reduced to a first concentration value, and if the concentration of the fog in the target view is larger than a second threshold, the concentration of the fog in the target map is increased to a second concentration value.
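The preset filtering rule can be sketched as a snap of near-extreme fog densities before drawing the minimap; the threshold and target values below are assumed for illustration only (the patent does not specify them).

```python
def filter_minimap_density(density, low=0.2, high=0.8,
                           low_value=0.0, high_value=1.0):
    """Snap near-transparent fog fully clear and near-opaque fog
    fully opaque; leave mid-range densities unchanged."""
    if density < low:      # below the first threshold
        return low_value   # reduce to the first density value
    if density > high:     # above the second threshold
        return high_value  # raise to the second density value
    return density
```

This keeps the minimap legible: faint residual fog does not dim explored areas, and almost-opaque fog does not leak map details.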

Optionally, the processing module 304 is further configured to, when the game work configured with the target view mode is run and matched view change triggering information is acquired, trigger a dynamic change of the view mode of the game work, so as to generate and display a target view after the dynamic change.

Optionally, the processing module 304 is further configured to, when the game work configured with the target view mode is run and a view mode switching instruction input by the user is acquired, switch the view mode of the game work according to the view mode switching instruction.

All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.

In the device 300 for processing a view mode in a game provided by the embodiment of the present application, the display module 301 triggers, in response to a view mode configuration trigger request, a view mode configuration interface corresponding to the game composition to be displayed on the graphical user interface, where the view mode configuration interface includes a plurality of map view options; then the configuration module 302 configures the view mode of the game work in response to configuration editing information input on the game setting interface, so as to acquire a configured target view mode; the sending module 303 sends the game work configured with the target view mode to a server, so as to publish the game work configured with the target view mode through the server; and when the game work configured with the target view mode is run, the processing module 304 acquires view data of the virtual characters in the game work, and generates and displays a target view according to the view data and the target view mode. The embodiments of the present application can flexibly and freely configure multiple view modes during game creation, and can dynamically change the related configuration of the view mode while the game is running.

Correspondingly, an embodiment of the present application further provides a terminal device, where the terminal device may be a smart phone, a tablet computer, a notebook computer, a touch screen device, a game console, a Personal Computer (PC), a Personal Digital Assistant (PDA), or other terminal device. As shown in fig. 14, fig. 14 is a schematic structural diagram of a terminal device provided in an embodiment of the present application. The terminal device 400 includes a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, and a computer program stored on the memory 402 and executable on the processor. The processor 401 is electrically connected to the memory 402. Those skilled in the art will appreciate that the terminal device configuration shown in the figure is not intended to limit the terminal device, which may include more or fewer components than those shown, or combine some components, or use a different arrangement of components.

The processor 401 is a control center of the terminal device 400, connects various parts of the entire terminal device 400 by using various interfaces and lines, and performs various functions of the terminal device 400 and processes data by running or loading software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the terminal device 400.

In this embodiment, the processor 401 in the terminal device 400 loads instructions corresponding to processes of one or more application programs into the memory 402 according to the following steps, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions:

responding to a visual field mode configuration triggering request, triggering a visual field mode configuration interface corresponding to the game composition to be displayed on the graphical user interface, wherein the visual field mode configuration interface comprises a plurality of map visual field options; responding to configuration editing information input on the game setting interface, and configuring the visual field mode of the game works to acquire a configured target visual field mode; transmitting the game work configured with the target view mode to a server to distribute the game work configured with the target view mode through the server; and when the game work configured with the target view mode is operated, obtaining view data of virtual characters in the game work, and generating and displaying a target view according to the view data and the target view mode.

The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.

Optionally, as shown in fig. 14, the terminal device 400 further includes: touch-sensitive display screen 403, radio frequency circuit 404, audio circuit 405, input unit 406 and power 407. The processor 401 is electrically connected to the touch display screen 403, the radio frequency circuit 404, the audio circuit 405, the input unit 406, and the power source 407. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 14 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.

The touch display screen 403 may be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display screen 403 may include a display panel and a touch panel. The display panel may be used, among other things, to display information input by or provided to the user and various graphical user interfaces of the terminal device, which may be made up of graphics, text, icons, video, and any combination thereof. Alternatively, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The touch panel may be used to collect touch operations of a user on or near the touch panel (for example, operations of the user on or near the touch panel using any suitable object or accessory such as a finger, a stylus pen, and the like), and generate corresponding operation instructions, and the operation instructions execute corresponding programs. Alternatively, the touch panel may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 401, and can receive and execute commands sent by the processor 401. The touch panel may overlay the display panel, and when the touch panel detects a touch operation thereon or nearby, the touch panel may transmit the touch operation to the processor 401 to determine the type of the touch event, and then the processor 401 may provide a corresponding visual output on the display panel according to the type of the touch event. 
In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 403 to realize the input and output functions. However, in some embodiments, the touch panel and the display panel can be implemented as two separate components to perform the input and output functions respectively. That is, the touch display screen 403 may also be used as a part of the input unit 406 to implement an input function.

In this embodiment of the application, a game application is executed by the processor 401 to generate a graphical user interface on the touch display screen 403, where the graphical user interface may display an editor and a corresponding setting interface in the editor, and is used for a user to select a view mode, set relevant attributes of the view mode, and the like; and displaying the relevant game picture when the game works are operated. The touch display screen 403 is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface.

The radio frequency circuit 404 may be configured to transmit and receive radio frequency signals so as to establish wireless communication with a network device or other terminal devices, and to exchange signals with the network device or the other terminal devices.

The audio circuit 405 may be used to provide an audio interface between the user and the terminal device through a speaker and a microphone. On one hand, the audio circuit 405 may transmit an electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 405 and converted into audio data; after being processed by the processor 401, the audio data is transmitted to another terminal device through the radio frequency circuit 404, or output to the memory 402 for further processing. The audio circuit 405 may also include an earphone jack to provide communication between peripheral earphones and the terminal device.

The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.

The power supply 407 is used to supply power to the various components of the terminal device 400. Optionally, the power source 407 may be logically connected to the processor 401 through a power management system, so as to implement functions of managing charging, discharging, power consumption management, and the like through the power management system. The power supply 407 may also include one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, or any other component.

Although not shown in fig. 14, the terminal device 400 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, and the like, which are not described in detail herein.

In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

As can be seen from the above, the terminal device provided in this embodiment provides a graphical user interface through an application for game creation deployed on the terminal device, and first, in response to a view mode configuration trigger request, triggers a view mode configuration interface corresponding to the game composition to be displayed on the graphical user interface, where the view mode configuration interface includes a plurality of map view options; then responding to configuration editing information input on the game setting interface, and configuring the visual field mode of the game works to acquire a configured target visual field mode; sending the game work configured with the target view mode to a server so as to release the game work configured with the target view mode through the server; and when the game work configured with the target view mode is operated, obtaining view data of virtual characters in the game work, and generating and displaying a target view according to the view data and the target view mode. The embodiment of the application can flexibly and freely configure various visual field modes in the game creation process, and can dynamically change the relevant configuration of the visual field modes in the game running process.

It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.

To this end, embodiments of the present application provide a computer-readable storage medium, in which a plurality of computer programs are stored, and the computer programs can be loaded by a processor to execute the steps in any method for processing a view mode in a game provided by the embodiments of the present application. For example, the computer program may perform the following steps:

responding to a visual field mode configuration triggering request, triggering a visual field mode configuration interface corresponding to the game composition to be displayed on the graphical user interface, wherein the visual field mode configuration interface comprises a plurality of map visual field options; responding to configuration editing information input on the game setting interface, and configuring the visual field mode of the game works to acquire a configured target visual field mode; transmitting the game work configured with the target view mode to a server to distribute the game work configured with the target view mode through the server; and when the game work configured with the target view mode is operated, obtaining view data of virtual characters in the game work, and generating and displaying a target view according to the view data and the target view mode.

The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.

Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.

Since the computer program stored in the storage medium can execute the steps in any of the methods for processing a view mode in a game provided in the embodiments of the present application, beneficial effects that can be achieved by any of the methods for processing a view mode in a game provided in the embodiments of the present application can be achieved, for details, see the foregoing embodiments, and are not described herein again.

The foregoing describes in detail a method, a device, a storage medium, and a terminal device for processing a view mode in a game according to the embodiments of the present application. Specific examples are applied herein to explain the principles and implementations of the present application, and the description of the above embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.
