Game map generation method, game testing method and related device

Document No.: 1161888 · Publication date: 2020-09-18

Note: This technique, "Game map generation method, game testing method and related device", was created by Huang Chao on 2020-06-10. Abstract: The present application discloses a game map generation method, a game testing method, and a related device implemented with artificial-intelligence technology. The method includes: obtaining a game sample image, and first action information corresponding to it, from a recorded game sample; generating a first mask image from the first local radar map included in the game sample image; extracting a first local map from the first local radar map according to the first mask image; determining a first map area in the spliced game map according to the first action information; matching the first local map with the first map area and updating the spliced game map according to the matching result; and, once the spliced game map has been updated T times, generating a global game map from the spliced game map after the T-th update. The method can provide a complete game map for a large number of games, making it easy to determine the exact position of an AI character in the complete map and effectively reducing the limitations of game testing.

1. A method of game map generation, comprising:

obtaining a game sample image and first action information corresponding to the game sample image from a recorded game sample, wherein the recorded game sample comprises T frames of game images generated while a game character traverses a game scene, T is an integer greater than 1, and the game sample image comprises a first local radar map;

generating a first mask image according to the first local radar map included in the game sample image;

acquiring a first local map from the first local radar map according to the first mask image;

determining a first map area from the spliced game map according to the first action information;

matching the first local map with the first map area, and updating the spliced game map according to a matching result;

and if the number of updates of the spliced game map reaches T, generating a global game map according to the spliced game map after the T-th update.

2. The method of claim 1, wherein generating a first mask image according to the first local radar map included in the game sample image comprises:

acquiring M binary images according to the first local radar map included in the game sample image, wherein M is an integer greater than or equal to 1;

acquiring binary images to be processed according to the M binary images;

performing an inversion operation on each pixel value in the to-be-processed binary image to obtain a target binary image;

and acquiring an intersection region corresponding to the target binary image from a preset binary image, and determining the intersection region as the first mask image.
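For illustration only (this sketch is not part of the claims), the four steps of claim 2 can be approximated with NumPy on 0/255 binary images. The helper name and the union over the M binary images are assumptions, since the claim does not specify how the M images are combined:

```python
import numpy as np

def build_first_mask(binary_images, preset_mask):
    """Hypothetical sketch of claim 2 (names and the union step assumed)."""
    # Combine the M binary images into one to-be-processed image (union).
    to_process = np.zeros_like(preset_mask)
    for img in binary_images:
        to_process = np.maximum(to_process, img)
    # Invert every pixel value: interfering icons become 0, the rest 255.
    target = 255 - to_process
    # Intersect with the preset binary image (e.g. the radar-widget disc).
    return np.minimum(target, preset_mask)
```

Here the preset binary image plays the role of the fixed radar-map region, so the resulting mask keeps only the radar area minus any detected interfering icons.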

3. The method of claim 2, wherein said obtaining M binary images from the first local radar map included in the game sample image comprises:

determining an object to be extracted according to the first local radar map included in the game sample image, wherein the object to be extracted corresponds to a red R channel threshold, a green G channel threshold and a blue B channel threshold;

acquiring an R channel image, a G channel image and a B channel image corresponding to the first local radar map;

acquiring a binary image corresponding to the R channel according to the R channel image and an R channel threshold corresponding to the object to be extracted;

acquiring a binary image corresponding to the G channel according to the G channel image and a G channel threshold corresponding to the object to be extracted;

acquiring a binary image corresponding to the B channel according to the B channel image and a B channel threshold corresponding to the object to be extracted;

and generating one binary image in the M binary images according to the binary image corresponding to the R channel, the binary image corresponding to the G channel and the binary image corresponding to the B channel.
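As a rough sketch of claim 3 (not part of the claims): each RGB channel is thresholded against the channel thresholds of the object to be extracted, and the three results are combined into one binary image. The tolerance band and the AND-combination are assumptions; the claim only states that each channel is thresholded and the three results yield one binary image:

```python
import numpy as np

def binarize_object(rgb, r_thr, g_thr, b_thr, tol=30):
    """Hypothetical per-channel thresholding (tolerance band assumed)."""
    r_bin = np.abs(rgb[..., 0].astype(int) - r_thr) <= tol
    g_bin = np.abs(rgb[..., 1].astype(int) - g_thr) <= tol
    b_bin = np.abs(rgb[..., 2].astype(int) - b_thr) <= tol
    # A pixel belongs to the object only if all three channels match.
    return np.where(r_bin & g_bin & b_bin, 255, 0).astype(np.uint8)
```

Running this once per object to be extracted (character icon, teammate icons, view sector, and so on) yields the M binary images of claim 2.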

4. The method of claim 1, wherein determining a first map region from a stitched game map based on the first action information comprises:

determining the starting position of the previous splicing from the spliced game map;

determining a first edge position and a second edge position of the first map area according to the starting position of the previous splicing and the first action information;

and generating the first map area according to the first edge position and the second edge position, wherein the first map area comprises the previously spliced local map.
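For intuition, claim 4 can be sketched as follows (illustrative only; the margin padding and the corner-style edge positions are assumptions — the claim only names two edge positions derived from the previous splicing start position and the action information):

```python
def first_map_region(prev_y, prev_x, dy, dx, local_h, local_w, margin=8):
    """Shift the previous splicing start by the recorded movement and pad
    by a margin to obtain the two edge corners of the search region."""
    top = prev_y + dy - margin            # first edge position (corner)
    left = prev_x + dx - margin
    bottom = top + local_h + 2 * margin   # second edge position (corner)
    right = left + local_w + 2 * margin
    return top, left, bottom, right
```

Constraining the search to this small region is what makes the per-frame matching of claim 7 cheap compared with searching the whole spliced map.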

5. The method of claim 1, wherein acquiring a first local map from the first local radar map according to the first mask image comprises:

overlaying the first mask image on the first local radar map, wherein the first mask image comprises an area related to a game map;

extracting the first local map from the first local radar map according to the area related to the game map included in the first mask image.
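The masking step of claim 5 amounts to keeping only the radar pixels inside the map-related area. A minimal illustration (assuming a 0/255 single-channel mask over an H×W×3 radar image; not part of the claims):

```python
import numpy as np

def extract_local_map(radar, mask):
    """Keep radar pixels where the mask is 255; zero out everything else."""
    return np.where(mask[..., None] == 255, radar, 0).astype(radar.dtype)
```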

6. The method of claim 1, wherein acquiring a first local map from the first local radar map according to the first mask image comprises:

performing an erosion operation on the first mask image to obtain a target mask image, wherein the first mask image comprises a first area related to a game map, the target mask image comprises a second area related to the game map, and the second area is smaller than the first area;

overlaying the target mask image on the first local radar map;

extracting the first local map from the first local radar map according to the second area included in the target mask image.
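The erosion in claim 6 shrinks the map-related area of the mask so that, for example, the decorative border of the radar widget is excluded. A minimal pure-NumPy 3×3 binary erosion (a stand-in for something like OpenCV's `cv2.erode`; illustrative only) might look like:

```python
import numpy as np

def erode(mask, iterations=1):
    """3x3 binary erosion: a pixel survives only if its whole 3x3
    neighbourhood is set. Input/output are 0/255 uint8 images."""
    out = mask > 0
    for _ in range(iterations):
        padded = np.pad(out, 1, mode="constant", constant_values=False)
        survived = np.ones_like(out)
        for dy in range(3):
            for dx in range(3):
                survived &= padded[dy:dy + out.shape[0],
                                   dx:dx + out.shape[1]]
        out = survived
    return np.where(out, 255, 0).astype(np.uint8)
```

Each iteration peels one pixel off the boundary of the map-related area, which is exactly why the second area of claim 6 is smaller than the first.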

7. The method of claim 5 or 6, wherein matching the first local map with the first map area and updating the spliced game map according to a matching result comprises:

taking the first local map as a sliding window, and extracting K regions to be spliced from the first map region, wherein K is an integer greater than 1;

determining matching similarity corresponding to each to-be-spliced area in the K to-be-spliced areas to obtain K matching similarities;

determining the region to be spliced corresponding to the maximum value in the K matching similarities as a target splicing region;

and overlaying the first local map onto the target splicing region so as to update the spliced game map.
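Claim 7's sliding-window match can be sketched as a brute-force similarity search. The negated sum of absolute differences is used as the matching similarity here purely for simplicity; a real implementation would more likely use normalized cross-correlation, e.g. OpenCV's `cv2.matchTemplate`. This sketch is illustrative and not part of the claims:

```python
import numpy as np

def best_splice_position(region, local):
    """Slide `local` over `region`; return the offset of the window with
    the highest matching similarity (negated SAD)."""
    h, w = local.shape
    best_sim, best_pos = -np.inf, (0, 0)
    for y in range(region.shape[0] - h + 1):
        for x in range(region.shape[1] - w + 1):
            window = region[y:y + h, x:x + w].astype(int)
            sim = -np.abs(window - local.astype(int)).sum()
            if sim > best_sim:
                best_sim, best_pos = sim, (y, x)
    return best_pos

def update_spliced_map(spliced, local):
    """Overlay `local` onto the best-matching target splicing region."""
    y, x = best_splice_position(spliced, local)
    spliced[y:y + local.shape[0], x:x + local.shape[1]] = local
    return spliced
```

Each window position corresponds to one of the K regions to be spliced; the argmax over the K similarities picks the target splicing region.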

8. The method of claim 1, wherein after the global game map is generated according to the spliced game map following the T-th update, the method further comprises:

acquiring a second local radar map corresponding to the game image to be tested;

generating a second mask image according to the second local radar map;

acquiring a second local map from the second local radar map according to the second mask image;

and performing similarity matching between the second local map and the global game map, and acquiring position information of the game character in the global game map according to the matching similarity.

9. The method of claim 8, wherein performing similarity matching between the second local map and the global game map and obtaining the position information of the game character in the global game map according to the matching similarity comprises:

taking the second local map as a sliding window, and extracting Q areas to be matched from the global game map, wherein Q is an integer greater than 1;

determining matching similarity corresponding to each to-be-matched area in the Q to-be-matched areas to obtain Q matching similarities;

determining the region to be matched corresponding to the maximum value among the Q matching similarities as a target matching region, wherein the target matching region corresponds to a first abscissa and a first ordinate in the global game map;

determining a second abscissa according to the first abscissa corresponding to the target matching area and the width of the second local map, and determining a second ordinate according to the first ordinate corresponding to the target matching area and the height of the second local map;

and determining the position information of the game character in the global game map according to the second abscissa and the second ordinate.

10. The method of claim 9, wherein determining the position information of the game character in the global game map according to the second abscissa and the second ordinate comprises:

performing normalization processing on the second abscissa according to the width of the global game map to obtain a third abscissa;

performing normalization processing on the second ordinate according to the height of the global game map to obtain a third ordinate;

and generating the position information of the game character in the global game map according to the third abscissa and the third ordinate.
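Claims 9 and 10 together turn the matched top-left corner into a normalized character position. A small sketch (illustrative only; the half-width/half-height offset is an assumption — the claims only say that the width and height of the second local map are used to derive the second coordinates):

```python
def character_position(x1, y1, local_w, local_h, global_w, global_h):
    """Hypothetical coordinate pipeline of claims 9-10."""
    x2 = x1 + local_w / 2.0      # second abscissa (claim 9)
    y2 = y1 + local_h / 2.0      # second ordinate (claim 9)
    x3 = x2 / global_w           # third abscissa, normalized (claim 10)
    y3 = y2 / global_h           # third ordinate, normalized (claim 10)
    return x3, y3
```

Normalizing by the global map's width and height keeps the reported position in [0, 1], independent of the map's pixel resolution.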

11. A method of game testing, comprising:

obtaining a local radar map corresponding to a game image to be tested;

generating a mask image according to the local radar map;

acquiring a local map from the local radar map according to the mask image;

performing similarity matching between the local map and a global game map, and acquiring position information of a game character in the global game map according to the matching similarity, wherein the global game map is generated by the method of any one of claims 1 to 10;

and generating a game test result according to the position information.

12. A game map generation apparatus, comprising:

an acquisition module, configured to obtain a game sample image and first action information corresponding to the game sample image from a recorded game sample, wherein the recorded game sample comprises T frames of game images generated while a game character traverses a game scene, T is an integer greater than 1, and the game sample image comprises a first local radar map;

a generating module, configured to generate a first mask image according to the first local radar map included in the game sample image;

the acquisition module is further configured to acquire a first local map from the first local radar map according to the first mask image;

the determining module is used for determining a first map area from the spliced game map according to the first action information;

the processing module is used for matching the first local map with the first map area and updating the spliced game map according to a matching result;

and the generating module is further configured to generate a global game map according to the spliced game map after the T-th update if the number of updates of the spliced game map reaches T.

13. A game testing device, comprising:

the acquisition module is used for acquiring a local radar map corresponding to a game image to be tested;

the generating module is used for generating a mask image according to the local radar map;

the acquisition module is further used for acquiring a local map from the local radar map according to the mask image;

a processing module, configured to perform similarity matching on the local map and a global game map, and obtain location information of the game character in the global game map according to matching similarity, where the global game map is generated by using the method according to any one of claims 1 to 10;

the generating module is further used for generating a game test result according to the position information.

14. A computer device, comprising: a memory, a transceiver, a processor, and a bus system;

wherein the memory is used for storing programs;

the processor is configured to execute the program in the memory, so as to perform the method of any one of claims 1 to 10, or the method of claim 11, according to instructions in the program code;

the bus system is used for connecting the memory and the processor so as to enable the memory and the processor to communicate.

15. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 10, or perform the method of claim 11.

Technical Field

The present application relates to the field of artificial intelligence, and in particular, to a method for generating a game map, a method for testing a game, and a related device.

Background

In recent years, artificial intelligence (AI) technology has been widely used in game testing: an AI character that simulates a real player can be designed to improve testing efficiency. During game testing, it is usually necessary to obtain the position of the AI character in the complete map; based on this position, a movement path can be set for the AI character, and the route the AI character has taken can be recorded, so that more game scenes can be explored.

At present, there is a game testing method based on a manually set route: first, the key points of the route are marked in the complete map; then, the current position of the AI character is identified; finally, the game scene is explored by controlling the AI character, based on its current position, to move to the key points of the route in sequence.

This method presupposes that a complete game map is available. However, many games on the market currently lack one; for example, some gun-battle games provide only a local radar map. As a result, the exact position of the AI character in the complete map cannot be obtained during testing, which severely limits game testing.

Disclosure of Invention

The embodiments of the present application provide a game map generation method, a game testing method, and a related device, which can provide a complete game map for a large number of games, making it convenient to determine the exact position of an AI character in the complete map and effectively reducing the limitations of game testing.

In view of the above, an aspect of the present application provides a method for generating a game map, including:

obtaining a game sample image and first action information corresponding to the game sample image from a recorded game sample, wherein the recorded game sample comprises T frames of game images generated while a game character traverses a game scene, T is an integer greater than 1, and the game sample image comprises a first local radar map;

generating a first mask image according to a first local radar map included in the game sample image;

acquiring a first local map from the first local radar map according to the first mask image;

determining a first map area from the spliced game map according to the first action information;

matching the first local map with the first map area, and updating the spliced game map according to the matching result;

and if the number of updates of the spliced game map reaches T, generating the global game map according to the spliced game map after the T-th update.

Another aspect of the present application provides a method of game testing, including:

obtaining a local radar map corresponding to a game image to be tested;

generating a mask image according to the local radar map;

acquiring a local map from the local radar map according to the mask image;

performing similarity matching between the local map and the global game map, and acquiring position information of the game character in the global game map according to the matching similarity, wherein the global game map is generated by the method described in the above aspect;

and generating a game test result according to the position information.

Another aspect of the present application provides a game map generation apparatus, including:

an acquisition module, configured to obtain a game sample image and first action information corresponding to the game sample image from a recorded game sample, wherein the recorded game sample comprises T frames of game images generated while a game character traverses a game scene, T is an integer greater than 1, and the game sample image comprises a first local radar map;

the generating module is used for generating a first mask image according to a first local radar map included in the game sample image;

the acquisition module is further used for acquiring a first local map from the first local radar map according to the first mask image;

the determining module is used for determining a first map area from the spliced game map according to the first action information;

the processing module is configured to match the first local map with the first map area and update the spliced game map according to a matching result;

and the generating module is further configured to generate a global game map according to the spliced game map after the T-th update if the number of updates of the spliced game map reaches T.

In one possible design, in one implementation of another aspect of an embodiment of the present application,

the generating module is specifically used for acquiring M binary images according to a first local radar map included in the game sample image, wherein M is an integer greater than or equal to 1;

acquiring binary images to be processed according to the M binary images;

carrying out negation operation on each pixel value in the binary image to be processed to obtain a target binary image;

and acquiring an intersection region corresponding to the target binary image from the preset binary image, and determining the intersection region as a first mask image.

In one possible design, in another implementation of another aspect of an embodiment of the present application,

the acquisition module is specifically used for determining an object to be extracted according to a first local radar map included in the game sample image, wherein the object to be extracted corresponds to a red R channel threshold value, a green G channel threshold value and a blue B channel threshold value;

acquiring an R channel image, a G channel image and a B channel image corresponding to the first local radar map;

acquiring a binary image corresponding to an R channel according to the R channel image and an R channel threshold corresponding to an object to be extracted;

acquiring a binary image corresponding to a G channel according to the G channel image and a G channel threshold corresponding to an object to be extracted;

acquiring a binary image corresponding to a channel B according to the channel B image and a channel B threshold corresponding to an object to be extracted;

and generating one binary image in the M binary images according to the binary image corresponding to the R channel, the binary image corresponding to the G channel and the binary image corresponding to the B channel.

In one possible design, in another implementation of another aspect of an embodiment of the present application,

the determining module is specifically used for determining the starting position of the previous splicing from the spliced game map;

determining a first edge position and a second edge position of the first map area according to the initial position and the first action information of the previous splicing;

and generating a first map area according to the first edge position and the second edge position, wherein the first map area comprises a local map spliced at the previous time.

In one possible design, in another implementation of another aspect of an embodiment of the present application,

the acquisition module is specifically used for covering a first mask image on a first local radar map, wherein the first mask image comprises an area related to a game map;

the first local map is extracted from the first local radar map based on the area related to the game map included in the first mask image.

In one possible design, in another implementation of another aspect of an embodiment of the present application,

the acquisition module is specifically configured to perform an erosion operation on the first mask image to obtain a target mask image, wherein the first mask image comprises a first area related to the game map, the target mask image comprises a second area related to the game map, and the second area is smaller than the first area;

covering the target mask image on the first local radar map;

and extracting the first local map from the first local radar map according to the second area included in the target mask image.

In one possible design, in another implementation of another aspect of an embodiment of the present application,

the processing module is specifically used for taking the first local map as a sliding window and extracting K regions to be spliced from the first map region, wherein K is an integer greater than 1;

determining matching similarity corresponding to each to-be-spliced area in the K to-be-spliced areas to obtain K matching similarities;

determining the region to be spliced corresponding to the maximum value in the K matching similarities as a target splicing region;

and overlaying the first local map onto the target splicing region so as to update the spliced game map.

In one possible design, in another implementation of another aspect of an embodiment of the present application,

the acquisition module is further configured to acquire a second local radar map corresponding to a game image to be tested after the global game map is generated according to the spliced game map following the T-th update;

the generating module is further used for generating a second mask image according to the second local radar map;

the acquisition module is further used for acquiring a second local map from the second local radar map according to the second mask image;

and the processing module is further configured to perform similarity matching between the second local map and the global game map, and acquire position information of the game character in the global game map according to the matching similarity.

In one possible design, in another implementation of another aspect of an embodiment of the present application,

the processing module is specifically used for taking the second local map as a sliding window and extracting Q areas to be matched from the global game map, wherein Q is an integer greater than 1;

determining matching similarity corresponding to each to-be-matched area in the Q to-be-matched areas to obtain Q matching similarities;

determining the region to be matched corresponding to the maximum value among the Q matching similarities as a target matching region, wherein the target matching region corresponds to a first abscissa and a first ordinate in the global game map;

determining a second abscissa according to the first abscissa corresponding to the target matching area and the width of the second local map, and determining a second ordinate according to the first ordinate corresponding to the target matching area and the height of the second local map;

and determining the position information of the game character in the global game map according to the second abscissa and the second ordinate.

In one possible design, in another implementation of another aspect of an embodiment of the present application,

the determining module is specifically used for carrying out normalization processing on the second abscissa according to the width of the global game map to obtain a third abscissa;

performing normalization processing on the second ordinate according to the height of the global game map to obtain a third ordinate;

and generating the position information of the game character in the global game map according to the third abscissa and the third ordinate.

Another aspect of the present application provides a game testing apparatus, comprising:

the acquisition module is used for acquiring a local radar map corresponding to a game image to be tested;

the generating module is used for generating a mask image according to the local radar map;

the acquisition module is also used for acquiring a local map from the local radar map according to the mask image;

the processing module is configured to perform similarity matching between the local map and the global game map and acquire position information of the game character in the global game map according to the matching similarity, wherein the global game map is generated by the method described in the above aspect;

and the generating module is also used for generating a game test result according to the position information.

Another aspect of the present application provides a computer-readable storage medium having stored therein instructions, which, when executed on a computer, cause the computer to perform the method of the above-described aspects.

According to the technical scheme, the embodiment of the application has the following advantages:

in the embodiments of the present application, a game map generation method is provided. First, a game sample image, and first action information corresponding to it, are obtained from a recorded game sample, where the recorded game sample consists of multiple frames of game images generated while a game character traverses a game scene, and the game sample image includes a first local radar map. A first mask image is then generated from the first local radar map, a first local map is extracted from the first local radar map according to the first mask image, and a first map area is determined in the spliced game map according to the first action information. Finally, the first local map is matched against the first map area and the spliced game map is updated according to the matching result, until a global game map is obtained; the global game map is used to determine the position information of the game character. In this way, the local radar map of each frame is extracted from the game sample recorded after the game scene has been traversed, noise interference is removed from the local radar maps, the multi-frame local maps are spliced by template matching, and a global game map is finally generated.

Drawings

FIG. 1 is a schematic view of a scene based on a gun battle game in an embodiment of the present application;

FIG. 2 is a schematic diagram of an embodiment of a local radar map in an embodiment of the present application;

FIG. 3 is a schematic flow chart illustrating a game map generation method according to an embodiment of the present application;

FIG. 4 is a schematic diagram of an embodiment of a game map generation method in the embodiment of the present application;

FIG. 5 is a schematic diagram of an embodiment of a game sample image in an embodiment of the present application;

FIG. 6 is a diagram of an embodiment of first action information in an embodiment of the present application;

FIG. 7 is a schematic view of an embodiment of a first mask image in an embodiment of the present application;

FIG. 8 is a schematic view of an embodiment of a first partial map of an embodiment of the present application;

FIG. 9 is a schematic diagram of an embodiment of a first map region in an embodiment of the present application;

FIG. 10 is a schematic diagram of an embodiment of a global game map in an embodiment of the present application;

FIG. 11 is a schematic diagram of an embodiment of a binary image in an embodiment of the present application;

FIG. 12 is a schematic diagram of an embodiment of a binary image to be processed in an embodiment of the present application;

FIG. 13 is a schematic diagram of an embodiment of a target binary image in an embodiment of the present application;

FIG. 14 is a schematic diagram of one embodiment of determining a first mask image in an embodiment of the present application;

FIG. 15 is a schematic view of an embodiment of a first edge position and a second edge position in an embodiment of the present application;

FIG. 16 is a schematic view of another embodiment of the first edge position and the second edge position in the embodiment of the present application;

FIG. 17 is a diagram of an embodiment of extracting a first local map in the embodiment of the present application;

FIG. 18 is a schematic diagram of an embodiment of extracting a second region in the embodiment of the present application;

FIG. 19 is a schematic diagram of another embodiment of extracting a first partial map in the embodiment of the present application;

FIG. 20 is a schematic diagram of an embodiment of determining location information in an embodiment of the present application;

FIG. 21 is a schematic diagram of an embodiment of a method for game testing in the embodiment of the present application;

FIG. 22 is a schematic diagram of an embodiment of a game map generation apparatus according to the embodiment of the present application;

FIG. 23 is a schematic view of an embodiment of a game testing apparatus according to the embodiment of the present application;

FIG. 24 is a schematic structural diagram of a server in an embodiment of the present application;

FIG. 25 is a schematic structural diagram of a terminal device in the embodiment of the present application.

Detailed Description

The embodiments of the present application provide a game map generation method, a game testing method, and a related device, which can provide a complete game map for a large number of games, making it convenient to determine the exact position of an AI character in the complete map and effectively reducing the limitations of game testing.

The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "corresponding" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.

It should be understood that the embodiments of the present application may be applied to various game scenes that require game map generation or game testing. The games in this scheme may include, but are not limited to, gun-battle games, running games, multiplayer online battle arena (MOBA) games, racing games (RCG), and sports games (SPG). With the game map generation method provided by the present application, a complete game map can be generated from the local radar map in various game scenes, and the current position of the game character in that map can then be detected. In the game testing method, the position of the game character is fed to the artificial intelligence (AI) character as input during testing; based on this position, the AI character can avoid repeating paths and explore more scenes. Exploring more game scenes during testing increases the probability of triggering game bugs, for example when testing whether the game crashes or freezes. On this basis, the AI character can move according to its position in the complete game map and a preset path, so that the visited positions in the map are recorded, more game scenes are explored, more useful data is provided for subsequent anomaly detection, and the feasibility of game testing is improved.

Specifically, the application to a gunfight game is taken as an example. For ease of understanding, please refer to fig. 1, which is a scene schematic diagram based on the gunfight game in the embodiment of the present application. As shown in the figure, information such as the life value, kill count, game equipment, teammate information, and local radar map of the controlled game character can be seen in the gunfight game scene, where a1 indicates the local radar map, from which the player can obtain the current game character's position and corresponding view angle. Further, referring to fig. 2, fig. 2 is a schematic diagram of an embodiment of a local radar map in an embodiment of the present application. As shown in the drawing, B1 indicates the game character operated by the player in the gunfight game, and B2 and B3 both indicate teammates of the player; it is understood that a Non-Player Character (NPC) and the like may also be displayed on the local radar map. The local radar map displays the positional relationship between the game character and its teammates within a certain small range, and the white sector area in front of the game character represents the character's view area. In a gunfight game, the game map is traversed by manually controlling the game character while the corresponding game sample is recorded. For each frame of game image in the recorded game sample, the interference caused by the game character, view angle, and teammate characters in the local radar map is removed, and the processed local radar maps of multiple frames are then combined into a complete gunfight game map.
During game testing, the local map can be matched against the complete gunfight game map by template matching to determine the position of the game character in the complete map. After the position information of the game character is obtained, it is used as input for the AI character, so that the AI character runs or explores the map along a specific path, effectively reducing the limitations of game testing.

The method is applied to a game map generation system; the application to gunfight games is taken as an example for explanation. The game map generation system includes a server and a terminal device. It should be noted that the game map generation apparatus can be deployed in the server or in the terminal device; in this application it is described as deployed in the server. The server involved in the application may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, Content Delivery Network (CDN), big data, and artificial intelligence platforms. The terminal device may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, which is not limited herein.

Based on this, a method for generating a game map will be described below, please refer to fig. 3, where fig. 3 is a schematic flow chart of a game map generating method in an embodiment of the present application, and as shown in the figure, specifically:

in step S1, the terminal device records a game video as a recorded game sample, and then sends the recorded game sample to the server, where the recorded game sample includes a plurality of frames of game images generated after a game character traverses a game scene, and each frame of game image includes a local radar map;

in step S2, the server generates a mask image according to the local radar map included in the recorded game sample; the mask image is composed of black and white pixels, the black pixels forming the background portion and the white pixels forming the foreground portion, which corresponds to the map to be extracted.

In step S3, the server extracts a local map from the local radar map based on the mask image, where the local map corresponds to the foreground portion of the mask image.

In step S4, the server determines the starting position of the previous splice from the spliced game map and generates a map area according to that starting position and the action information of the game character, where the map area is an artificially defined region used for subsequent matching.

In step S5, the server matches the local map against the map area and updates the spliced game map according to the matching result; when the number of updates of the spliced game map equals the number of game image frames included in the recorded game sample, the global game map is generated from the updated spliced game map.

In step S6, during game testing, the server receives the game image to be tested fed back by the terminal device, performs similarity matching between the game image to be tested and the global game map, obtains the position information of the game character in the global game map according to the matching similarity, and generates the game test result corresponding to that frame based on the position information.

It should be understood that both the game map generation method and the game test method provided by the present application involve AI technology, so some basic concepts of AI are described below. AI is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, AI is a comprehensive technique of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. AI studies the design principles and implementation methods of various intelligent machines, so that machines have the capabilities of perception, reasoning, and decision making.

AI technology is a comprehensive discipline covering a broad range of fields, including both hardware-level and software-level technologies. Basic AI technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operating/interactive systems, and mechatronics. AI software technology mainly includes directions such as computer vision, speech processing, natural language processing, and machine learning/deep learning.

Computer Vision (CV) technology is a science that studies how to make machines "see"; it uses cameras and computers, in place of human eyes, to perform machine vision tasks such as identifying, tracking, and measuring targets, and further performs image processing so that the processed image is more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build AI systems that can acquire information from images or multidimensional data. Computer vision technology generally includes image processing, image recognition, image semantic understanding, image retrieval, Optical Character Recognition (OCR), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also includes common biometric technologies such as face recognition and fingerprint recognition.

The scheme provided in the embodiments of the present application involves technologies such as computer vision and machine learning. In combination with the above description, the method for generating a game map is described below, taking application to a gunfight game as an example. Please refer to fig. 4, which is a schematic diagram of an embodiment of a method for generating a game map in the embodiment of the present application. As shown in the drawing, an embodiment of the method includes:

101. obtaining game sample images and first action information corresponding to the game sample images from recorded game samples, wherein the recorded game samples are T frame game images generated after a game role traverses a game scene, T is an integer greater than 1, and the game sample images comprise a first local radar map;

in this embodiment, the game image generation device may obtain the game sample recorded by the terminal device, that is, the recorded game sample; for example, the terminal device records a gunfight game to generate the recorded game sample and then feeds it back to the game image generation device. Optionally, the game image generation device may also extract recorded game samples uploaded by players from a game database. It should be noted that a recorded game sample in the present application includes T frames of game images (T being an integer greater than 1) generated while a game character traverses a game scene, and each frame of game image includes a local radar map. Taking a recording frequency of 1 frame per second for the gunfight game as an example, and assuming the game lasts 10 minutes, the recorded game sample includes 600 game images, each including a local radar map, i.e., 600 local radar maps. It is understood that in practical applications the game sample may also be recorded at 2 frames per second or 5 frames per second, which is not limited herein.

For convenience of introduction, the description is given by taking a frame of game image (i.e., game sample image) in the recorded game sample as an example, and it can be understood that other game images in the recorded game sample can be processed in a similar manner, and thus details are not repeated herein. It should be noted that the game image generation device is disposed in the terminal device, and may also be disposed in the server, and the game image generation device is described in this application by taking the example where the game image generation device is disposed in the server, but this should not be construed as a limitation to this application.

The game image generation device acquires a game sample image and its corresponding first action information from the recorded game sample, where the first action information can be represented by an action identifier. In particular, in gunfight games it is often necessary to control the game character to move in a certain direction through a joystick region. Assuming the joystick region is divided into 8 fan-shaped areas of the same size, the corresponding action identifier is recorded when the joystick region is clicked during game recording, so each frame of game sample image corresponds to one action. For example, if the game character in a certain frame of game sample image executes no action, the first action information is represented as "0". When the game character moves straight ahead, the first action information is represented as "1"; when it moves to the front right, "2"; straight right, "3"; rear right, "4"; straight back, "5"; rear left, "6"; straight left, "7"; and front left, "8".

It should be understood that 9 types of action information are defined in the present application, namely 8 movement directions plus no action. In practical applications, other numbers and types of action information may be designed according to the game category and content; this is only an illustration and should not be construed as a limitation to the present application.

For ease of understanding, referring to fig. 5, fig. 5 is a schematic view of an example of a game sample image in an embodiment of the present application. As shown in diagram (A) of fig. 5, C1 indicates the first local radar map, and information related to the gunfight game, such as the life value, kill count, game equipment, teammate information, and the first local radar map of the game character, can be seen in the game sample image. In diagram (B) of fig. 5, C2 indicates the game character manipulated by the player in the first local radar map, and C3 and C4 both indicate teammates of the game character. The local radar map reflects the map information within a circle of a certain radius (for example, 50 pixels) centered on the game character. The first local radar map may further include the in-game positions of the game character and its teammates and a sector area in front of the game character, which indicates the character's view area.

For ease of understanding, the joystick region divided into 8 sectors of the same size is taken as an example. Please refer to fig. 6, which is a schematic diagram of an embodiment of the first action information in the embodiment of the present application. As shown in diagram (A) of fig. 6, D1 indicates the joystick; the game character can be controlled to move in a certain direction by sliding or clicking the joystick. In diagram (B) of fig. 6, D2 indicates the joystick region, and different movement actions yield different action identifiers, i.e., action information. If the game character does not move in the game sample image, the corresponding action information is "0". When the game character moves straight ahead, the action information is "1"; to the front right, "2"; straight right, "3"; to the rear right, "4"; straight back, "5"; to the rear left, "6"; straight left, "7"; and to the front left, "8".

It should be noted that, in practical applications, the division of the areas and the corresponding of the numbers may also be performed according to the requirements of specific games, the foregoing examples are only used for understanding the present solution, and the specific action information should be flexibly determined in combination with the actual requirements.
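The action-identifier scheme above can be sketched in code. The following is an illustrative sketch rather than anything prescribed by this document: it assumes the joystick displacement is given as (dx, dy) with screen y growing downward, and maps it to the nine action IDs (0 = no action, 1 = forward, then 2-8 clockwise through the remaining 45-degree sectors).

```python
import math

# Illustrative sketch (assumed convention, not from this document): map a
# joystick displacement to one of the 9 action IDs described above.
# 0 = no action, 1 = forward, IDs 2-8 proceed clockwise in 45-degree sectors.
def action_id(dx: float, dy: float) -> int:
    """dx, dy: joystick displacement; screen y grows downward."""
    if dx == 0 and dy == 0:
        return 0  # joystick not touched: no action
    # Angle measured clockwise from "straight up" (forward), in degrees.
    angle = math.degrees(math.atan2(dx, -dy)) % 360.0
    # Each sector spans 45 degrees and is centred on its direction.
    return int(((angle + 22.5) % 360.0) // 45.0) + 1
```

For example, a pure upward swipe yields action "1" and a pure rightward swipe yields "3", matching the table above.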

102. Generating a first mask image according to a first local radar map included in the game sample image;

in this embodiment, based on the first local radar map included in the game sample image, the game image generation device needs to remove noise from the first local radar map to obtain the first mask image.

Specifically, in the first local radar map shown in fig. 5 (B), the game character indicated by C2 has a white sector area in front of it, which is image noise to be removed. Image noise refers to unnecessary or redundant interference information present in the image data; the white sector area in front of the game character, the arrow corresponding to the game character, and the teammate arrows indicated by C3 and C4 are all interference information and therefore need to be removed. For ease of understanding, please refer to fig. 7, which is a schematic view of an embodiment of a first mask image in an embodiment of the present application. As shown in the drawing, the first local radar map in fig. 7 (A) includes an arrow corresponding to one game character and arrows corresponding to two teammates; after binarization and pixel inversion are performed on the first local radar map, the first mask image shown in fig. 7 (B) is obtained. The example in fig. 7 is only for understanding the present solution; the specific first mask image should be determined flexibly according to the actual first local radar map, and other mask images are generated similarly, which is not repeated herein.

103. Acquiring a first local map from the first local radar map according to the first mask image;

in this embodiment, after obtaining the first mask image, the game image generation device may acquire the first local map from the first local radar map based on the first mask image.

Specifically, based on the white area in the first mask image, the map image covered by that white area is acquired from the first local radar map, thereby generating the first local map. For ease of understanding, please refer to fig. 8, which is a schematic view of an embodiment of the first local map in the embodiment of the present application. Diagram (A) of fig. 8 shows the first local radar map and diagram (B) the first mask image, which contains the white area indicated by E1; based on that white area, the first local map shown in diagram (C) of fig. 8 can be extracted from the first local radar map. The example of fig. 8 is only for understanding the present solution; the specific first local map should be determined flexibly from the actual first local radar map and first mask image.
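The mask-based extraction in step 103 can be illustrated with a short sketch. This is an assumed implementation, not one given in this document: the radar crop is taken to be an H × W × 3 RGB array and the mask an H × W array of 0/1 values, with 1 marking the map region to keep.

```python
import numpy as np

# Illustrative sketch: keep only the radar-map pixels under the white (1)
# region of the mask; masked-out pixels become black (0), like the mask
# background. Array shapes and the 0/1 convention are assumptions.
def extract_local_map(radar: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """radar: H x W x 3 RGB crop of the local radar map;
    mask: H x W array of 0/1 values (1 = map region to keep)."""
    # Broadcast the mask over the three colour channels.
    return radar * mask[:, :, None]
```

With an OpenCV-style pipeline the same effect could be achieved with a bitwise AND, but plain broadcasting keeps the sketch dependency-free.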

104. Determining a first map area from the spliced game map according to the first action information;

in this embodiment, a 1280 × 1280 background image is preset, and the obtained first local map is pasted at a position slightly below the middle of the background image to obtain the spliced game map generated by the first splice. The moving direction of the game character is then determined from the first action information, so as to determine the first map area in the spliced game map. For ease of understanding, take the first action information as "1" (i.e., moving forward) as an example, and please refer to fig. 9, a schematic view of an embodiment of the first map area in the embodiment of the present application. Diagram (A) of fig. 9 shows the spliced game map; since the first action information indicates forward movement, the first map area is the region of the spliced game map shifted toward the front, yielding the first map area shown in diagram (B) of fig. 9. The example of fig. 9 is only for understanding the present solution; the specific first map area should be determined flexibly from the first action information and the actual spliced game map.
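The selection of the map area from the action information can be sketched as follows. This is a hypothetical illustration: the action IDs follow the 0-8 scheme described earlier, and STEP is an assumed per-frame search offset in pixels, not a value given in this document.

```python
# Hypothetical sketch: translate an action ID into a (dx, dy) offset in
# the stitched map, shifting the search window in the direction the
# character just moved. STEP is an assumed value for illustration only.
STEP = 50  # pixels per frame of movement (assumption)

# Action IDs 1-8 run clockwise starting from "forward"; 0 = no motion.
# Screen coordinates: x grows rightward, y grows downward.
OFFSETS = {
    0: (0, 0),
    1: (0, -STEP), 2: (STEP, -STEP), 3: (STEP, 0), 4: (STEP, STEP),
    5: (0, STEP), 6: (-STEP, STEP), 7: (-STEP, 0), 8: (-STEP, -STEP),
}

def map_area_origin(prev_x: int, prev_y: int, action: int) -> tuple:
    """Return the shifted origin of the map-area window for this frame,
    given the previous paste position and the recorded action ID."""
    dx, dy = OFFSETS[action]
    return prev_x + dx, prev_y + dy
```

For instance, with action "1" (forward) the window moves up by STEP pixels relative to the previous paste position.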

105. Matching the first local map with the first map area, and updating the splicing game map according to the matching result;

in this embodiment, the game image generation device matches the first local map against the first map area and obtains a matching result indicating the position in the first map area where the first local map matches best. The first local map is then pasted at that best-matching position, so that the spliced game map is updated according to the matching result.
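A minimal template-matching sketch illustrates step 105. It is an assumed implementation using a sum-of-squared-differences score over 2-D grayscale arrays, standing in for a library routine such as OpenCV's matchTemplate; this document does not prescribe a particular similarity measure.

```python
import numpy as np

# Illustrative sketch: exhaustive template matching by sum of squared
# differences (SSD). `region` is the map-area search window and
# `template` the local map, both assumed to be 2-D grayscale arrays.
def best_match(region: np.ndarray, template: np.ndarray) -> tuple:
    """Return (row, col) of the top-left corner in `region` where
    `template` fits with the smallest squared difference."""
    th, tw = template.shape
    rh, rw = region.shape
    best, best_pos = None, (0, 0)
    for r in range(rh - th + 1):
        for c in range(rw - tw + 1):
            score = np.sum((region[r:r+th, c:c+tw].astype(float)
                            - template.astype(float)) ** 2)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos
```

In practice a vectorized or normalized-correlation routine would be preferred; the nested loop is kept here only to make the matching criterion explicit.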

106. And if the updating times of the spliced game map reach T times, generating the global game map according to the spliced game map updated for the T times.

In this embodiment, after updating the spliced game map according to the matching result, the game image generation device records the number of updates. When the number of updates of the spliced game map equals the number of game image frames included in the recorded game sample (T times), the local maps of all game images in the recorded game sample have been obtained and matched with their corresponding map areas. The spliced game map after the T-th update therefore contains the entire in-game map, and the global game map can be generated from it. During game testing, the game image to be tested can then be matched against the global game map by similarity, and the position information of the game character (or AI character) in the global game map can be obtained according to the matching score.

For easy understanding, please refer to fig. 10, where fig. 10 is a schematic diagram of an embodiment of a global game map in an embodiment of the present application, and as shown in the figure, a complete game map of the game, i.e., the global game map, can be obtained by matching local maps of T game images in recorded game samples. The example of fig. 10 is only used for understanding the present solution, and the specific global game map should be flexibly determined in combination with the actual situation of the mosaic game map.

According to the method, the local radar map of each frame of game image is extracted from the recorded game sample obtained after traversing the game scene; after the noise interference is removed from the local radar maps, they are spliced by template matching, and finally the global game map is generated.

Optionally, on the basis of the embodiment corresponding to fig. 4, in an optional embodiment provided in the embodiment of the present application, the generating the first mask image according to the first local radar map included in the game sample image may include the following steps:

acquiring M binary images according to a first local radar map included in a game sample image, wherein M is an integer greater than or equal to 1;

acquiring binary images to be processed according to the M binary images;

carrying out negation operation on each pixel value in the binary image to be processed to obtain a target binary image;

and acquiring an intersection region corresponding to the target binary image from the preset binary image, and determining the intersection region as a first mask image.

In this embodiment, a method for generating the first mask image is described. The game sample image includes the first local radar map, and as can be seen from the first local radar map shown in fig. 5 (B), it includes the white sector area in front of the game character, the map behind that white sector area, the arrow corresponding to the game character, and the arrows corresponding to teammates. The white sector area in front of the game character, the arrow corresponding to the game character, and the arrows corresponding to teammates all belong to interference information in the game map, while the map behind the white sector area is valid information. The interference information in the game map may thus include three types, i.e., the white sector area, the arrow corresponding to the game character, and the arrows corresponding to teammates, and M may therefore be set to 3, where M indicates the number of interference-information types. If the first local radar map included only the white sector area in front of the game character and the arrow corresponding to the game character, M could be set to 2. It can thus be seen that the value of M corresponds to the number of categories of interference information in the first local radar map.

Specifically, for the first local radar map shown in diagram (B) of fig. 5, a binary image corresponding to the white sector area in front of the game character, a binary image of the arrow corresponding to the game character, and a binary image of the arrows corresponding to teammates can be obtained, where a region with value 1 (i.e., white pixels) in a binary image represents, respectively, the white sector area, the arrow corresponding to the game character, or the arrow corresponding to a teammate, and all other regions have value 0 (i.e., black pixels). For ease of understanding, referring to fig. 11, fig. 11 is a schematic view of an embodiment of a binary image in an embodiment of the present application: diagram (A) illustrates the binary image of the white sector area in front of the game character, diagram (B) the binary image of the arrow corresponding to the game character, and diagram (C) the binary image of the arrows corresponding to teammates. After performing an OR operation on the three binary images corresponding to diagrams (A), (B), and (C) in fig. 11, the binary image to be processed is obtained. Here the OR operation means that, for each pixel position across the M binary images, the pixel value at that position is set to "1" if at least one of the images has value "1" there, and to "0" only when every one of the M binary images has value "0" at that position.

Based on the description of fig. 11, referring to fig. 12, fig. 12 is a schematic view of an embodiment of the binary image to be processed in the embodiment of the present application. As shown in the figure, the white area in the binary image to be processed corresponds to the white sector area in front of the game character, the arrow corresponding to the game character, and the arrows corresponding to teammates. Since one teammate's arrow is located within the white sector area in front of the game character, that part is merged with the white sector area after the OR operation.

Further, in order to remove the interference factors irrelevant to the game map from the binary image to be processed, an inversion operation is performed on it, yielding the target binary image. Referring to fig. 13, fig. 13 is a schematic diagram of an embodiment of a target binary image in an embodiment of the present application: an inversion operation is performed on each pixel value of the binary image to be processed in fig. 12, obtaining the target binary image shown in fig. 13. In order to extract a region of the same size as the local radar map, a preset binary image may further be set; the intersection region corresponding to the target binary image is acquired from the preset binary image and determined as the first mask image. The preset binary image can be a circular binary image in which the pixels with value 1 (white) constitute the area of the first local radar map. Taking the intersection of the preset binary image and the target binary image then yields the first mask image, in which the area with value 1 (white) corresponds to the game map and the area with pixel value 0 (black) represents irrelevant interference factors. For ease of understanding, referring to fig. 14, fig. 14 is a schematic diagram of an embodiment of determining a first mask image in an embodiment of the present application: diagram (A) illustrates the target binary image, diagram (B) the preset binary image, and intersecting the two yields the first mask image shown in diagram (C), where the white area represents the game-map-related region.
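The OR, inversion, and intersection steps above can be sketched as follows. This is an assumed implementation: all inputs are taken to be 0/1 uint8 arrays, and the preset circular binary image is represented simply as another binary mask.

```python
import numpy as np

# Sketch of the mask pipeline described above: OR the M interference
# masks together, invert the result, then AND it with the preset
# circular mask covering the radar-map disc. 0/1 uint8 arrays assumed.
def build_mask(noise_masks, circle_mask):
    """noise_masks: list of M binary images (1 = interference pixel);
    circle_mask: preset binary image (1 = radar-map disc)."""
    merged = np.zeros_like(circle_mask)
    for m in noise_masks:            # OR: union of all interference
        merged = np.maximum(merged, m)
    inverted = 1 - merged            # invert each pixel value
    return inverted & circle_mask    # intersection: keep the disc only
```

The resulting array plays the role of the first mask image: 1 where the game map is visible, 0 over interference and outside the disc.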

It is to be understood that the foregoing examples in the figures are only for the understanding of the present solution, and the specific first mask image should be flexibly determined in combination with the actual situation.

In the embodiment of the application, a method for generating the first mask image is provided. In the above manner, binary images are extracted for the various types of interference information in the first local radar map, an inversion operation is performed on each pixel value of their union to obtain the target binary image, and finally the first mask image is determined from the intersection region of the target binary image with the preset binary image. The image information included in the first mask image therefore no longer contains interference information, which increases the accuracy of the first mask image and yields a more accurate global game map.

Optionally, on the basis of the embodiment corresponding to fig. 4, in another optional embodiment provided by the embodiment of the present application, acquiring M binary images according to the first local radar map included in the game sample image may include the following steps:

determining an object to be extracted according to the first local radar map included in the game sample image, wherein the object to be extracted corresponds to a red (R) channel threshold, a green (G) channel threshold, and a blue (B) channel threshold;

acquiring an R channel image, a G channel image and a B channel image corresponding to the first local radar map;

acquiring a binary image corresponding to an R channel according to the R channel image and an R channel threshold corresponding to an object to be extracted;

acquiring a binary image corresponding to a G channel according to the G channel image and a G channel threshold corresponding to an object to be extracted;

acquiring a binary image corresponding to a channel B according to the channel B image and a channel B threshold corresponding to an object to be extracted;

and generating one binary image in the M binary images according to the binary image corresponding to the R channel, the binary image corresponding to the G channel and the binary image corresponding to the B channel.

In this embodiment, a method for obtaining a binary image based on channel thresholds is described. The game sample image includes a first local radar map, and the first local radar map includes a white sector area in front of the game character, an arrow corresponding to the game character, and arrows corresponding to teammates; it can be understood that in practical applications, arrows corresponding to NPCs may also be included. These contents belong to unnecessary or redundant interference information, so the game image generation device determines the objects to be extracted from the first local radar map and generates binary images according to the objects to be extracted. For convenience of introduction, the generation of one binary image is described as an example; it can be understood that the M binary images are generated in a similar manner, the difference lying only in the object to be extracted, which is not limited herein.

Specifically, taking the object to be extracted as the arrow corresponding to the game character as an example, the first local radar map includes images of three channels, Red, Green and Blue (RGB); therefore, the corresponding binary images are extracted per RGB channel. Assuming that the arrow corresponding to the game character is a blue arrow, the R channel threshold, G channel threshold and B channel threshold corresponding to the blue arrow are obtained. Based on the R channel image corresponding to the first local radar map and the R channel threshold corresponding to the arrow, the binary image corresponding to the R channel is obtained: if a pixel point of the R channel image is greater than the R channel threshold, the pixel point is recorded as 1; if the pixel point is less than or equal to the R channel threshold, it is recorded as 0. Similarly, the binary images corresponding to the G channel and the B channel can also be obtained. Finally, the intersection of the binary image corresponding to the R channel, the binary image corresponding to the G channel and the binary image corresponding to the B channel is taken to obtain the binary image corresponding to the object to be extracted.

Specifically, the specific formula for determining the binary image is as follows:

M = (R > R_thresh) & (G > G_thresh) & (B > B_thresh);

wherein M represents the binary image, R represents the image corresponding to the R channel, R_thresh represents the threshold corresponding to the R channel, G represents the image corresponding to the G channel, G_thresh represents the threshold corresponding to the G channel, B represents the image corresponding to the B channel, and B_thresh represents the threshold corresponding to the B channel.
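The formula translates directly into elementwise array operations. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def threshold_rgb(rgb, r_thresh, g_thresh, b_thresh):
    """Evaluate M = (R > R_thresh) & (G > G_thresh) & (B > B_thresh)
    per pixel. `rgb` is an HxWx3 array; returns a boolean binary image
    that is True only where all three channels exceed their thresholds."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r > r_thresh) & (g > g_thresh) & (b > b_thresh)
```

Intersecting the three per-channel conditions in one expression is equivalent to computing the three per-channel binary images separately and then taking their intersection, as described above.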

In the embodiment of the application, a method for obtaining a binary image based on channel thresholds is provided. In the above manner, the binary images corresponding to the RGB channels are obtained from the RGB channel thresholds corresponding to the object to be extracted and the RGB channel images corresponding to the first local radar map; by analogy, M binary images can be generated, which improves the accuracy of the determined binary images. The first mask image is then obtained according to the M binary images, which improves the feasibility of the scheme.

Optionally, on the basis of the embodiment corresponding to fig. 4, in another optional embodiment provided by the embodiment of the present application, determining the first map area from the mosaic game map according to the first action information may include the following steps:

determining the starting position of the previous splicing from the spliced game map;

determining a first edge position and a second edge position of the first map area according to the initial position and the first action information of the previous splicing;

and generating a first map area according to the first edge position and the second edge position, wherein the first map area comprises a local map spliced at the previous time.

In this embodiment, a method for determining a first map area is described. The game image generation device may determine the start position of the previous frame's stitching from the spliced game map, that is, the start position of the previous splice, and then, from the first action information and the start position, determine the movement direction of the game character, thereby determining the first edge position and the second edge position of the first map area and generating the first map area according to the two edge positions. In the present application, it is assumed that the game character can move at most 50 pixels per game image: if the first action information indicates that the game character moves toward the front, the start position is moved 50 pixels forward; if the first action information indicates that the game character moves straight behind, the start position is moved 50 pixels backward. It can be understood that the 50 pixels here are set according to the moving distance of the game character within one frame; therefore, in practical applications, the specific number of pixels moved should be flexibly determined according to the moving distance of the game character within one frame.

Specifically, if the first action indicates that the game character moves straight ahead, straight behind, straight to the right, to the upper right, or to the lower right, it may be determined that the start position of the previous splice lies to the left of, above, or below the current position, so the first edge position of the first map area is the bottommost portion and the second edge position is the leftmost portion. If the first action indicates that the game character moves straight ahead, straight behind, straight to the left, to the upper left, or to the lower left, it may be determined that the start position of the previous splice lies to the right of, above, or below the current position, so the first edge position of the first map area is the bottommost portion and the second edge position is the rightmost portion. For easy understanding, please refer to fig. 15, which is a schematic diagram of an embodiment of the first edge position and the second edge position in the present embodiment. As shown in fig. 15, (A) illustrates the case where the first edge position is the bottommost portion of the first map area and the second edge position is the leftmost portion, and (B) illustrates the case where the first edge position is the bottommost portion and the second edge position is the rightmost portion.

If the first action indicates that the game character moves to the right, the first edge position may be the bottommost portion of the first map area and the second edge position the leftmost portion. Please refer to fig. 16, which is a schematic view of another embodiment of the first edge position and the second edge position in the embodiment of the present application. As shown in fig. 16, (A) illustrates the case where the first edge position is the bottommost portion of the first map area and the second edge position is the leftmost portion; assuming the game character moves 50 pixels to the right, the first map area illustrated in fig. 16 (B) can be obtained. After the edge of the first map region is aligned with the start position of the previous splice, the matching position of the local map of the next frame can be determined. It is to be understood that the examples of fig. 15 and fig. 16 are only for understanding the present solution, and the specific first map region should be flexibly determined in combination with the actual situation.
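A heavily simplified sketch of this step, assuming a 50-pixel maximum per-frame movement and a rectangular search region anchored at the shifted start position; the direction names and the anchoring convention are illustrative assumptions, not the patent's exact rule.

```python
def first_map_region(start_x, start_y, region_w, region_h, direction, step=50):
    """Hypothetical sketch: derive the first map area's bounding box from
    the previous splice's start position and the movement direction.
    `step` is the assumed maximum per-frame movement (50 pixels in the
    text above); only four directions are modeled for brevity."""
    dx = {"right": step, "left": -step}.get(direction, 0)
    dy = {"forward": -step, "backward": step}.get(direction, 0)
    left, top = start_x + dx, start_y + dy
    # (left, top, right, bottom) of the region to search for the next match
    return left, top, left + region_w, top + region_h
```

For example, a character moving right by 50 pixels from start position (100, 100) yields a region shifted 50 pixels to the right, consistent with the fig. 16 example.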

In the embodiment of the application, a method for determining a first map area is provided, and in the above manner, a first edge position and a second edge position are determined by determining a starting position and first action information of a previous splice, and thus a first map area is generated.

Optionally, on the basis of the embodiment corresponding to fig. 4, in another optional embodiment provided by the embodiment of the present application, the obtaining, by the game image generation apparatus, the first partial map from the first partial radar map according to the first mask image may include:

overlaying a first mask image on the first local radar map, wherein the first mask image comprises an area related to the game map;

the first partial map is extracted from the first partial radar map based on an area related to the game map included in the first mask image.

In this embodiment, a method for extracting a first local map is introduced, and since the first mask image includes a foreground region, the first mask image may be covered on the first local radar map, so that an image related to a game map is extracted by using the foreground region, and the first local map is obtained.
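The overlay described above amounts to keeping only the radar-map pixels where the mask is 1. A minimal NumPy sketch (function name illustrative):

```python
import numpy as np

def apply_mask(radar_map, mask):
    """Overlay the mask on the local radar map: pixels where mask == 1 are
    kept, all others are zeroed out. `radar_map` is HxWx3, `mask` is HxW
    with 1 marking the game-map-related (foreground) area."""
    return radar_map * mask[..., None]  # broadcast the mask over 3 channels
```

The zeroed background pixels then play no role when the extracted local map is compared against the spliced game map.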

Specifically, referring to fig. 17, fig. 17 is a schematic view of an embodiment of extracting a first local map in the embodiment of the present application, as shown in fig. 17, (a) illustrates the first local radar map, F1 is used to indicate an area behind a game character, (B) illustrates the first mask image in fig. 17, and F2 is used to indicate an area related to the game map, that is, a white area, included in the first mask image, and after the first mask image is overlaid on the first local radar map, a corresponding image area can be extracted from the first local radar map based on the first mask image, so as to obtain the first local map as shown in fig. 17 (C).

In the embodiment of the application, a method for extracting a first local map is provided, and by the method, the local map can be extracted through the mask image and the local radar map, and the local map is used as a basis for subsequent matching, so that the feasibility and operability of the scheme are improved.

Optionally, on the basis of the embodiment corresponding to fig. 4, in another optional embodiment provided by the embodiment of the present application, acquiring the first local map from the first local radar map according to the first mask image may include the following steps:

eroding the first mask image to obtain a target mask image, wherein the first mask image comprises a first area related to the game map, the target mask image comprises a second area related to the game map, and the second area is smaller than the first area;

covering the target mask image on the first local radar map;

and extracting the first partial map from the first partial radar map according to the second area included in the target mask image.

In this embodiment, a method for performing an erosion operation on the first mask image is introduced. After the first mask image is obtained, an erosion operation may be performed on it to reduce the size of the foreground region (i.e., the first region) in the first mask image, so as to obtain a target mask image, where the target mask image includes a foreground region (i.e., a second region) related to the game map; because of the erosion operation, the second region is smaller than the first region.

Specifically, the erosion operation may reduce the first area of the first mask image inward by 50 pixels (the value is determined according to the specific situation of the game), or a circle, square, rectangle or other polygon may be extracted from the first region as the second region. For easy understanding, please refer to fig. 18, which is a schematic diagram of an embodiment of extracting the second region in the embodiment of the present application. As shown in fig. 18, (A) illustrates the first mask image, where G1 indicates the first region included in the first mask image, i.e., the white region related to the game map. If the first region in the first mask image is shrunk inward by 50 pixels, the target mask image illustrated in fig. 18 (B) can be obtained, where G2 indicates the second region included in the target mask image; it can be seen that the area of the second region is smaller than that of the first region. Alternatively, an arbitrary region may be taken from the first region as the second region, yielding the target mask image illustrated in fig. 18 (C), where G3 indicates the second region in the target mask image; again, the area of the second region is smaller than that of the first region.

The target mask image is overlaid on the first partial radar map, and the first partial map is extracted from the first partial radar map according to a second area included in the target mask image. For easy understanding, referring to fig. 19, fig. 19 is a schematic diagram of another embodiment of extracting a first local map in the embodiment of the present application, as shown in fig. 19, where (a) illustrates the first local radar map, H1 indicates an area behind a game character, fig. 19 (B) illustrates a target mask image, and H2 indicates a second area included in the target mask image, that is, a white area. After the target mask image is overlaid on the first partial radar map, based on the second area included in the target mask image, a corresponding image area may be extracted from the first partial radar map, so that the first partial map as illustrated in (C) in fig. 19 may be obtained.

In the embodiment of the application, a method for performing an erosion operation on the first mask image is provided. In the above manner, the target mask image is obtained after the erosion operation is performed on the mask image, and the local map is extracted through the target mask image and the local radar map. Since the mask image has been eroded, the number of pixel points in the region related to the game map is reduced, which reduces the amount of data involved in pixel matching, saves processing resources, and improves the efficiency of local map generation and thus of global game map generation.
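As a rough illustration of the erosion step, the following pure-NumPy sketch implements standard binary erosion with a 3x3 structuring element (the patent does not specify the exact kernel, so this is an assumption): a pixel stays foreground only if its whole 3x3 neighbourhood is foreground, so the map-related region shrinks.

```python
import numpy as np

def erode(mask, iterations=1):
    """Minimal binary erosion with a 3x3 structuring element. `mask` is an
    HxW array with 1 = foreground; each iteration peels one pixel off the
    foreground boundary, reducing the pixels that take part in matching."""
    m = mask.astype(bool)
    h, w = m.shape
    for _ in range(iterations):
        padded = np.pad(m, 1, mode="constant", constant_values=False)
        out = np.ones((h, w), dtype=bool)
        for dy in range(3):
            for dx in range(3):
                # pixel survives only if every 3x3 neighbour is foreground
                out &= padded[dy:dy + h, dx:dx + w]
        m = out
    return m.astype(np.uint8)
```

In practice an image-processing library's erosion routine would be used instead; the sketch only shows the effect the text relies on, namely that the eroded region is strictly smaller than the original.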

Optionally, on the basis of the embodiment corresponding to fig. 4, in another optional embodiment provided by the embodiment of the present application, matching the first local map with the first map area, and updating the spliced game map according to the matching result may include the following steps:

taking the first local map as a sliding window, and extracting K regions to be spliced from the first map region, wherein K is an integer greater than 1;

determining matching similarity corresponding to each to-be-spliced area in the K to-be-spliced areas to obtain K matching similarities;

determining the region to be spliced corresponding to the maximum value in the K matching similarities as a target splicing region;

and covering the first local map in the target splicing area so as to update the spliced game map.

In this embodiment, a method for updating the spliced game map is described. The game image generation device takes the first local map as a sliding window, sequentially extracts K regions to be spliced from the first map region, and then performs similarity matching between each of the K regions to be spliced and the first local map. Specifically, taking the similarity matching between one region to be spliced and the first local map as an example, each pixel value in the region to be spliced is first subtracted from the corresponding pixel value in the first local map, and the absolute values of the differences are summed to obtain a matching score. The matching score is inversely related to the matching similarity: a smaller matching score indicates a larger matching similarity, and conversely a larger matching score indicates a smaller matching similarity. By analogy, the matching similarity between each of the K regions to be spliced and the first local map can be obtained, yielding K matching similarities. Finally, the region to be spliced corresponding to the maximum value among the K matching similarities is taken as the target splicing region, and the first local map is overlaid on the target splicing region, so as to update the spliced game map.

For ease of understanding, the method of calculating the match score will be described below in one example. Assuming that the first local map includes four pixels, RGB values of each pixel are (100, 100, 100), (100, 150, 150), (100, 200, 200) and (200, 200, 200), the first local map is taken as a sliding window to extract 2 regions to be stitched, namely, a region to be stitched a and a region to be stitched B, wherein RGB values of each pixel in the region to be stitched a are (100, 110, 105), (100, 148, 152), (100, 190, 205) and (195, 195, 195), and RGB values of each pixel in the region to be stitched B are (80, 120, 120), (80, 180, 180), (80, 240, 240) and (180, 235, 235), respectively. For the area a to be stitched, the pixel point value at the corresponding position is subtracted from the pixel point value at the corresponding position in the first local map to obtain (0, 10, 5), (0, 2, 2), (0, 10, 5), and (5, 5, 5), so that adding all the values can obtain 49, i.e. the matching score is 49. For the area B to be stitched, (20, 20, 20), (20, 30, 30), (20, 40, 40) and (20, 35, 35) can be obtained by subtracting the pixel point value at the corresponding position from the pixel point value at the corresponding position in the first local map, so that 330 can be obtained by adding all the values, that is, the matching score is 330. Therefore, the matching score of the area A to be spliced is smaller, namely the matching similarity is larger, so that the area A to be spliced is determined as a target splicing area, and the first local map is covered on the target splicing area (namely the area A to be spliced) so as to update the spliced game map.
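The matching score above is a sum of absolute differences (SAD) over a sliding window. A self-contained NumPy sketch (function name illustrative) that returns the best-matching window position and its score:

```python
import numpy as np

def best_match(local_map, map_region):
    """Slide `local_map` (HxWx3) over `map_region` and score each window by
    the sum of absolute pixel differences (SAD). The window with the
    smallest score is the most similar, i.e. the target stitching region.
    Casting to signed ints avoids uint8 wraparound when subtracting."""
    local = local_map.astype(np.int64)
    region = map_region.astype(np.int64)
    h, w = local.shape[:2]
    best_score, best_pos = None, None
    for y in range(region.shape[0] - h + 1):
        for x in range(region.shape[1] - w + 1):
            score = np.abs(region[y:y + h, x:x + w] - local).sum()
            if best_score is None or score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

Running this on the worked example above reproduces matching scores of 49 for region A and 330 for region B, so region A would be chosen as the target splicing region.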

In the embodiment of the application, a method for updating the spliced game map is provided. In the above manner, similarity matching is performed between the regions to be spliced and the local map, the region to be spliced corresponding to the maximum matching similarity is determined as the target splicing region, and the local map is overlaid on the target splicing region so as to update the spliced game map. A higher matching similarity indicates that the obtained spliced game map is closer to the actual game map, thereby improving the accuracy of the spliced game map.

Optionally, on the basis of the embodiment corresponding to fig. 4, in another optional embodiment provided in the embodiment of the present application, if the update times of the spliced game map reach T times, after the global game map is generated according to the spliced game map updated for the T time, the method for generating the game map may further include the following steps:

acquiring a second local radar map corresponding to the game image to be tested;

generating a second mask image according to the second local radar map;

acquiring a second local map from the second local radar map according to the second mask image;

and performing similarity matching on the second local map and the global game map, and acquiring the position information of the game role in the global game map according to the matching similarity.

In this embodiment, a method for determining the position information corresponding to a game character is introduced. If the number of updates of the spliced game map reaches T, a game test may be started after the global game map is generated according to the spliced game map updated for the T-th time; for example, a screenshot of a gun battle game in progress may be captured to obtain the game image to be tested, or a frame of game image to be tested may be extracted from stored game images. The game image to be tested carries a corresponding second local radar map. The manner of obtaining the second local radar map is similar to that of obtaining the first local radar map in the foregoing embodiment, and details are not repeated here.

Specifically, since the game image to be tested includes the second local radar map, and the second local radar map includes interference information, the interference information also needs to be removed so as to generate the second mask image; the manner of generating the second mask image is similar to that of generating the first mask image in the foregoing embodiment, and is not described here again. After the second mask image is obtained, it may be directly overlaid on the second local radar map to obtain the second local map, or it may be overlaid on the second local radar map after being eroded; the manner of obtaining the second local map is similar to that of obtaining the first local map in the foregoing embodiment, and details are not repeated here. Similarity matching is then performed between the second local map and the global game map, and the maximum similarity is determined according to the matching scores: a larger matching score corresponds to a smaller matching similarity between the second local map and the global game map, and a smaller matching score to a larger matching similarity. The region to be matched corresponding to the maximum matching similarity is taken as the target matching region, and the horizontal and vertical coordinates of the game character in the target matching region are then acquired, thereby obtaining the position information of the game character in the global game map.

In the embodiment of the application, a method for determining the position information corresponding to a game character is provided. In the above manner, the global game map provides a complete game map; for each frame of game image, the second local map is obtained after noise interference is removed from its second local radar map, and the second local map is matched against the global game map for similarity, thereby improving the accuracy of the position information.

Optionally, on the basis of the embodiment corresponding to fig. 4, in another optional embodiment provided in the embodiment of the present application, similarity matching is performed between the second local map and the global game map, and the position information of the game character in the global game map is obtained according to the matching similarity, which may include the following steps:

taking the second local map as a sliding window, and extracting Q areas to be matched from the global game map, wherein Q is an integer greater than 1;

determining matching similarity corresponding to each to-be-matched area in the Q to-be-matched areas to obtain Q matching similarities;

determining a region to be matched corresponding to the maximum value in the Q matching similarity degrees as a target matching region, wherein the target matching region corresponds to a first abscissa and a first ordinate in the global game map;

determining a second abscissa according to the first abscissa corresponding to the target matching area and the width of the second local map, and determining a second ordinate according to the first ordinate corresponding to the target matching area and the height of the second local map;

and determining the position information of the game role in the global game map according to the second abscissa and the second ordinate.

In this embodiment, a method for determining location information based on matching similarity is introduced, in which a second local map is used as a sliding window, and the second local map is sequentially slid according to a certain step length (for example, 1 pixel point) from a global game map, so as to extract Q regions to be matched, and then similarity matching is performed between each region to be matched and the second local map. Taking similarity matching between a region to be matched and a second local map as an example, subtracting corresponding pixel points in the second local map from pixel points in the region to be matched, and calculating to obtain an absolute value based on differences of the pixel points, wherein the absolute value can be used as a matching score, the smaller the matching score is, the greater the matching similarity is, and otherwise, the greater the matching score is, the smaller the matching similarity is. Therefore, the minimum value needs to be selected from the Q matching scores, that is, the maximum value of the matching similarity is obtained, and the region to be matched corresponding to the maximum value of the matching similarity is determined as the target matching region.

Specifically, the target matching area corresponds to a first abscissa and a first ordinate in the global game map, and the game character is located at the center of the second local map. Therefore, the second abscissa (i.e., the abscissa of the game character in the global game map) is determined by the width of the second local map and the first abscissa of the target matching area (i.e., the abscissa of the upper-left vertex of the target matching area in the global game map), and the second ordinate (i.e., the ordinate of the game character in the global game map) is determined by the height of the second local map and the first ordinate of the target matching area (i.e., the ordinate of the upper-left vertex of the target matching area in the global game map). For easy understanding, referring to fig. 20, which is a schematic diagram of an embodiment of determining location information in the embodiment of the present application, as shown in the figure, I1 indicates the global game map, I2 indicates the target matching area, I3 indicates the first abscissa of the target matching area in the global game map, I4 indicates the first ordinate of the target matching area in the global game map, and I5 indicates the second local map, with the game character at its center. The coordinate distance from the game character to the left boundary of the second local map is I6, and the coordinate distance from the game character to the upper boundary of the second local map is I7, so the second abscissa of the game character is I3 + I6 and the second ordinate is I4 + I7; that is, the position information of the game character in the global game map is (I3 + I6, I4 + I7).

Specifically, it is assumed that the first abscissa is 30, the first ordinate is 15, and the height of the second local map is 20 and the width is 20, whereby the second abscissa is 40 and the second ordinate is 25, and therefore the position information of the game character in the global game map is (40, 25). It is to be understood that the foregoing examples are only for the understanding of the present solution, and the specific location information should be flexibly determined in combination with the actual situation.
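Since the character sits at the centre of the second local map, the offsets I6 and I7 above are half the local map's width and height, and the arithmetic reduces to a one-liner (integer division assumed for even dimensions):

```python
def character_position(first_x, first_y, local_w, local_h):
    """Global coordinates of the game character: the target matching
    region's upper-left corner plus half the second local map's width
    and height (the character is at the local map's centre)."""
    return first_x + local_w // 2, first_y + local_h // 2
```

With the numbers from the example above (first abscissa 30, first ordinate 15, local map 20x20), this yields (40, 25), as stated in the text.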

In the embodiment of the application, a method for determining position information based on matching similarity is provided, and through the above manner, the area to be matched corresponding to the maximum value in the matching similarity can be used as a target matching area, and as the matching similarity is higher, the obtained target matching area is closer to an actual game map, so that the accuracy of the position information is improved.

Optionally, on the basis of the embodiment corresponding to fig. 4, in another optional embodiment provided by the embodiment of the present application, the determining, by the game image generating apparatus, the position information of the game character in the global game map according to the second abscissa and the second ordinate may include:

the game image generation device normalizes the second abscissa according to the width of the global game map to obtain a third abscissa;

the game image generation device carries out normalization processing on the second vertical coordinate according to the height of the global game map to obtain a third vertical coordinate;

the game image generation device generates position information of the game character in the global game map based on the third abscissa and the third ordinate.

In this embodiment, a processing method for normalizing the abscissa and the ordinate is described, and in order to improve the accuracy of the position information, after the second abscissa and the second ordinate are acquired, the game image generation device may further perform normalization processing on the second abscissa and the second ordinate. Specifically, the second abscissa is normalized according to the width of the global game map to obtain a third abscissa, the second ordinate is normalized according to the height of the global game map to obtain a third ordinate, and then the position information is generated according to the third abscissa and the third ordinate obtained through normalization.

For ease of understanding, take a global game map with a width of 200 and a height of 150 as an example. Assuming that the second abscissa is 100 and the second ordinate is 30, the third abscissa obtained after normalizing the second abscissa by the width of the global game map is 0.5, and the third ordinate obtained after normalizing the second ordinate by the height of the global game map is 0.2, so the position information of the game character in the global game map is (0.5, 0.2). Further, assuming that the second abscissa is 80 and the second ordinate is 100, the third abscissa is 0.4 and the third ordinate is 0.67 (0.66666…, rounded to two decimal places), so the position information of the game character in the global game map is (0.40, 0.67). It should be appreciated that the foregoing examples are only for understanding the present solution, and the normalized specific position information should be flexibly determined in combination with the actual width and height of the global game map.
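The normalization is a straightforward division by the map dimensions; a minimal sketch, with rounding to two decimal places as in the example above (the function name and rounding precision are illustrative):

```python
def normalize_position(x, y, map_w, map_h, ndigits=2):
    """Normalize absolute map coordinates to [0, 1] by the global game
    map's width and height, rounded to `ndigits` decimal places."""
    return round(x / map_w, ndigits), round(y / map_h, ndigits)
```

This reproduces both worked examples: (100, 30) on a 200x150 map gives (0.5, 0.2), and (80, 100) gives (0.4, 0.67).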

In the embodiment of the application, a method for normalizing the abscissa and the ordinate is provided. By normalizing the coordinates of the game character in the above manner, precision loss is reduced and the accuracy of the position information is improved.

With reference to the above description, the game testing method of the present application is described below, taking a gun-battle game as an example. Please refer to fig. 21, which is a schematic diagram of an embodiment of the game testing method in the embodiment of the present application. As shown in the drawing, the embodiment includes:

201. Obtaining a local radar map corresponding to a game image to be tested;

In this embodiment, the game testing device may capture a screenshot of the gun-battle game in progress to obtain the game image to be tested, or may obtain the game image to be tested from its storage. Since the game images of the gun-battle game include local radar maps, the acquired game image to be tested also includes a local radar map. For the specific implementation of obtaining the local radar map, reference may be made to the embodiment corresponding to fig. 4, which is not described herein again.

It should be noted that the game testing device may be disposed in a server or a terminal device, and the game testing device is disposed in the server in this application as an example, which should not be construed as a limitation to this application.

202. Generating a mask image according to the local radar map;

In this embodiment, the game image to be tested includes a local radar map. The local radar map may contain a white sector area in front of the game character, an arrow corresponding to the game character, and an arrow corresponding to a teammate; these contents are all interference information and therefore need to be removed.

Specifically, RGB channel thresholds are obtained for the white sector area in front of the game character, the arrow corresponding to the game character, and the arrow corresponding to a teammate, respectively. The R channel image, G channel image, and B channel image of the local radar map are then acquired, and a binary image is obtained for each of the three objects according to the channel images and the object's corresponding channel thresholds. An OR operation is performed on these binary images to obtain a binary image to be processed. In order to remove interference factors irrelevant to the game map, the binary image to be processed is inverted to obtain a target binary image. The intersection region of the target binary image and a preset binary image is then acquired and determined as the mask image, thereby generating the mask image.
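As a hedged sketch of this pipeline (the thresholds, the tiny 2×2 channel images, and the preset region below are illustrative values, not the patent's): each interference object is binarised per channel, the per-object binary images are OR-ed, the result is inverted, and the inversion is intersected with a preset binary image.

```python
def binarize(channel, lo, hi):
    # 1 where the channel value falls inside [lo, hi], else 0
    return [[1 if lo <= v <= hi else 0 for v in row] for row in channel]

def logical_and(a, b):
    return [[x & y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def invert(a):
    # Negation operation: foreground becomes background and vice versa
    return [[1 - x for x in row] for row in a]

# Tiny 2x2 R/G/B channel images of a radar-map crop (illustrative)
r = [[250, 10], [250, 40]]
g = [[250, 20], [250, 50]]
b = [[250, 30], [10, 60]]

# One interference object here: "white" pixels with all channels in
# [200, 255]; with several objects, their binary images would be OR-ed.
white = logical_and(logical_and(binarize(r, 200, 255), binarize(g, 200, 255)),
                    binarize(b, 200, 255))
to_process = white                  # binary image to be processed
target = invert(to_process)         # target binary image
preset = [[1, 1], [1, 0]]           # preset binary image (valid map region)
mask = logical_and(target, preset)  # intersection region = mask image
print(mask)  # [[0, 1], [1, 0]]
```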

It can be understood that the thresholds corresponding to the white sector area in front of the game character, the arrow corresponding to the game character, and the arrow corresponding to a teammate differ from one another, and the specific thresholds are determined flexibly according to the actual situation. In addition, the specific implementation of determining the mask image is similar to that described for fig. 4 and its corresponding embodiment, and is not repeated herein.

203. Acquiring a local map from the local radar map according to the mask image;

In this embodiment, the game testing apparatus may directly overlay the mask image on the local radar map to extract the local map, or may first erode the mask image and then overlay the eroded mask image on the local radar map to extract the local map.

Specifically, the game testing apparatus may perform an erosion operation on the mask image, which may be processed in the manner described in the foregoing embodiments and is not limited herein. The eroded mask image has a smaller foreground area, which reduces the amount of pixel matching, saves processing resources, and improves the efficiency of local map generation and thus of global game map generation. The specific implementation of obtaining the local map is described for fig. 4 and its corresponding embodiment, and is not described herein again.
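A minimal pure-Python sketch of the erosion step; the structuring-element size and the toy mask are illustrative:

```python
def erode(mask, k=1):
    # Morphological erosion with a (2k+1)x(2k+1) square structuring element:
    # a pixel stays foreground only if its entire neighbourhood is foreground,
    # so the foreground area shrinks and fewer pixels need matching later.
    h, w = len(mask), len(mask[0])
    return [[int(all(0 <= i + di < h and 0 <= j + dj < w and mask[i + di][j + dj]
                     for di in range(-k, k + 1) for dj in range(-k, k + 1)))
             for j in range(w)] for i in range(h)]

mask = [[1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 0],
        [1, 1, 1, 1]]
# Only pixels whose full 3x3 neighbourhood is foreground survive
print(erode(mask))
```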

204. Carrying out similarity matching on the local map and the global game map, and acquiring the position information of the game role in the global game map according to the matching similarity, wherein the global game map is acquired by adopting the game map generation method in the embodiment;

In this embodiment, the game testing apparatus matches the local map with the global game map. For example, the local map is slid over the global game map with a step of 1 pixel, and a matching score is calculated after each slide. A larger matching score indicates a smaller matching similarity between the local map and the global game map, and a smaller matching score indicates a larger matching similarity; that is, the matching similarity is inversely related to the matching score, so selecting the minimum matching score is equivalent to selecting the maximum matching similarity. The region to be matched corresponding to the maximum matching similarity (i.e., the minimum matching score) is taken as the target matching region, and the abscissa and ordinate of the game character in the target matching region are then acquired, thereby obtaining the position information of the game character in the global game map.
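The sliding-window matching can be sketched with a sum-of-squared-differences score, where a smaller score means a larger matching similarity; the toy maps and function names below are illustrative:

```python
def match_score(patch, region):
    # Sum of squared differences: smaller score = higher similarity
    return sum((p - r) ** 2 for rp, rr in zip(patch, region)
               for p, r in zip(rp, rr))

def best_match(local_map, global_map):
    # Slide the local map over the global map with a 1-pixel step and keep
    # the offset whose score is minimal (i.e. whose similarity is maximal).
    H, W = len(global_map), len(global_map[0])
    h, w = len(local_map), len(local_map[0])
    best = None
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            region = [row[x:x + w] for row in global_map[y:y + h]]
            score = match_score(local_map, region)
            if best is None or score < best[0]:
                best = (score, x, y)
    return best  # (score, x, y) of the target matching area

global_map = [[0, 1, 2, 3],
              [4, 5, 6, 7],
              [8, 9, 10, 11]]
local_map = [[5, 6],
             [9, 10]]
print(best_match(local_map, global_map))  # (0, 1, 1): exact match at (1, 1)
```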

205. Generating a game test result according to the position information.

In this embodiment, the game testing apparatus records the position information corresponding to the game image to be tested and may compare the position information of several adjacent game images. If the AI character should be moving but its position information does not change, the character may be stuck, and a game test result such as "the character may be stuck" is generated for the game image to be tested.
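The comparison of adjacent position records might be sketched as follows; `detect_stuck`, the window size, and the tolerance are hypothetical choices, not values from the text:

```python
def detect_stuck(positions, window=3, eps=1e-6):
    # A character that should be moving but whose normalized position stays
    # fixed over several consecutive frames is flagged as possibly stuck.
    if len(positions) < window:
        return False
    recent = positions[-window:]
    return all(abs(x - recent[0][0]) < eps and abs(y - recent[0][1]) < eps
               for x, y in recent)

trace = [(0.50, 0.20), (0.51, 0.20), (0.51, 0.20), (0.51, 0.20)]
print(detect_stuck(trace))  # True: last three positions are identical
```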

In the embodiment of the application, a game testing method is provided. In the above manner, the global game map provides a complete game map; the local map of each frame of game image is obtained after noise interference is removed from the local radar map, and the local map is matched against the global game map by similarity, so that the accurate position of the AI character in the complete map can be determined conveniently, effectively reducing the limitations of game testing.

Referring to fig. 22, fig. 22 is a schematic view of an embodiment of a game map generating device according to an embodiment of the present application, and as shown in the drawing, the game map generating device 30 includes:

the acquisition module 301 is configured to acquire a game sample image and first action information corresponding to the game sample image from a recorded game sample, where the recorded game sample is a T-frame game image generated after a game role traverses a game scene, T is an integer greater than 1, and the game sample image includes a first local radar map;

a generating module 302, configured to generate a first mask image according to a first local radar map included in the game sample image;

the obtaining module 301 is further configured to obtain a first local map from the first local radar map according to the first mask image;

the determining module 303 is configured to determine a first map area from the mosaic game map according to the first action information;

the processing module 304 is configured to match the first local map with the first map area, and update the spliced game map according to a matching result;

the generating module 302 is further configured to generate a global game map according to the spliced game map after the T-th update if the number of updates of the spliced game map reaches T.

Optionally, on the basis of the embodiment corresponding to fig. 22, in another embodiment of the game map generation apparatus 30 provided in the embodiment of the present application,

a generating module 302, configured to obtain M binary images according to a first local radar map included in a game sample image, where M is an integer greater than or equal to 1;

acquiring binary images to be processed according to the M binary images;

carrying out negation operation on each pixel value in the binary image to be processed to obtain a target binary image;

and acquiring an intersection region corresponding to the target binary image from the preset binary image, and determining the intersection region as a first mask image.

Alternatively, on the basis of the embodiment corresponding to fig. 22, in another embodiment of the game map generation apparatus 30 provided in the embodiment of the present application,

the obtaining module 301 is specifically configured to determine an object to be extracted according to a first local radar map included in the game sample image, where the object to be extracted corresponds to a red R channel threshold, a green G channel threshold, and a blue B channel threshold;

acquiring an R channel image, a G channel image and a B channel image corresponding to the first local radar map;

acquiring a binary image corresponding to an R channel according to the R channel image and an R channel threshold corresponding to an object to be extracted;

acquiring a binary image corresponding to a G channel according to the G channel image and a G channel threshold corresponding to an object to be extracted;

acquiring a binary image corresponding to a channel B according to the channel B image and a channel B threshold corresponding to an object to be extracted;

and generating one binary image in the M binary images according to the binary image corresponding to the R channel, the binary image corresponding to the G channel and the binary image corresponding to the B channel.

Alternatively, on the basis of the embodiment corresponding to fig. 22, in another embodiment of the game map generation apparatus 30 provided in the embodiment of the present application,

a determining module 303, configured to determine a starting position of a previous mosaic from the mosaic game map;

determining a first edge position and a second edge position of the first map area according to the initial position and the first action information of the previous splicing;

and generating a first map area according to the first edge position and the second edge position, wherein the first map area comprises a local map spliced at the previous time.

Alternatively, on the basis of the embodiment corresponding to fig. 22, in another embodiment of the game map generation apparatus 30 provided in the embodiment of the present application,

the obtaining module 301 is specifically configured to overlay a first mask image on a first local radar map, where the first mask image includes an area related to a game map;

the first partial map is extracted from the first partial radar map based on an area related to the game map included in the first mask image.

Alternatively, on the basis of the embodiment corresponding to fig. 22, in another embodiment of the game map generation apparatus 30 provided in the embodiment of the present application,

the obtaining module 301 is specifically configured to perform a corrosion operation on a first mask image to obtain a target mask image, where the first mask image includes a first area related to the game map, the target mask image includes a second area related to the game map, and the second area is smaller than the first area;

covering the target mask image on the first local radar map;

and extracting the first partial map from the first partial radar map according to the second area included in the target mask image.

Alternatively, on the basis of the embodiment corresponding to fig. 22, in another embodiment of the game map generation apparatus 30 provided in the embodiment of the present application,

the processing module 304 is specifically configured to take the first local map as a sliding window, and extract K regions to be stitched from the first map region, where K is an integer greater than 1;

determining matching similarity corresponding to each to-be-spliced area in the K to-be-spliced areas to obtain K matching similarities;

determining the region to be spliced corresponding to the maximum value in the K matching similarities as a target splicing region;

and covering the first local map in the target splicing area so as to update the spliced game map.
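The covering (overlay) step performed by the processing module can be sketched as an in-place copy of the local map into the target splicing area; the function name and toy maps are illustrative:

```python
def paste_local_map(stitched, local_map, x, y):
    # Copy the matched local map into the target splicing area of the
    # stitched game map, overwriting whatever was there before.
    for dy, row in enumerate(local_map):
        for dx, value in enumerate(row):
            stitched[y + dy][x + dx] = value
    return stitched

stitched = [[0] * 4 for _ in range(3)]   # blank stitched map
local = [[1, 2], [3, 4]]                 # matched local map
paste_local_map(stitched, local, 1, 1)   # target area at corner (1, 1)
print(stitched)  # [[0, 0, 0, 0], [0, 1, 2, 0], [0, 3, 4, 0]]
```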

Alternatively, on the basis of the embodiment corresponding to fig. 22, in another embodiment of the game map generation apparatus 30 provided in the embodiment of the present application,

the obtaining module 301 is further configured to obtain a second local radar map corresponding to a game image to be tested after the global game map is generated according to the spliced game map of the T-th update (that is, after the number of updates of the spliced game map reaches T);

the generating module 302 is further configured to generate a second mask image according to the second local radar map;

the obtaining module 301 is further configured to obtain a second local map from the second local radar map according to the second mask image;

the processing module 304 is further configured to perform similarity matching between the second local map and the global game map, and obtain location information of the game character in the global game map according to the matching similarity.

Alternatively, on the basis of the embodiment corresponding to fig. 22, in another embodiment of the game map generation apparatus 30 provided in the embodiment of the present application,

the processing module 304 is specifically configured to take the second local map as a sliding window, and extract Q to-be-matched regions from the global game map, where Q is an integer greater than 1;

determining matching similarity corresponding to each to-be-matched area in the Q to-be-matched areas to obtain Q matching similarities;

determining the region to be matched corresponding to the maximum value among the Q matching similarities as a target matching region, wherein the target matching region corresponds to a first abscissa and a first ordinate in the global game map;

determining a second abscissa according to the first abscissa corresponding to the target matching area and the width of the second local map, and determining a second ordinate according to the first ordinate corresponding to the target matching area and the height of the second local map;

and determining the position information of the game role in the global game map according to the second abscissa and the second ordinate.
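One plausible reading of this step — an assumption, since the exact formula is not given here — is that the game character sits at the centre of the matched local map, so the second coordinates offset the corner of the target matching area by half the local map's width and height:

```python
def character_position(match_x, match_y, local_w, local_h):
    # Hypothetical: offset the top-left corner of the target matching area
    # by half the local map's size to reach the character's position.
    return match_x + local_w // 2, match_y + local_h // 2

# Matching area corner (90, 20), local map 20x20 -> character at (100, 30)
print(character_position(90, 20, 20, 20))  # (100, 30)
```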

Alternatively, on the basis of the embodiment corresponding to fig. 22, in another embodiment of the game map generation apparatus 30 provided in the embodiment of the present application,

the determining module 303 is specifically configured to perform normalization processing on the second abscissa according to the width of the global game map to obtain a third abscissa;

according to the height of the global game map, carrying out normalization processing on the second vertical coordinate to obtain a third vertical coordinate;

and generating the position information of the game role in the global game map according to the third abscissa and the third ordinate.

Referring to fig. 23, fig. 23 is a schematic view of an embodiment of a game testing device according to the present application, and as shown in the drawing, the game testing device 40 includes:

an obtaining module 401, configured to obtain a local radar map corresponding to a game image to be tested;

a generating module 402, configured to generate a mask image according to the local radar map;

the obtaining module 401 is further configured to obtain a local map from the local radar map according to the mask image;

the processing module 403 is configured to perform similarity matching between the local map and the global game map, and obtain location information of the game role in the global game map according to the matching similarity, where the global game map is obtained by using any one of the game map generation methods in the embodiments of the application;

the generating module 402 is further configured to generate a game test result according to the location information.

The embodiment of the present application further provides another game map generating apparatus and a game testing apparatus, which may be disposed in a computer device. The computer device may be a terminal device or a server; taking a server as an example, refer to fig. 24, which is a schematic structural diagram of a server provided in the embodiment of the present application. The server 500 may vary considerably in configuration or performance and may include one or more Central Processing Units (CPUs) 522 (e.g., one or more processors), a memory 532, and one or more storage media 530 (e.g., one or more mass storage devices) storing an application program 542 or data 544. The memory 532 and the storage medium 530 may be transient storage or persistent storage. The program stored on the storage medium 530 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Still further, the central processor 522 may be configured to communicate with the storage medium 530 and execute the series of instruction operations in the storage medium 530 on the server 500.

The server 500 may also include one or more power supplies 526, one or more wired or wireless network interfaces 550, one or more input-output interfaces 558, and/or one or more operating systems 541, e.g., Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.

The steps performed by the server in the above embodiment may be based on the server structure shown in fig. 24.

Taking the computer device as a terminal device as an example, as shown in fig. 25, only the part related to the embodiment of the present application is shown for convenience of description; for specific technical details not disclosed, please refer to the method part of the embodiment of the present application. The terminal device may be any terminal device including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a Point of Sales (POS) terminal, a vehicle-mounted computer, and the like. The terminal device being a mobile phone is taken as an example:

fig. 25 is a block diagram illustrating a partial structure of a mobile phone related to a terminal device provided in an embodiment of the present application. Referring to fig. 25, the cellular phone includes: radio Frequency (RF) circuit 610, memory 620, input unit 630, display unit 640, sensor 650, audio circuit 660, wireless fidelity (WiFi) module 670, processor 680, and power supply 690. Those skilled in the art will appreciate that the handset configuration shown in fig. 25 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.

The following describes each component of the mobile phone in detail with reference to fig. 25:

The RF circuit 610 may be used for receiving and transmitting signals during information transmission and reception or during a call. In particular, it receives downlink information from a base station and passes it to the processor 680 for processing, and transmits uplink data to the base station. In general, the RF circuit 610 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 610 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.

The memory 620 may be used to store software programs and modules, and the processor 680 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 620. The memory 620 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like, and the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone. Further, the memory 620 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.

The input unit 630 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 630 may include a touch panel 631 and other input devices 632. The touch panel 631, also referred to as a touch screen, may collect touch operations of the user on or near it (e.g., operations performed on or near the touch panel 631 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 631 may include a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 680, and can receive and execute commands sent by the processor 680. In addition, the touch panel 631 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. Besides the touch panel 631, the input unit 630 may include other input devices 632, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.

The display unit 640 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The display unit 640 may include a display panel 641; optionally, the display panel 641 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. Further, the touch panel 631 may cover the display panel 641; when the touch panel 631 detects a touch operation on or near it, the touch operation is transmitted to the processor 680 to determine the type of the touch event, and the processor 680 then provides a corresponding visual output on the display panel 641 according to the type of the touch event. Although in fig. 25 the touch panel 631 and the display panel 641 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 631 and the display panel 641 may be integrated to implement the input and output functions of the mobile phone.

The handset may also include at least one sensor 650, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 641 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 641 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.

The audio circuit 660, speaker 661, and microphone 662 can provide an audio interface between the user and the mobile phone. The audio circuit 660 may transmit an electrical signal converted from received audio data to the speaker 661, which converts it into a sound signal for output; on the other hand, the microphone 662 converts collected sound signals into electrical signals, which the audio circuit 660 receives and converts into audio data. The audio data is processed by the processor 680 and then transmitted via the RF circuit 610 to, for example, another mobile phone, or output to the memory 620 for further processing.

WiFi is a short-range wireless transmission technology. Through the WiFi module 670, the mobile phone can help the user receive and send e-mails, browse webpages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 25 shows the WiFi module 670, it is understood that it is not an essential component of the mobile phone and may be omitted as needed within a scope that does not change the essence of the invention.

The processor 680 is a control center of the mobile phone, and connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 620 and calling data stored in the memory 620, thereby performing overall monitoring of the mobile phone. Optionally, processor 680 may include one or more processing units; optionally, the processor 680 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 680.

The mobile phone also includes a power supply 690 (e.g., a battery) for powering the various components. Optionally, the power supply may be logically connected to the processor 680 via a power management system, so that charging, discharging, and power consumption management are implemented through the power management system.

Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.

The steps performed by the terminal device in the above-described embodiment may be based on the terminal device configuration shown in fig. 25.

Embodiments of the present application also provide a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the method described in the foregoing embodiments.

Embodiments of the present application also provide a computer program product including a program, which, when run on a computer, causes the computer to perform the methods described in the foregoing embodiments.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
