Virtual picture processing method and device

Document No.: 1399864    Publication date: 2020-03-06

Reading note: This technology, "Virtual picture processing method and device" (一种虚拟画面处理方法及装置), was designed and created by 李侃, 苏泰梁, 马钦 and 刘文剑 on 2019-11-14. Abstract: The present specification provides a virtual picture processing method and apparatus, wherein the method includes: obtaining the motion speeds of a plurality of virtual models in a virtual scene; dividing a display picture of the virtual scene into a plurality of picture areas according to the motion speeds of the virtual models, and setting a corresponding resolution for each picture area; generating image data corresponding to each picture area according to the resolution corresponding to each picture area; and transmitting the image data corresponding to each picture area to a client respectively, so that the client synthesizes the complete display picture of the virtual scene.

1. A virtual picture processing method, comprising:

obtaining the motion speeds of a plurality of virtual models in a virtual scene;

dividing a display picture of the virtual scene into a plurality of picture areas according to the motion speeds of the virtual models, and setting corresponding resolution for each picture area;

generating image data corresponding to each picture area according to the resolution corresponding to each picture area;

and respectively transmitting the image data corresponding to each picture area to a client so that the client synthesizes a complete display picture of the virtual scene.

2. The method of claim 1, wherein obtaining the motion velocities of the plurality of virtual models in the virtual scene comprises:

acquiring a plurality of virtual models in the virtual scene;

and calculating the movement speed of each virtual model according to the position change of each virtual model in the current frame and the adjacent frames.

3. The method of claim 1, wherein the dividing the display of the virtual scene into a plurality of picture regions according to the motion velocities of the plurality of virtual models and setting a corresponding resolution for each picture region comprises:

comparing the motion speed of each virtual model with a plurality of preset speed threshold values, and dividing the virtual models within the same speed threshold value interval;

dividing the display picture of the virtual scene into a plurality of picture areas according to the display picture of the virtual scene corresponding to the virtual model within the same speed threshold interval;

and setting corresponding resolution for each picture area according to a preset corresponding relation rule.

4. The method of claim 3, wherein the dividing the display picture of the virtual scene into a plurality of picture areas comprises:

and dividing the display picture of the virtual scene into a plurality of rectangular picture areas.

5. The method of claim 1, further comprising, after setting a corresponding resolution for each picture region:

setting a boundary area at the boundary of two adjacent picture areas;

and setting the resolution of the boundary area to gradually change from the resolution of one of the two adjacent picture areas to the resolution of the other picture area.

6. The method of claim 1, wherein the generating image data corresponding to each of the picture areas comprises:

and drawing each picture area according to the resolution corresponding to each picture area to generate the image data of each picture area.

7. A virtual picture processing apparatus, comprising:

a speed acquisition module configured to acquire motion speeds of a plurality of virtual models in a virtual scene;

a picture segmentation module configured to divide a display picture of the virtual scene into a plurality of picture regions according to the motion speeds of the plurality of virtual models, and set a corresponding resolution for each picture region;

the data generation module is configured to generate image data corresponding to each picture area according to the resolution corresponding to each picture area;

and the data transmission module is configured to transmit the image data corresponding to each picture area to the client respectively so that the client synthesizes a complete display picture of the virtual scene.

8. The apparatus of claim 7, wherein the speed acquisition module comprises:

a model acquisition unit configured to acquire a plurality of the virtual models in the virtual scene;

and the speed calculation unit is configured to calculate the movement speed of each virtual model according to the position change of each virtual model in the current frame and the adjacent frames.

9. The apparatus of claim 7, wherein the picture segmentation module comprises:

the model dividing unit is configured to compare the motion speed of each virtual model with a plurality of preset speed threshold values and divide the virtual models in the same speed threshold value interval;

the picture dividing unit is configured to divide a display picture of the virtual scene into a plurality of picture areas according to the display picture of the virtual scene corresponding to the virtual model located in the same speed threshold interval;

and a resolution setting unit configured to set a corresponding resolution for each picture area according to a preset correspondence of speed to resolution.

10. The apparatus of claim 9, wherein the picture division unit is further configured to:

and dividing the display picture of the virtual scene into a plurality of rectangular picture areas.

11. The apparatus of claim 7, further comprising:

the boundary area configuration module is configured to set a boundary area at the boundary of two adjacent picture areas;

and the boundary area setting module is configured to set the resolution of the boundary area to gradually change from the resolution of one of the two adjacent picture areas to the resolution of the other picture area.

12. The apparatus of claim 7, wherein the data generation module comprises:

and the image drawing unit is configured to draw each picture area according to the resolution corresponding to each picture area and generate image data of each picture area.

13. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-6 when executing the instructions.

14. A computer-readable storage medium storing computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1 to 6.

Technical Field

The present disclosure relates to the field of animation image rendering and transmission technologies, and in particular, to a virtual image processing method, an apparatus, a computing device, and a computer-readable storage medium.

Background

In existing three-dimensional scene pictures, the display picture of a given virtual scene is rendered at a single, uniform resolution; for game pictures with a high resolution in particular, the amount of data to be processed is large. Meanwhile, with the development of computer technology, three-dimensional games have become a popular choice in the field of electronic games, and cloud-based three-dimensional games are a game mode based on cloud computing: the entire game runs on the server side, and the rendered game pictures are compressed and then transmitted to users over the network, so that the client-side device only needs basic video-decompression capability, rather than a high-end processor or graphics card, to enjoy high-quality game pictures. However, the transmission of three-dimensional game pictures in the prior art is limited by bandwidth cost and network resources, and dropped frames often occur during transmission when the amount of transmitted data is large, resulting in a stuttering game picture and seriously affecting the game experience of the user.

Disclosure of Invention

In view of the above, embodiments of the present disclosure provide a virtual image processing method, a virtual image processing apparatus, a computing device, and a computer-readable storage medium, so as to solve technical defects in the prior art.

According to a first aspect of embodiments of the present specification, there is provided a virtual screen processing method including:

obtaining the motion speeds of a plurality of virtual models in a virtual scene;

dividing a display picture of the virtual scene into a plurality of picture areas according to the motion speeds of the virtual models, and setting corresponding resolution for each picture area;

generating image data corresponding to each picture area according to the resolution corresponding to each picture area;

and respectively transmitting the image data corresponding to each picture area to a client so that the client synthesizes a complete display picture of the virtual scene.

According to a second aspect of embodiments herein, there is provided a virtual picture processing apparatus comprising:

a speed acquisition module configured to acquire motion speeds of a plurality of virtual models in a virtual scene;

a picture segmentation module configured to divide a display picture of the virtual scene into a plurality of picture regions according to the motion speeds of the plurality of virtual models, and set a corresponding resolution for each picture region;

the data generation module is configured to generate image data corresponding to each picture area according to the resolution corresponding to each picture area;

and the data transmission module is configured to transmit the image data corresponding to each picture area to the client respectively so that the client synthesizes a complete display picture of the virtual scene.

According to a third aspect of embodiments herein, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the virtual picture processing method when executing the instructions.

According to a fourth aspect of embodiments of the present specification, there is provided a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the virtual picture processing method.

According to the present application, the display picture of the virtual scene is divided into a plurality of picture areas according to the motion speeds of the plurality of virtual models, and a corresponding resolution is set for each picture area: picture areas in high-speed motion are displayed at a low resolution, while picture areas in low-speed motion are displayed at a high resolution, which reduces the amount of image data the system has to process. When applied to a cloud game, the image data can be divided into a plurality of parts that are transmitted to the client and synthesized there, which greatly saves bandwidth, alleviates dropped frames and stuttering during transmission of the cloud game picture, and gives the player a good gaming experience.

Drawings

FIG. 1 is a block diagram of a computing device provided by an embodiment of the present application;

fig. 2 is a flowchart of a virtual image processing method provided in an embodiment of the present application;

fig. 3 is another flowchart of a virtual image processing method according to an embodiment of the present application;

fig. 4 is another flowchart of a virtual image processing method according to an embodiment of the present application;

fig. 5 is a schematic diagram of a virtual image processing method according to an embodiment of the present application;

fig. 6 is another flowchart of a virtual image processing method according to an embodiment of the present application;

fig. 7 is a schematic structural diagram of a virtual picture processing apparatus according to an embodiment of the present application.

Detailed Description

In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many ways other than those described herein, and those skilled in the art can make similar modifications without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.

The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.

It will be understood that, although the terms first, second, etc. may be used in one or more embodiments herein to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first can also be referred to as a second and, similarly, a second can also be referred to as a first without departing from the scope of one or more embodiments of the present specification. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.

First, the terms used in one or more embodiments of the present specification are explained.

Virtual scene: generally refers to a three-dimensional virtual scene generated by a client device, such as a three-dimensional game scene or the like.

Virtual model: generally refers to a three-dimensional model in a three-dimensional virtual scene, such as a virtual object, task, or environment in a three-dimensional game scene.

Resolution: the resolution of the screen display. The screen resolution determines how much information is displayed on a computer screen and is expressed as the number of pixels displayed; a resolution of 160 × 128 means that there are 160 pixels in the horizontal direction and 128 pixels in the vertical direction. For the same screen size, the higher the resolution, the finer the displayed picture.

In the present application, a virtual screen processing method, an apparatus, a computing device and a computer readable storage medium are provided, which are described in detail in the following embodiments one by one.

FIG. 1 shows a block diagram of a computing device 100, according to an embodiment of the present description. The components of the computing device 100 include, but are not limited to, memory 110 and processor 120. The processor 120 is coupled to the memory 110 via a bus 130 and a database 150 is used to store data.

Computing device 100 also includes access device 140, which enables computing device 100 to communicate via one or more networks 160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. Access device 140 may include one or more of any type of network interface (e.g., a Network Interface Card (NIC)), whether wired or wireless, such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so on.

In one embodiment of the present description, the above-described components of computing device 100 and other components not shown in FIG. 1 may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 1 is for purposes of example only and is not limiting as to the scope of the description. Those skilled in the art may add or replace other components as desired.

Computing device 100 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smartphone), a wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 100 may also be a mobile or stationary server.

Wherein the processor 120 may perform the steps of the method shown in fig. 2. Fig. 2 is a schematic flowchart illustrating a virtual screen processing method according to an embodiment of the present application, including step 202 to step 208.

Step 202: the motion speeds of a plurality of virtual models in a virtual scene are obtained.

In the embodiment of the present application, a three-dimensional virtual scene contains a plurality of virtual models that make up the scene, and these virtual models often move relative to the user's virtual lens. In virtual scenes of three-dimensional games, and especially in first-person three-dimensional games, the virtual lens observes the entire virtual scene from the player's perspective, and the virtual model controlled by the player moves relative to virtual models controlled by other players or to other virtual models in the game. For example, in first-person shooter games and racing games, the player usually controls a certain virtual model (a character or a racing car) from the first-person perspective and moves it over a sand table or map made up of other virtual models, such as by walking, running, shooting or driving, so there are relative motion speeds between the virtual lens taking the player as the first perspective and the plurality of virtual models in the virtual scene. The three-dimensional game may be a cloud game running on a server, and the server may obtain the motion speeds, relative to the virtual lens, of the plurality of virtual models in the virtual scene of the three-dimensional game corresponding to the cloud game.

Step 204: and dividing the display picture of the virtual scene into a plurality of picture areas according to the motion speeds of the plurality of virtual models, and setting corresponding resolution for each picture area.

In the embodiment of the present application, when a plurality of virtual models move relative to the virtual lens, the server divides the display picture of the virtual scene into at least two picture areas according to the motion speed of each virtual model relative to the virtual lens, and sets a corresponding resolution for each picture area. The setting principle is that picture areas containing fast-moving models are given a low resolution and picture areas containing slow-moving models are given a high resolution. For example, when applied to a cloud game, the server, while preprocessing the display picture of the cloud game, divides it into at least two picture areas according to the motion speed of each virtual model relative to the virtual lens and sets the resolution corresponding to each picture area.

Step 206: and generating image data corresponding to each picture area according to the resolution corresponding to each picture area.

In the embodiment of the present application, the server performs view-frustum clipping and image rendering for the virtual lens according to the resolution set for each picture area; that is, it draws each picture area at its corresponding resolution and generates the image data of each picture area.
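As a concrete illustration of this per-region drawing step, the following is a minimal sketch in Python; the `Region` structure and the `render_view` callback are hypothetical stand-ins, since the application does not specify the engine API used for frustum clipping and rasterization.

```python
# Minimal sketch of step 206: draw each picture area at its own resolution.
# `Region` and `render_view` are hypothetical stand-ins; the application does
# not specify the engine API used for frustum clipping and rasterization.
from dataclasses import dataclass

@dataclass
class Region:
    rect: tuple        # (x, y, w, h) of the area in normalized screen coordinates (0..1)
    resolution: tuple  # (width_px, height_px) assigned from the speed interval

def render_regions(scene, camera, regions, render_view):
    """Produce one image buffer per picture area.

    render_view(scene, camera, rect, resolution) stands in for the engine's
    frustum-clipped rendering of the sub-rectangle `rect` of the camera's
    view at the given pixel resolution.
    """
    return {idx: render_view(scene, camera, r.rect, r.resolution)
            for idx, r in enumerate(regions)}
```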

Step 208: and respectively transmitting the image data corresponding to each picture area to a client so that the client synthesizes a complete display picture of the virtual scene.

In the embodiment of the present application, after the server transmits the image data corresponding to each picture area to the client over the network, the client synthesizes the image data corresponding to each picture area back into the complete display picture of the virtual scene and displays it in real time. The client may be a desktop computer, a set-top box, a mobile terminal, or the like. When applied to a cloud game, the client is only responsible for sending and receiving data and presenting the display picture of the cloud game, while storage and computation of the cloud game are completed on the server. When playing the game, the player operates the client to send data to the server, the server runs the game according to the operation, encodes and compresses the display picture of the cloud game, and returns it to the client over the network; finally, the client decodes and outputs the display picture of the cloud game.
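The client-side synthesis described above could look roughly like the following sketch, which assumes the Pillow imaging library and assumes each region arrives together with its normalized screen rectangle; the actual decoding and display pipeline of a client is not specified here.

```python
# Sketch of the client-side synthesis, assuming the Pillow imaging library and
# assuming each region arrives as (rect, image) with rect = (x, y, w, h) in
# normalized screen coordinates and image already decoded as a PIL.Image.
from PIL import Image

def compose_frame(region_images, screen_size=(1920, 1080)):
    sw, sh = screen_size
    frame = Image.new("RGB", screen_size)
    for (x, y, w, h), img in region_images:
        target = (int(w * sw), int(h * sh))
        # Regions rendered at a lower resolution are upscaled to their
        # on-screen size before being pasted into the full frame.
        frame.paste(img.resize(target), (int(x * sw), int(y * sh)))
    return frame
```

Regions rendered at a lower resolution are simply upscaled to their on-screen size before being pasted, so the assembled frame always matches the display resolution.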

According to the present application, the display picture of the virtual scene is divided into a plurality of picture areas according to the motion speeds of the plurality of virtual models, and a corresponding resolution is set for each picture area: picture areas in high-speed motion are displayed at a low resolution, while picture areas in low-speed motion are displayed at a high resolution, which reduces the amount of image data the system has to process. When applied to a cloud game, the image data can be divided into a plurality of parts that are transmitted to the client and synthesized there, which greatly saves bandwidth, alleviates dropped frames and stuttering during transmission of the cloud game picture, and gives the player a good gaming experience.

In one embodiment of the present application, as shown in fig. 3, the acquiring the motion velocities of the plurality of virtual models in the virtual scene includes steps 302 to 304:

step 302: obtaining a plurality of the virtual models in the virtual scene.

In the embodiment of the present application, the server may obtain the plurality of virtual models in the virtual scene in real time. That is, when the virtual lens moves within the virtual scene, it comes into relative motion with different virtual models in the scene and brings them into its view frustum. For example, when a cloud game is played from the first-person perspective, the player character may move to different positions in the three-dimensional game virtual scene corresponding to the cloud game and observe various virtual models along the way, such as environment objects, non-player characters and quest props, and the server may obtain in real time the plurality of virtual models in the three-dimensional game virtual scene observed by the virtual lens.
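A minimal sketch of collecting the models currently observed by the virtual lens is given below, under the simplifying assumptions that each model is tested by its position only and that the view frustum is given as inward-facing planes; `model_position` and the plane representation are illustrative, not the application's actual data structures.

```python
# Sketch: collect the virtual models currently inside the camera's view frustum.
# Simplifications: each model is tested by its position only, and the frustum is
# given as inward-facing planes ((nx, ny, nz), d) with nx*x + ny*y + nz*z + d >= 0
# for points inside. `model_position` is a hypothetical accessor.
def visible_models(models, frustum_planes, model_position):
    visible = []
    for m in models:
        x, y, z = model_position(m)
        if all(nx * x + ny * y + nz * z + d >= 0
               for (nx, ny, nz), d in frustum_planes):
            visible.append(m)
    return visible
```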

Step 304: and calculating the movement speed of each virtual model according to the position change of each virtual model in the current frame and the adjacent frames.

In the embodiment of the present application, the server may calculate the motion speed of each virtual model relative to the virtual lens at the current moment according to the position change of each virtual model between the current frame and the adjacent frame.
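For illustration, the speed of a model relative to the virtual lens can be approximated from its camera-relative positions in two consecutive frames, as in the following sketch (the function and argument names are hypothetical):

```python
# Sketch: a model's speed relative to the virtual lens, approximated from its
# camera-relative positions in the previous and current frames. Positions are
# (x, y, z) world-space tuples; frame_dt is the frame interval in seconds.
import math

def model_speed(pos_prev, pos_curr, cam_prev, cam_curr, frame_dt):
    rel_prev = tuple(p - c for p, c in zip(pos_prev, cam_prev))
    rel_curr = tuple(p - c for p, c in zip(pos_curr, cam_curr))
    displacement = math.dist(rel_curr, rel_prev)  # Euclidean distance, Python 3.8+
    return displacement / frame_dt
```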

In another embodiment of the present application, as shown in fig. 4, the dividing the display screen of the virtual scene into a plurality of screen regions according to the motion speeds of the plurality of virtual models, and setting a corresponding resolution for each screen region includes steps 402 to 406:

step 402: and comparing the motion speed of each virtual model with a plurality of preset speed threshold values, and dividing the virtual models positioned in the same speed threshold value interval.

In the embodiment of the present application, a plurality of speed thresholds are preset in the server, and every two adjacent speed thresholds define a speed threshold interval. The server compares the motion speed of each virtual model relative to the virtual lens with these speed thresholds and groups together the virtual models that fall within the same speed threshold interval. For example, in the first-person racing game shown in fig. 5, a plurality of speed thresholds, such as 20 m/s, 40 m/s, 60 m/s and 80 m/s, are preset in the server. The virtual lens in the game always observes the scene from the first-person perspective of the racing car model, and while the game runs the racing car model moves at different relative speeds with respect to the track model and the environment models on both sides of the track. The server classifies the track model and the environment models into the same category or different categories according to their motion speeds relative to the racing car model.
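A sketch of this grouping step follows, using the example thresholds from the text (20, 40, 60 and 80 m/s); the data layout is an assumption made for illustration.

```python
# Sketch: group models into speed-threshold intervals. The thresholds are the
# example values from the text; bisect finds the interval a speed falls into.
import bisect

SPEED_THRESHOLDS = [20.0, 40.0, 60.0, 80.0]  # m/s, example values

def bucket_by_speed(model_speeds):
    """model_speeds: {model_id: speed}. Returns {interval_index: [model_id, ...]},
    where interval 0 is below 20 m/s, interval 1 is 20-40 m/s, and so on."""
    buckets = {}
    for model_id, speed in model_speeds.items():
        interval = bisect.bisect_right(SPEED_THRESHOLDS, speed)
        buckets.setdefault(interval, []).append(model_id)
    return buckets
```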

Step 404: and dividing the display picture of the virtual scene into a plurality of picture areas according to the display picture of the virtual scene corresponding to the virtual model positioned in the same speed threshold interval.

In the embodiment of the present application, in the first-person racing game shown in fig. 5, when the track model and the environment models move at different speeds relative to the racing car model (the virtual lens), the server divides the display picture of the virtual scene into three areas, namely area A1, area A2 and area B, according to the parts of the display picture corresponding to the track model and to the environment models, where area A1 and area A2 both correspond to environment models.
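One simple way to turn such a grouping into picture areas is to take, for each speed interval, the union of the screen-space bounding boxes of its models, as sketched below; `project_to_screen` is a hypothetical helper, and a real divider may instead produce several separate rectangles per interval (as with areas A1 and A2 in fig. 5).

```python
# Sketch: one rectangular picture area per speed interval, as the union of the
# screen-space bounding boxes of the models in that interval. `project_to_screen`
# is a hypothetical helper returning (x_min, y_min, x_max, y_max) in normalized
# screen coordinates; a real divider may emit several rectangles per interval.
def regions_from_buckets(buckets, models, project_to_screen):
    regions = {}
    for interval, model_ids in buckets.items():
        boxes = [project_to_screen(models[m]) for m in model_ids]
        x0 = min(b[0] for b in boxes)
        y0 = min(b[1] for b in boxes)
        x1 = max(b[2] for b in boxes)
        y1 = max(b[3] for b in boxes)
        regions[interval] = (x0, y0, x1 - x0, y1 - y0)  # (x, y, w, h)
    return regions
```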

Step 406: and setting corresponding resolution for each picture area according to a preset corresponding relation rule.

In the embodiment of the present application, the server sets a corresponding resolution for each picture area according to a preset correspondence rule, where the rule contains the correspondence between each speed threshold interval and a resolution. For example, in the first-person racing game shown in fig. 5, the speed threshold interval from 20 m/s to 40 m/s may correspond to a resolution of 1600 × 1200, the interval from 40 m/s to 60 m/s may correspond to a resolution of 1200 × 800, and the interval from 60 m/s to 80 m/s may correspond to a resolution of 640 × 480. The correspondence rule thus assigns a low resolution to picture areas whose relative motion speed is fast and a high resolution to picture areas whose relative motion speed is slow.
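Expressed as code, the correspondence rule from this example might look like the following sketch (the interval boundaries and resolutions are the example values given above; the fallback behaviour outside the listed intervals is an assumption):

```python
# Sketch: the preset correspondence rule, using the example figures above
# (faster intervals map to lower resolutions). The fallback outside the listed
# intervals is an assumption made for illustration.
RESOLUTION_BY_INTERVAL = {
    (20.0, 40.0): (1600, 1200),
    (40.0, 60.0): (1200, 800),
    (60.0, 80.0): (640, 480),
}

def resolution_for_speed(speed):
    for (low, high), resolution in RESOLUTION_BY_INTERVAL.items():
        if low <= speed < high:
            return resolution
    return (1600, 1200) if speed < 20.0 else (640, 480)
```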

In the above embodiment, the dividing the display screen of the virtual scene into a plurality of screen areas includes:

and dividing the display picture of the virtual scene into a plurality of rectangular picture areas.

According to the present application, the display picture of the virtual scene is divided into a plurality of picture areas according to the motion speeds of the plurality of virtual models, and a corresponding resolution is set for each picture area. When applied to a cloud game, the picture of the cloud game does not need to be rendered at a single uniform resolution; areas whose virtual models move quickly are rendered at a low resolution, which saves image-data traffic and reduces stuttering of the cloud game picture.

In another embodiment of the present application, as shown in fig. 6, after setting the corresponding resolution for each picture area, steps 602 to 604 are further included:

step 602: and setting a boundary area at the boundary of two adjacent picture areas.

Step 604: and setting the resolution of the boundary area to gradually change from the resolution of one of the two adjacent picture areas to the resolution of the other picture area.

In an embodiment of the present application, the server may set a boundary area at the border between two adjacent picture areas, and the resolution within the boundary area changes continuously, so that the resolution difference between the two adjacent picture areas is joined seamlessly and the change of resolution is not visually noticeable. For example, in the first-person racing game shown in fig. 5, assuming that the resolution corresponding to the track model is 640 × 480 and the resolution corresponding to the environment models is 1200 × 800, a boundary area is set between the picture area corresponding to the track model and the picture area corresponding to the environment models, and the resolution of the boundary area gradually changes from 640 × 480 to 1200 × 800.
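A minimal sketch of such a gradual change: the boundary area is split into a fixed number of strips, and each strip's resolution is linearly interpolated between the two neighbouring resolutions. The strip count and the linear interpolation are illustrative choices; the application does not prescribe a particular interpolation.

```python
# Sketch: split the boundary area into a fixed number of strips and linearly
# interpolate each strip's resolution between the two neighbouring resolutions.
# The strip count and linear interpolation are illustrative choices only.
def boundary_resolutions(res_a=(640, 480), res_b=(1200, 800), steps=8):
    """Return `steps` intermediate resolutions stepping from res_a towards res_b."""
    out = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)
        w = round(res_a[0] + (res_b[0] - res_a[0]) * t)
        h = round(res_a[1] + (res_b[1] - res_a[1]) * t)
        out.append((w, h))
    return out
```

With the default values from fig. 5, this yields eight intermediate resolutions stepping from 640 × 480 towards 1200 × 800.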

By arranging transitional boundary areas between picture areas with different resolutions, the method and the apparatus make the borders between picture areas appear visually smooth.

Corresponding to the above method embodiment, the present specification also provides an embodiment of a virtual picture processing apparatus, and fig. 7 shows a schematic structural diagram of a virtual picture processing apparatus according to an embodiment of the present specification. As shown in fig. 7, the apparatus includes:

a speed obtaining module 701 configured to obtain motion speeds of a plurality of virtual models in a virtual scene;

a picture dividing module 702 configured to divide a display picture of the virtual scene into a plurality of picture regions according to the moving speeds of the plurality of virtual models, and set a corresponding resolution for each picture region;

a data generating module 703 configured to generate image data corresponding to each of the picture areas according to a resolution corresponding to each of the picture areas;

a data transmission module 704 configured to transmit the image data corresponding to each of the screen regions to the client, so that the client synthesizes a complete display screen of the virtual scene.

Optionally, the speed obtaining module 701 includes:

a model acquisition unit configured to acquire a plurality of the virtual models in the virtual scene;

and the speed calculation unit is configured to calculate the movement speed of each virtual model according to the position change of each virtual model in the current frame and the adjacent frames.

Optionally, the picture segmentation module 702 includes:

the model dividing unit is configured to compare the motion speed of each virtual model with a plurality of preset speed threshold values and divide the virtual models in the same speed threshold value interval;

the picture dividing unit is configured to divide a display picture of the virtual scene into a plurality of picture areas according to the display picture of the virtual scene corresponding to the virtual model located in the same speed threshold interval;

and a resolution setting unit configured to set a corresponding resolution for each picture area according to a preset correspondence of speed to resolution.

Optionally, the picture dividing unit is further configured to:

and dividing the display picture of the virtual scene into a plurality of rectangular picture areas.

Optionally, the apparatus further comprises:

a boundary area configuration module 705 configured to set a boundary area at the boundary between two adjacent picture areas;

a boundary area setting module 706 configured to set the resolution of the boundary area to gradually change from the resolution of one of the two adjacent picture areas to the resolution of the other picture area.

Optionally, the data generating module 703 includes:

and the image drawing unit is configured to draw each picture area according to the resolution corresponding to each picture area and generate image data of each picture area.

According to the present application, the display picture of the virtual scene is divided into a plurality of picture areas according to the motion speeds of the plurality of virtual models, and a corresponding resolution is set for each picture area: picture areas in high-speed motion are displayed at a low resolution, while picture areas in low-speed motion are displayed at a high resolution, which reduces the amount of image data the system has to process. When applied to a cloud game, the image data can be divided into a plurality of parts that are transmitted to the client and synthesized there, which greatly saves bandwidth, alleviates dropped frames and stuttering during transmission of the cloud game picture, and gives the player a good gaming experience.

An embodiment of the present application further provides a computing device, including a memory, a processor, and computer instructions stored on the memory and executable on the processor, where the processor executes the instructions to implement the following steps:

obtaining the motion speeds of a plurality of virtual models in a virtual scene;

dividing a display picture of the virtual scene into a plurality of picture areas according to the motion speeds of the virtual models, and setting corresponding resolution for each picture area;

generating image data corresponding to each picture area according to the resolution corresponding to each picture area;

and respectively transmitting the image data corresponding to each picture area to a client so that the client synthesizes a complete display picture of the virtual scene.

An embodiment of the present application further provides a computer-readable storage medium, which stores computer instructions, and the computer instructions, when executed by a processor, implement the steps of the virtual image processing method as described above.

The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the computer-readable storage medium and the technical solution of the virtual image processing method belong to the same concept, and details that are not described in detail in the technical solution of the computer-readable storage medium can be referred to the description of the technical solution of the virtual image processing method.

The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.

The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.

It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.

In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

The preferred embodiments of the present application disclosed above are intended only to help explain the application. The optional embodiments do not describe all of the details exhaustively, nor do they limit the invention to the specific implementations described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical use, so that those skilled in the art can understand and use the application well. The application is limited only by the claims and their full scope and equivalents.
