Virtual reality game system based on cloud computing technology
Note: This invention, "Virtual reality game system based on cloud computing technology", was designed and created by 袁雪梅 (Yuan Xuemei) on 2020-05-26. Abstract: The application discloses a virtual reality game system based on cloud computing technology, comprising a virtual reality device, an edge computing device, and a cloud server. After collecting a first user operation directed at a first virtual reality game picture, the virtual reality device sends a first picture acquisition request to the edge computing device. The edge computing device acquires a second virtual reality game picture from a local picture set according to the request, determines a prediction operation identifier set according to the second picture, sends the second picture to the virtual reality device, and sends the prediction operation identifier set to the cloud server. The virtual reality device runs the second virtual reality game picture. The cloud server generates a third virtual reality game picture and sends it to the edge computing device, which saves it to the local picture set. This technical scheme can reduce the cost of using VR games at home.
1. A virtual reality game system based on cloud computing technology, comprising a virtual reality device, an edge computing device, and a cloud server, wherein:
the virtual reality device is configured to perform the following steps: after collecting a first user operation directed at a first virtual reality game picture running on the virtual reality device, sending a first picture acquisition request to the edge computing device, wherein the first picture acquisition request is used for requesting a second virtual reality game picture corresponding to the first user operation;
the edge computing device is configured to perform the following steps: receiving the first picture acquisition request, acquiring the second virtual reality game picture from a local picture set according to the first picture acquisition request, and determining a prediction operation identification set according to the second virtual reality game picture, wherein the prediction operation identification set is used for indicating at least one next user operation corresponding to the first user operation; sending the second virtual reality game picture to the virtual reality device, and sending the prediction operation identification set to the cloud server;
the virtual reality device is further configured to perform the following steps: receiving the second virtual reality game picture, and running the second virtual reality game picture on the virtual reality device in response to the first user operation;
the cloud server is configured to perform the following steps: generating a third virtual reality game picture according to the prediction operation identification set, wherein the third virtual reality game picture is a virtual reality game picture corresponding to the user operation indicated by the prediction operation identification set; and sending the third virtual reality game picture to the edge computing device;
the edge computing device is further configured to perform the following steps: receiving the third virtual reality game picture, and storing the third virtual reality game picture in the local picture set.
2. The system of claim 1,
the virtual reality device is further configured to perform the following steps: collecting brain wave feedback information and user perspective information for the first virtual reality game picture; and sending the brain wave feedback information and the user perspective information to the edge computing device;
the edge computing device is further configured to perform the following steps: receiving the brain wave feedback information and the user perspective information; analyzing the user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information; generating a parameter adjustment instruction according to the user emotion and the user perspective information; and sending the parameter adjustment instruction to the virtual reality device;
the virtual reality device is further configured to perform the following steps: receiving the parameter adjustment instruction, and adjusting the running parameters of the virtual reality game picture running on the virtual reality device according to the parameter adjustment instruction.
3. The system of claim 1,
the virtual reality device is further configured to perform the steps of: collecting brain wave feedback information for the first virtual reality game picture; sending the brain wave feedback information to the edge computing device;
the edge computing device is further configured to perform the steps of: analyzing the user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information to obtain user emotion indication information;
the edge computing device is further configured to perform the steps of: adding the user emotion indication information into the prediction operation identification set;
the cloud server is further configured to perform the following steps: when generating a third virtual reality game picture according to the prediction operation identification set, adjusting the image quality parameters of the third virtual reality game picture so that the adjusted picture matches the user emotion.
4. The system according to claim 2 or 3, wherein, in the process of analyzing the user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information, the edge computing device is specifically configured to perform the following steps:
extracting features of the brain wave feedback information to obtain a feature vector corresponding to the brain wave feedback information;
inputting the feature vector into a preset brain wave analysis model to obtain an emotion recognition result of the brain wave analysis model; the brain wave analysis model comprises (m-1) layers, where m is the total number of user emotions that the model can recognize, and each layer consists of a different number of emotion recognition models: the i-th layer has i emotion recognition models, each of which recognizes two user emotions; the first emotion recognition model of the i-th layer is connected with the second emotion recognition model and the third emotion recognition model of the (i+1)-th layer, where one user emotion recognized by the second model is the same as one user emotion recognized by the first model, and one user emotion recognized by the third model is the same as the other user emotion recognized by the first model; each layer produces an emotion recognition result, the result of the (i+1)-th layer is associated with the result of the i-th layer, and the result of the (m-1)-th layer is the emotion recognition result of the brain wave analysis model; 1 ≤ i ≤ m-1;
and determining the user emotion corresponding to the first virtual reality game picture according to the emotion recognition result of the brain wave analysis model.
5. The system according to claim 4, wherein the edge computing device is specifically configured to perform the following steps in a process of performing feature extraction on the brain wave feedback information to obtain a feature vector corresponding to the brain wave feedback information:
respectively determining a first feature vector, a second feature vector and a third feature vector according to the brain wave feedback information, wherein the first feature vector is used for representing energy distribution of the brain wave feedback information, the second feature vector is used for representing complexity of the brain wave feedback information, and the third feature vector is used for representing fractal features of the brain wave feedback information;
and obtaining the feature vector corresponding to the brain wave feedback information according to the first feature vector, the second feature vector, and the third feature vector.
6. The system according to claim 5, wherein, in the process of determining the first feature vector, the second feature vector, and the third feature vector according to the brain wave feedback information, the edge computing device is specifically configured to perform the following steps:
calculating the energy features of the brain wave feedback information through a discrete Fourier transform to obtain the first feature vector;
calculating the sample entropy of the brain wave feedback information to obtain the second feature vector;
and calculating the fractal features of the brain wave feedback information through the Higuchi algorithm to obtain the third feature vector.
7. The system according to claim 5, wherein, in the process of determining the first feature vector, the second feature vector, and the third feature vector according to the brain wave feedback information, the edge computing device is specifically configured to perform the following steps:
performing wavelet transformation and reconstruction on the brain wave feedback information to obtain a wavelet decomposition coefficient and four rhythm waves of a brain wave signal;
calculating wavelet energy and wavelet entropy according to the wavelet decomposition coefficient, and determining the wavelet energy and the wavelet entropy as the first feature vector;
calculating approximate entropies of the four rhythm waves, and determining the approximate entropies as the second feature vector;
and calculating the Hurst exponents of the four rhythm waves, and determining the Hurst exponents as the third feature vector.
8. The system according to claim 5, wherein, in the process of determining the first feature vector, the second feature vector, and the third feature vector according to the brain wave feedback information, the edge computing device is specifically configured to perform the following steps:
performing wavelet transformation and reconstruction on the brain wave feedback information to obtain a wavelet decomposition coefficient and four rhythm waves of a brain wave signal;
calculating the energy features of the brain wave feedback information through a discrete Fourier transform, calculating wavelet energy and wavelet entropy according to the wavelet decomposition coefficients, and determining the energy features, the wavelet energy, and the wavelet entropy as the first feature vector;
calculating sample entropy of the brain wave feedback information, calculating approximate entropy of the four rhythm waves, and determining the sample entropy and the approximate entropy as the second feature vector;
and calculating the fractal features of the brain wave feedback information through the Higuchi algorithm, calculating the Hurst exponents of the four rhythm waves, and determining the fractal features and the Hurst exponents as the third feature vector.
9. The system according to any one of claims 5 to 8, wherein, in the process of obtaining the feature vector corresponding to the brain wave feedback information according to the first feature vector, the second feature vector, and the third feature vector, the edge computing device is specifically configured to perform the following steps:
and performing feature fusion on the first feature vector, the second feature vector, and the third feature vector to obtain the feature vector corresponding to the brain wave feedback information.
10. The system according to claim 1, wherein the first picture acquisition request includes a first picture identifier and a first operation identifier, the first picture identifier being the picture identifier of the first virtual reality game picture and the first operation identifier being the operation identifier of the first user operation;
the edge computing device is specifically configured to, in the process of acquiring the second virtual reality game picture from the local picture set according to the first picture acquisition request, perform the following steps:
determining a second picture identifier according to the first picture identifier and the first operation identifier, wherein the second picture identifier is the picture identifier of the second virtual reality game picture;
acquiring a plurality of picture materials corresponding to the second virtual reality game picture from the local picture set according to the second picture identifier;
and rendering a three-dimensional picture based on the plurality of picture materials to obtain the second virtual reality game picture.
Technical Field
The application relates to the field of games, in particular to a virtual reality game system based on a cloud computing technology.
Background
Virtual reality (VR) games are a new game mode that has emerged with the development of VR technology in recent years. Their principle is to generate a three-dimensional virtual world through computer simulation and provide the user with visual, auditory, tactile, and other sensory simulations, thereby delivering an immersive experience. Because of the requirements for fidelity and immersion, VR games place high demands on a device's graphics processing capability; a user who wants to play VR games at home must equip the home with a high-performance VR device, which is costly.
Disclosure of Invention
The application provides a virtual reality game system based on cloud computing technology to address the currently high cost of using VR games at home.
The application provides a virtual reality game system based on cloud computing technology, comprising a virtual reality device, an edge computing device, and a cloud server, wherein:
the virtual reality device is configured to perform the following steps: after collecting a first user operation directed at a first virtual reality game picture running on the virtual reality device, sending a first picture acquisition request to the edge computing device, wherein the first picture acquisition request is used for requesting a second virtual reality game picture corresponding to the first user operation;
the edge computing device is configured to perform the following steps: receiving the first picture acquisition request, acquiring the second virtual reality game picture from a local picture set according to the first picture acquisition request, and determining a prediction operation identifier set according to the second virtual reality game picture, wherein the prediction operation identifier set is used for indicating at least one next user operation corresponding to the first user operation; sending the second virtual reality game picture to the virtual reality device, and sending the prediction operation identifier set to a cloud server;
the virtual reality device is further configured to perform the following steps: receiving the second virtual reality game picture, and running the second virtual reality game picture on the virtual reality device in response to the first user operation;
the cloud server is configured to perform the following steps: generating a third virtual reality game picture according to the prediction operation identifier set, wherein the third virtual reality game picture is a virtual reality game picture corresponding to the user operation indicated by the prediction operation identifier set; and sending the third virtual reality game picture to the edge computing device;
the edge computing device is further configured to perform the following steps: receiving the third virtual reality game picture, and storing the third virtual reality game picture in the local picture set.
In the system, after receiving a first picture acquisition request sent by the virtual reality device in response to a user operation, the edge computing device acquires a second virtual reality game picture from a local picture set according to the request and sends it to the virtual reality device, which runs the second virtual reality game picture; this realizes the function of displaying the virtual reality game picture corresponding to the user operation. In addition, after receiving the first picture acquisition request, the edge computing device determines a prediction operation identifier set indicating the next user operation following the first user operation and sends it to the cloud server. The cloud server generates a third virtual reality game picture according to the prediction operation identifier set and sends it to the edge computing device, which stores it in the local picture set, realizing the pre-generation and caching of virtual reality game pictures.
On one hand, because the virtual reality device only needs to display virtual reality game pictures rather than render them, the performance requirements on the device are reduced, which lowers the cost of using VR games at home. On the other hand, because game pictures are generated in advance by the cloud server and cached by the edge computing device, both the interaction delay (the VR device communicates with the nearby edge device rather than the remote cloud) and the rendering delay (the picture is already generated when requested) are reduced. Shortening these two delays shortens the time from the moment the VR device collects a user operation to the moment it displays the corresponding VR game picture, avoiding game stuttering and ensuring a good user experience with VR games.
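A minimal Python sketch of this serve-and-prefetch flow, under the assumption of a toy operation graph; the class names, operation identifiers, and the `render` stand-in are all hypothetical, not part of the application:

```python
# Sketch: the edge device serves the current frame from its local
# picture set, then asks the cloud to pre-render frames for the
# predicted next operations, so the next request hits the cache.

class CloudServer:
    def render(self, op_id):
        # Stand-in for GPU rendering of the frame for one operation.
        return f"frame_for_{op_id}"

class EdgeDevice:
    def __init__(self, cloud):
        self.cloud = cloud
        self.local_frames = {}                      # the "local picture set"
        self.next_ops = {"op_start": ["op_left", "op_right"],
                         "op_left": ["op_start"],
                         "op_right": ["op_start"]}  # assumed game graph

    def handle_request(self, op_id):
        # 1. Serve the requested frame (render on a cache miss).
        frame = self.local_frames.get(op_id) or self.cloud.render(op_id)
        # 2. Predict the next operations and pre-render them in the cloud.
        for nxt in self.next_ops.get(op_id, []):
            self.local_frames[nxt] = self.cloud.render(nxt)
        return frame

edge = EdgeDevice(CloudServer())
first = edge.handle_request("op_start")   # cold: rendered on demand
cached = edge.handle_request("op_left")   # warm: served from the local set
```

In a real system the prediction step would of course consult game logic rather than a static table; the point is only that the cache is populated one step ahead of the user.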
In a possible implementation manner, the virtual reality device is further configured to perform the following steps: collecting brain wave feedback information and user perspective information for the first virtual reality game picture; and sending the brain wave feedback information and the user perspective information to the edge computing device;
the edge computing device is further configured to perform the following steps: receiving the brain wave feedback information and the user perspective information; analyzing the user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information; generating a parameter adjustment instruction according to the user emotion and the user perspective information; and sending the parameter adjustment instruction to the virtual reality device;
the virtual reality device is further configured to perform the following steps: receiving the parameter adjustment instruction, and adjusting the running parameters of the virtual reality game picture running on the virtual reality device according to the parameter adjustment instruction.
In the system, the user's emotion toward the VR game picture running on the VR device is analyzed from brain wave feedback information for that picture, and the virtual reality device adjusts the running parameters of the VR game picture according to the user emotion and the user perspective. The VR game picture thus adapts to the user's emotion and viewing angle, giving the user a better experience.
In a possible implementation manner, the virtual reality device is further configured to perform the following steps: collecting brain wave feedback information for the first virtual reality game picture, and sending the brain wave feedback information to the edge computing device;
the edge computing device is further configured to perform the following steps: analyzing the user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information to obtain user emotion indication information;
the edge computing device is further configured to perform the following steps: adding the user emotion indication information into the prediction operation identification set;
the cloud server is further configured to perform the following steps: and when a third virtual reality game picture is generated according to the prediction operation identification set, adjusting the image quality parameters of the third virtual reality game picture so that the adjusted third virtual reality game picture is matched with the emotion of the user.
In the system, the user's emotion toward the VR game picture running on the VR device is analyzed from brain wave feedback information for that picture, and an indicator of that emotion is carried in the prediction operation identification set that instructs the cloud server to generate VR game pictures. The cloud server can therefore adjust the image quality parameters when generating a picture so that they match the user emotion; in the subsequent VR game, the image quality then fits the user's emotional state, improving the experience of using the VR game.
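As a hedged illustration of carrying the emotion indicator alongside the predicted operations, the sketch below has the cloud pick image-quality parameters from the tag; the preset names and parameter values are invented for illustration only:

```python
# Assumed emotion -> image-quality presets (not from the application).
QUALITY_PRESETS = {
    "calm":    {"brightness": 0.5, "saturation": 0.5},
    "excited": {"brightness": 0.7, "saturation": 0.8},
    "anxious": {"brightness": 0.4, "saturation": 0.3},
}

def build_prediction_set(op_ids, emotion):
    # The edge device adds the emotion indicator to the operation set.
    return {"ops": op_ids, "emotion": emotion}

def cloud_generate(pred_set):
    # The cloud selects quality parameters matching the carried emotion
    # when generating the frame for each predicted operation.
    params = QUALITY_PRESETS.get(pred_set["emotion"], QUALITY_PRESETS["calm"])
    return [{"op": op, **params} for op in pred_set["ops"]]

frames = cloud_generate(build_prediction_set(["op_a", "op_b"], "excited"))
assert all(f["brightness"] == 0.7 for f in frames)
```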
In a possible implementation manner, the edge computing device is specifically configured to, in the process of analyzing the user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information, perform the following steps:
extracting features of the brain wave feedback information to obtain a feature vector corresponding to the brain wave feedback information;
inputting the feature vector into a preset brain wave analysis model to obtain an emotion recognition result of the brain wave analysis model; the brain wave analysis model comprises (m-1) layers, where m is the total number of user emotions that the model can recognize, and each layer consists of a different number of emotion recognition models: the i-th layer has i emotion recognition models, each of which recognizes two user emotions; the first emotion recognition model of the i-th layer is connected with the second emotion recognition model and the third emotion recognition model of the (i+1)-th layer, where one user emotion recognized by the second model is the same as one user emotion recognized by the first model, and one user emotion recognized by the third model is the same as the other user emotion recognized by the first model; each layer produces an emotion recognition result, the result of the (i+1)-th layer is associated with the result of the i-th layer, and the result of the (m-1)-th layer is the emotion recognition result of the brain wave analysis model; 1 ≤ i ≤ m-1;
and determining the user emotion corresponding to the first virtual reality game picture according to the emotion recognition result of the brain wave analysis model.
In the system, the feature vector corresponding to the brain wave feedback information is recognized and analyzed by a brain wave analysis model with a multilayer structure to determine the user emotion. The recognition logic is simple, so operating efficiency stays high even while recognizing as many user emotions as possible, which facilitates rapid adjustment of the running parameters.
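The layered pairwise structure can be read as a ladder of binary decisions: with m candidate emotions, m-1 binary comparisons suffice, each layer pitting the previous layer's winner against the next candidate. A minimal sketch follows, in which a prototype-distance scorer stands in for the trained binary emotion recognition models (the prototypes and emotion names are invented):

```python
# Ladder of m-1 binary comparisons over m candidate emotions:
# each "layer" compares the running winner with one new candidate.

def classify(feature_vec, emotions, score):
    """score(feature_vec, emotion) -> float; higher means better match."""
    winner = emotions[0]
    for challenger in emotions[1:]:       # layers 1 .. m-1
        if score(feature_vec, challenger) > score(feature_vec, winner):
            winner = challenger
    return winner

# Toy scorer: negative distance to an assumed per-emotion prototype.
protos = {"calm": 0.2, "happy": 0.6, "tense": 0.9}
score = lambda x, e: -abs(x - protos[e])

assert classify(0.55, list(protos), score) == "happy"
```

This keeps the per-query cost linear in the number of emotions, which is the efficiency property the paragraph above attributes to the layered model.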
In a possible implementation manner, the edge computing device is specifically configured to, in the process of performing feature extraction on the brain wave feedback information to obtain the feature vector corresponding to the brain wave feedback information, perform the following steps:
respectively determining a first feature vector, a second feature vector and a third feature vector according to the brain wave feedback information, wherein the first feature vector is used for representing energy distribution of the brain wave feedback information, the second feature vector is used for representing complexity of the brain wave feedback information, and the third feature vector is used for representing fractal features of the brain wave feedback information;
and obtaining the feature vector corresponding to the brain wave feedback information according to the first feature vector, the second feature vector, and the third feature vector.
In the system, by extracting feature vectors of the brain wave feedback information along multiple dimensions, the user emotion can be analyzed jointly from those dimensions, which helps guarantee the accuracy of emotion determination.
In a possible implementation manner, in the process of determining the first feature vector, the second feature vector, and the third feature vector according to the brain wave feedback information, the edge computing device is specifically configured to perform the following steps:
calculating the energy features of the brain wave feedback information through a discrete Fourier transform to obtain the first feature vector;
calculating the sample entropy of the brain wave feedback information to obtain the second feature vector;
and calculating the fractal features of the brain wave feedback information through the Higuchi algorithm to obtain the third feature vector.
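Minimal pure-Python sketches of these three extractors: a naive DFT energy (via Parseval's relation), sample entropy, and the Higuchi fractal dimension. The parameter choices (`m`, `r`, `kmax`) and the toy signals are illustrative assumptions, not values from the application:

```python
import math

def dft_energy(x):
    # Signal energy from a naive DFT: sum of squared spectral
    # magnitudes divided by n (Parseval's relation).
    n = len(x)
    energy = 0.0
    for k in range(n):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        energy += (re * re + im * im) / n
    return energy

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -ln(A/B): B counts m-length template pairs within
    # tolerance r (Chebyshev distance), A counts (m+1)-length pairs.
    def count(mm):
        tmpl = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        c = 0
        for i in range(len(tmpl)):
            for j in range(i + 1, len(tmpl)):
                if max(abs(a - b) for a, b in zip(tmpl[i], tmpl[j])) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a and b else float("inf")

def higuchi_fd(x, kmax=4):
    # Higuchi fractal dimension: slope of log L(k) versus log(1/k),
    # where L(k) averages normalized curve lengths over offsets m.
    n = len(x)
    pts = []
    for k in range(1, kmax + 1):
        lengths = []
        for m0 in range(k):
            sub = x[m0::k]
            if len(sub) < 2:
                continue
            dist = sum(abs(sub[i] - sub[i - 1]) for i in range(1, len(sub)))
            lengths.append(dist * (n - 1) / ((len(sub) - 1) * k * k))
        if lengths:
            pts.append((math.log(1.0 / k), math.log(sum(lengths) / len(lengths))))
    mx = sum(p for p, _ in pts) / len(pts)
    my = sum(q for _, q in pts) / len(pts)
    num = sum((p - mx) * (q - my) for p, q in pts)
    return num / sum((p - mx) ** 2 for p, _ in pts)

# Quick sanity checks on toy signals.
assert abs(dft_energy([1.0, 0.0, 0.0, 0.0]) - 1.0) < 1e-6   # Parseval
sine = [math.sin(2 * math.pi * t / 32.0) for t in range(96)]
fd = higuchi_fd(sine)        # a smooth curve has dimension near 1
```

Production code would use an FFT library instead of the O(n²) DFT loop; the naive version is kept here only to stay dependency-free.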
In a possible implementation manner, in the process of determining the first feature vector, the second feature vector, and the third feature vector according to the brain wave feedback information, the edge computing device is specifically configured to perform the following steps:
performing wavelet transformation and reconstruction on the brain wave feedback information to obtain a wavelet decomposition coefficient and four rhythm waves of a brain wave signal;
calculating wavelet energy and wavelet entropy according to the wavelet decomposition coefficient, and determining the wavelet energy and the wavelet entropy as the first feature vector;
calculating approximate entropies of the four rhythm waves, and determining the approximate entropies as the second feature vector;
and calculating the Hurst exponents of the four rhythm waves, and determining the Hurst exponents as the third feature vector.
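The Hurst exponent step can be realized, for example, with a rescaled-range (R/S) estimate; the window sizes and the estimator below are assumptions, offered as one common way to compute it:

```python
import math

def hurst_rs(x, windows=(8, 16, 32)):
    """Estimate the Hurst exponent by regressing log(R/S) on log(w)."""
    pts = []
    for w in windows:
        rs_vals = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            mean = sum(seg) / w
            dev = [v - mean for v in seg]
            cum, z = 0.0, []
            for d in dev:                  # cumulative deviation series
                cum += d
                z.append(cum)
            r = max(z) - min(z)            # range of the cumulative series
            s = math.sqrt(sum(d * d for d in dev) / w)   # std deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            pts.append((math.log(w), math.log(sum(rs_vals) / len(rs_vals))))
    # Least-squares slope of log(R/S) against log(w) is the H estimate.
    mx = sum(p for p, _ in pts) / len(pts)
    my = sum(q for _, q in pts) / len(pts)
    num = sum((p - mx) * (q - my) for p, q in pts)
    return num / sum((p - mx) ** 2 for p, _ in pts)

trend = [float(i) for i in range(64)]
h = hurst_rs(trend)
assert h > 0.8      # a strongly trending series gives H close to 1
```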
In a possible implementation manner, in the process of determining the first feature vector, the second feature vector, and the third feature vector according to the brain wave feedback information, the edge computing device is specifically configured to perform the following steps:
performing wavelet transformation and reconstruction on the brain wave feedback information to obtain a wavelet decomposition coefficient and four rhythm waves of a brain wave signal;
calculating the energy features of the brain wave feedback information through a discrete Fourier transform, calculating wavelet energy and wavelet entropy according to the wavelet decomposition coefficients, and determining the energy features, the wavelet energy, and the wavelet entropy as the first feature vector;
calculating sample entropy of the brain wave feedback information, calculating approximate entropy of the four rhythm waves, and determining the sample entropy and the approximate entropy as the second feature vector;
and calculating the fractal features of the brain wave feedback information through the Higuchi algorithm, calculating the Hurst exponents of the four rhythm waves, and determining the fractal features and the Hurst exponents as the third feature vector.
In a possible implementation manner, the edge computing device is specifically configured to, in the process of obtaining the feature vector corresponding to the brain wave feedback information according to the first feature vector, the second feature vector, and the third feature vector, perform the following steps:
performing feature fusion on the first feature vector, the second feature vector, and the third feature vector to obtain the feature vector corresponding to the brain wave feedback information.
In the system, performing feature fusion on the first, second, and third feature vectors extracts the components that best reflect the brain wave characteristics while reducing the dimensionality of the feature vector corresponding to the brain wave feedback information. Reducing the dimensionality, on the premise of preserving recognition accuracy, lowers the complexity of the subsequent emotion recognition computation and thus improves emotion recognition efficiency.
In a possible implementation manner, the first picture acquisition request comprises a first picture identifier and a first operation identifier, the first picture identifier being the picture identifier of the first virtual reality game picture and the first operation identifier being the operation identifier of the first user operation;
the edge computing device is specifically configured to, in the process of acquiring the second virtual reality game picture from the local picture set according to the first picture acquisition request, perform the following steps:
determining a second picture identifier according to the first picture identifier and the first operation identifier, wherein the second picture identifier is a picture identifier of the second virtual reality game picture;
acquiring a plurality of picture materials corresponding to the second virtual reality game picture from the local picture set according to the second picture identifier;
and rendering and generating a three-dimensional picture based on the plurality of picture materials to obtain the second virtual reality game picture.
In the system, the request for obtaining a game picture carries an identifier of the currently running picture and an identifier of the executed operation; the picture materials are then fetched and the picture is rendered, so the game picture of the VR game can be generated quickly and in real time.
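The identifier-to-picture chain above can be sketched as two table lookups followed by a rendering step; all table contents, identifiers, and the render stand-in are hypothetical illustrations of the local picture set, not data from the application:

```python
# (first picture id, operation id) -> second picture id -> materials
# -> rendered picture.  Tables are assumed stand-ins.
TRANSITIONS = {("frame_1", "op_jump"): "frame_2"}
MATERIALS = {"frame_2": ["sky.tex", "terrain.mesh", "player.model"]}

def get_second_picture(first_frame_id, first_op_id):
    # Step 1: derive the second picture identifier.
    second_id = TRANSITIONS[(first_frame_id, first_op_id)]
    # Step 2: fetch the picture materials from the local picture set.
    materials = MATERIALS[second_id]
    # Step 3: stand-in for rendering a 3D picture from the materials.
    return {"id": second_id, "rendered_from": sorted(materials)}

frame = get_second_picture("frame_1", "op_jump")
assert frame["id"] == "frame_2"
```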
The application can achieve the following beneficial effects: the cost of using VR games at home is reduced; the time from the VR device collecting a user operation to displaying the corresponding VR game picture is shortened; game stuttering is avoided; and the user experience of playing VR games is guaranteed.
Drawings
Fig. 1 is a schematic diagram of an edge computing-based VR game network system architecture provided in an embodiment of the present application;
Fig. 2 is a schematic block diagram of a virtual reality game system based on cloud computing technology according to an embodiment of the present application;
Fig. 3 is a schematic diagram of an association relationship between a VR game screen and a user operation according to an embodiment of the present application;
Fig. 4 is a schematic diagram of an electroencephalogram analysis model and the recognition logic of the electroencephalogram analysis model according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a virtual reality device according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an edge computing device according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a cloud server according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
The technical solution of the present application can be applied to VR game running scenarios. Based on the current VR game running scenario, the present application applies edge computing technology and cloud computing technology to VR games, provides a new system architecture for VR games, and obtains the virtual reality game system 10 based on cloud computing technology. The virtual reality game system 10 based on cloud computing technology may be as shown in fig. 1, including a
By applying edge computing and cloud computing technology in a VR game scenario, functions that would otherwise be executed or realized on the VR device can be offloaded to the edge computing device, and the low-latency characteristic of edge computing ensures the normal operation of the VR game. The VR device then only needs some basic functions (such as a display function and a communication function), so the cost of using VR games at home can be reduced.
The virtual reality game system based on the cloud computing technology of the present application is specifically described next.
Referring to fig. 2, fig. 2 is a schematic block diagram of a virtual reality game system based on cloud computing technology according to an embodiment of the present disclosure, and as shown in fig. 2, the virtual reality game system 20 based on cloud computing technology includes a
the
Specifically, the
It can be understood that the first virtual reality game screen is the VR game screen being displayed on the virtual reality device in real time, that is, the VR game screen displayed on the virtual reality device at the current moment, where the current moment refers to the time at which that VR game screen is being presented. The first user operation is a user operation performed by the user based on the first virtual reality game screen; its actual control object, namely the object actually controlled and acted upon through the operating peripherals (such as a gamepad or a game terminal), is the first virtual reality game screen. The second virtual reality game screen is the VR game screen that should be displayed on the virtual reality device after the first user operation is performed on the first virtual reality game screen.
The
In the embodiment of the present application, the local screen set is one or more local folders/local databases used for storing screen materials of VR game screens or VR game screens generated in advance for the VR game screen currently displayed by the
For example, 5 operations, i.e., operation 1, operation 2, operation 3, operation 4, and operation 5, may be performed on the VR game screen a being displayed on the
Alternatively, the local game screen set may include various contents for performing identification or establishing various association relationships, such as a correspondence relationship between VR game screens and user operations, an association relationship between VR game screens stored in the local game screen set and VR game screens currently being displayed by the
In a first possible scenario, the second virtual reality game screen is saved in the local screen set after being rendered by the
Specifically, the first picture acquisition request may carry a first picture identifier and a first operation identifier, where the first picture identifier and the first operation identifier are respectively a picture identifier of the first game picture and an operation identifier of the first user operation, and are respectively used to uniquely indicate the first game picture and the first user operation; the
Optionally, the first picture acquisition request may also carry other content related to the first game picture and the first user operation, so that the
In a second possible scenario, the
In this embodiment, the at least one next user operation corresponding to the user operation refers to an operation executed based on the second virtual reality game screen, that is, an actual control object of the next user operation is the second virtual reality game screen. Generally, the at least one next user operation corresponding to the user operation refers to all user operations executable on the basis of the second virtual reality screen.
In a specific implementation, the
By way of example, referring to fig. 3, fig. 3 shows an association relationship between a VR game screen and a user operation. As can be seen from fig. 3, if the user operation is user operation a, it can be determined that the next user operations of user operation a are user operation a1, user operation a2, and user operation a3. The predicted operation identifier set is the set of operation identifiers indicating user operation a1, user operation a2, and user operation a3.
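The predicted operation identifier set just illustrated can be read directly off the association relationship; the operation names below mirror the fig. 3 example but are otherwise illustrative:

```python
# Association relationship between an executed operation and the operations
# executable on the screen it produces (identifiers are illustrative).
next_operations = {
    "operation_a": ["operation_a1", "operation_a2", "operation_a3"],
    "operation_b": ["operation_b1", "operation_b2"],
}

def predicted_operation_set(executed_operation):
    """Operation identifiers of all next user operations of `executed_operation`."""
    return set(next_operations.get(executed_operation, []))

print(sorted(predicted_operation_set("operation_a")))
```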
The
Specifically, the
The
It is understood that the third virtual reality game screen is a VR game screen that should be displayed on the virtual reality device after the user operation indicated by the prediction operation instruction set is performed on the second virtual reality game screen. In a first possible scenario, the
It can be seen that before the edge computing device receives the request for acquiring the virtual reality game screen, the
It should be noted that, the aforementioned rendering and generation of the virtual reality game picture by the
The
In this system, after receiving the first picture acquisition request that the virtual reality device sends, based on a user operation, to acquire a virtual reality game picture, the edge computing device acquires the second virtual reality game picture from the local picture set according to the first picture acquisition request and sends it to the virtual reality device, which runs the second virtual reality game picture, thereby realizing the function of displaying the virtual reality game picture corresponding to the user operation. In addition, after receiving the first picture acquisition request, the edge computing device determines, according to that request, a predicted operation identifier set used to indicate the next user operations of the first user operation, and sends it to the cloud server; the cloud server generates a third virtual reality game picture according to the predicted operation identifier set and sends it to the edge computing device, which stores the received third virtual reality game picture in the local picture set, thereby realizing the pre-generation and storage of virtual reality game pictures.
On the one hand, because the virtual reality device only needs to display virtual reality game pictures and does not need to render them, the performance requirements on the virtual reality device are lowered, which can reduce the cost of using VR games at home. On the other hand, because the game pictures are generated in advance by the cloud server and stored by the edge computing device, both the latency of interaction between the edge computing device and the VR device and the latency of rendering the game pictures are reduced; reducing these two latencies shortens the time from the VR device collecting a user operation to the VR device displaying the corresponding VR game picture, thereby sparing the user the experience of game stutter and ensuring the user experience when playing the VR game.
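The round trip described above can be compressed into a toy simulation; the class and method names are assumptions made purely for illustration, and "rendering" is replaced by string concatenation:

```python
# Hypothetical simulation of one request round: the edge device answers the VR
# device from its local set, forwards the predicted operations to the cloud,
# and caches the third screens the cloud pre-generates.

class CloudServer:
    def generate(self, picture_id, operation_ids):
        # Stand-in for rendering: one third screen per predicted operation.
        return {op: f"{picture_id}+{op}" for op in sorted(operation_ids)}

class EdgeDevice:
    def __init__(self, cloud, local_set, next_ops):
        self.cloud, self.local_set, self.next_ops = cloud, local_set, next_ops

    def handle_request(self, picture_id, operation_id):
        second = self.local_set[(picture_id, operation_id)]   # local cache hit
        predicted = self.next_ops.get(second, [])
        # Pre-generate the third screens in the cloud and store them locally.
        for op, screen in self.cloud.generate(second, predicted).items():
            self.local_set[(second, op)] = screen
        return second

edge = EdgeDevice(CloudServer(),
                  local_set={("P1", "op1"): "P2"},
                  next_ops={"P2": ["op2", "op3"]})
print(edge.handle_request("P1", "op1"))   # second screen served to the VR device
print(("P2", "op2") in edge.local_set)    # a third screen is now pre-cached
```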
In some possible embodiments, the virtual reality game system based on the cloud computing technology may further acquire brainwave information of the user through the VR device, analyze emotion of the user based on the brainwave information, and adjust image quality parameters of a game picture of the VR game based on the emotion of the user.
The
Here, the brain wave feedback information is brain wave signals collected by the
The
In this embodiment, the
In one possible implementation, the user emotion classification model may be a tree structure-based brainwave analysis model.
In some embodiments, the brainwave analysis model is used for identifying m user emotions and may include (m-1) layers, each layer being composed of a different number of emotion recognition models, each emotion recognition model being used for identifying two user emotions. The first emotion recognition model of the i-th layer is connected with the second emotion recognition model and the third emotion recognition model of the (i+1)-th layer, wherein one user emotion recognizable by the second emotion recognition model is the same as one user emotion recognizable by the first emotion recognition model, and one user emotion recognizable by the third emotion recognition model is the same as the other user emotion recognizable by the first emotion recognition model.
The specific logic by which the brainwave analysis model classifies and identifies the feature vector corresponding to the brainwave feedback information may be as follows: take the 1st emotion recognition model as the target emotion recognition model of the 1st layer, input the feature vector corresponding to the brainwave feedback information into it, and determine its emotion recognition result as the emotion recognition result of the 1st layer; according to the emotion recognition result of the 1st layer, determine the target emotion recognition model of the 2nd layer from the second and third emotion recognition models of the 2nd layer connected with the target emotion recognition model of the 1st layer, input the feature vector corresponding to the brainwave feedback information into the target emotion recognition model of the 2nd layer, and determine its emotion recognition result as the emotion recognition result of the 2nd layer; continue in the same way until the emotion recognition result of the target emotion recognition model of the (m-1)-th layer is obtained, and determine that result as the emotion recognition result of the brainwave analysis model; finally, determine the user emotion corresponding to the first virtual reality game picture according to the emotion recognition result of the brainwave analysis model.
For the target emotion recognition model of the i-th layer: if its emotion recognition result corresponds to one of the user emotions it can recognize, the second emotion recognition model of the (i+1)-th layer connected with it is determined as the target emotion recognition model of the (i+1)-th layer; if its emotion recognition result corresponds to the other user emotion it can recognize, the third emotion recognition model of the (i+1)-th layer connected with it is determined as the target emotion recognition model of the (i+1)-th layer.
For example, the electroencephalogram analysis model and its recognition logic are described here taking m = 4 as an example, assuming the four user emotions are user emotion 1, user emotion 2, user emotion 3, and user emotion 4. Referring to fig. 4, fig. 4 is a schematic diagram of an electroencephalogram analysis model and its recognition logic according to an embodiment of the present application. As shown in fig. 4, each node in fig. 4 is an emotion recognition model, namely emotion recognition models M1 to M6, where emotion recognition model M1 is used to recognize user emotion 1 and user emotion 2, emotion recognition model M2 is used to recognize user emotion 1 and user emotion 3, emotion recognition model M3 is used to recognize user emotion 2 and user emotion 4, emotion recognition model M4 is used to recognize user emotion 1 and user emotion 4, emotion recognition model M5 is used to recognize user emotion 2 and user emotion 3, and emotion recognition model M6 is used to recognize user emotion 3 and user emotion 4. When the emotion recognition result of emotion recognition model M1 corresponds to user emotion 1, emotion recognition model M2 is determined as the target emotion recognition model of layer 2; when the emotion recognition result of emotion recognition model M1 corresponds to user emotion 2, emotion recognition model M3 is determined as the target emotion recognition model of layer 2.
Similarly, if emotion recognition model M2 is the target emotion recognition model at layer 2, then when its emotion recognition result corresponds to user emotion 1, emotion recognition model M4 is determined as the target emotion recognition model at layer 3, and when its result corresponds to user emotion 3, emotion recognition model M5 is determined as the target emotion recognition model at layer 3. If emotion recognition model M3 is the target emotion recognition model at layer 2, then when its result corresponds to user emotion 2, emotion recognition model M5 is determined as the target emotion recognition model at layer 3, and when its result corresponds to user emotion 4 (the other emotion recognizable by M3), emotion recognition model M6 is determined as the target emotion recognition model at layer 3. Finally, the emotion recognition result of the target emotion recognition model at the 3rd layer is determined as the emotion recognition result of the brainwave analysis model.
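The layer-by-layer routing just illustrated can be sketched as a walk down the fig. 4 tree. The per-node two-emotion classifiers are stubbed with an oracle (a real system would run trained models on the feature vector), so only the routing logic is demonstrated:

```python
# Each node: (emotion_a, emotion_b, child_if_a, child_if_b); leaf nodes have
# no children. Topology follows the fig. 4 example with m = 4.
TREE = {
    "M1": (1, 2, "M2", "M3"),
    "M2": (1, 3, "M4", "M5"),
    "M3": (2, 4, "M5", "M6"),
    "M4": (1, 4, None, None),
    "M5": (2, 3, None, None),
    "M6": (3, 4, None, None),
}

def classify(true_emotion, root="M1"):
    """Walk the tree; each node's model is stubbed by comparing to an oracle."""
    node = root
    while True:
        a, b, child_a, child_b = TREE[node]
        result = a if true_emotion == a else b   # stub for the node's model
        child = child_a if result == a else child_b
        if child is None:
            return result        # layer (m-1) result is the final result
        node = child

print([classify(e) for e in (1, 2, 3, 4)])
```

Note that only two or three of the six models run per classification, which is the efficiency argument made below.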
In one possible embodiment, each emotion recognition model described above may be implemented as a classification tree constructed based on a sign function. The formula of the classification tree may be f(x) = sign(wx + b), where x = (x1, x2, …, xh) is the feature vector corresponding to the brainwave feedback information, h is the vector dimension of that feature vector, w = (w1, w2, …, wh) holds the weight parameter of the feature vector in each vector dimension, and b is a bias parameter. Expanding the formula gives f(x) = sign(w1x1 + w2x2 + … + whxh + b). The two emotion recognition results of the emotion recognition model are 1 and -1, which respectively represent the two user emotions the model can recognize: if the calculated result is 1, the user emotion is one of the two recognizable user emotions; if the calculated result is -1, it is the other.
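A minimal sketch of the classification-tree formula f(x) = sign(wx + b); the weight and bias values are made up for illustration, and sign(0) is mapped to 1 here by choice:

```python
def sign_classify(w, x, b):
    """Return 1 or -1, the two emotion labels of one recognition model."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b   # w·x + b
    return 1 if s >= 0 else -1

w = [0.5, -0.25, 0.1]   # weight per feature-vector dimension (h = 3)
b = -0.05               # bias parameter
print(sign_classify(w, [1.0, 0.2, 0.0], b))
```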
Each emotion recognition model can be obtained through training. For one emotion recognition model, brainwave samples corresponding to the two user emotions it can recognize (take user emotion S1 and user emotion S2 as examples) are acquired, and feature extraction is performed on the brainwave samples corresponding to user emotion S1 and user emotion S2 respectively to obtain feature vector samples for each, where the numbers of brainwave samples and feature vector samples are both plural. Then one feature vector sample corresponding to user emotion S1 is used as the independent variable of the above formula, i.e. as x = (x1, x2, …, xh), and 1 is used as the dependent variable, i.e. as f(x) (i.e. y), yielding one training sample corresponding to user emotion S1; processing each such feature vector sample in this way yields a plurality of training samples corresponding to user emotion S1. Likewise, a feature vector sample corresponding to user emotion S2 is used as x = (x1, x2, …, xh) with -1 as the dependent variable, yielding a training sample corresponding to user emotion S2; processing each such feature vector sample yields a plurality of training samples corresponding to user emotion S2. Then the training samples corresponding to user emotion S1 and user emotion S2 are mapped to a high-dimensional space, a hyperplane that completely separates the two classes of elements (elements with different y) is found in that space, and the parameter values corresponding to this hyperplane are determined as the weight parameters in each dimension and the bias parameter.
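The text only states that a separating hyperplane between the two labelled sample classes is found (as a linear SVM would do); as a hedged stand-in, this sketch fits w and b with a plain perceptron update rule on separable toy feature-vector samples:

```python
def train_hyperplane(samples, epochs=100, lr=0.1):
    """samples: list of (x, y) with y in {+1, -1}. Returns (w, b)."""
    h = len(samples[0][0])
    w, b = [0.0] * h, 0.0
    for _ in range(epochs):
        for x, y in samples:
            # Perceptron rule: update on every misclassified sample.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Toy feature vectors: emotion S1 (+1) clusters high, emotion S2 (-1) low.
data = [([2.0, 2.0], 1), ([1.5, 2.5], 1), ([-2.0, -1.0], -1), ([-1.0, -2.0], -1)]
w, b = train_hyperplane(data)
print(all((sum(wi * xi for wi, xi in zip(w, x)) + b) * y > 0 for x, y in data))
```

A margin-maximizing solver would be the closer match to the hyperplane description; the perceptron is used only because it fits in a few lines.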
In this way, the edge computing device recognizes the user emotion through the multi-layer tree structure of the user emotion classification model, and the recognition logic is simple. Compared with running every emotion recognition model once to determine the emotion recognition result, the tree structure only needs to run part of the models to obtain the result, which improves computational efficiency when many categories of user emotion need to be recognized.
In other possible implementations, the user emotion classification model may also be a model that classifies the feature vector corresponding to the brainwave feedback information based on another structure or recognition logic to identify the user emotion. Specifically, the user emotion classification model may be a classification model based on fully connected layers, for example a multi-layer perceptron (MLP) classification model; or a classification model based on a convolutional neural network, such as VGG; or a classification model based on a nearest-neighbor algorithm, for example a k-nearest neighbor (KNN) classification model. It is not limited to the examples given here.
In the embodiment of the present application, the
In one possible implementation manner, the
Wherein, in determining the first feature vector, the second feature vector, and the third feature vector, the
Alternatively, the following steps may be performed: perform wavelet transformation and reconstruction on the brainwave feedback information to obtain the wavelet decomposition coefficients and the four rhythm waves of the brainwave signal (namely delta waves, theta waves, alpha waves, and beta waves), calculate the wavelet energy and wavelet entropy from the wavelet decomposition coefficients, and determine the wavelet energy and wavelet entropy as the first feature vector; calculate the approximate entropies of the four rhythm waves and determine them as the second feature vector; and calculate the Hurst exponents of the four rhythm waves and determine them as the third feature vector.
Alternatively, the following steps may also be performed: perform wavelet transformation and reconstruction on the brainwave feedback information to obtain the wavelet decomposition coefficients and the four rhythm waves of the brainwave signal; calculate the energy features of the brainwave feedback information through the discrete Fourier transform, calculate the wavelet energy and wavelet entropy from the wavelet decomposition coefficients, and determine the energy features together with the wavelet energy and wavelet entropy as the first feature vector; calculate the sample entropy of the brainwave feedback information and the approximate entropies of the four rhythm waves, and determine the sample entropy and approximate entropies as the second feature vector; and calculate the fractal feature of the brainwave feedback information through the Higuchi algorithm, calculate the Hurst exponents of the four rhythm waves, and determine the fractal feature and the Hurst exponents as the third feature vector.
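As a sketch of the "wavelet energy and wavelet entropy" step using their usual definitions (the wavelet decomposition itself, i.e. obtaining the per-level coefficients from the EEG signal, is assumed to have been done already and is outside this sketch):

```python
import math

def wavelet_energy_entropy(coeffs_per_level):
    """Per-level wavelet energies and the Shannon entropy of their distribution."""
    energies = [sum(c * c for c in level) for level in coeffs_per_level]
    total = sum(energies)
    probs = [e / total for e in energies]                 # relative energies
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return energies, entropy

# Two levels with equal energy -> maximal entropy ln(2).
energies, entropy = wavelet_energy_entropy([[1.0, 1.0], [1.0, 1.0]])
print(energies, round(entropy, 6))
```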
The first feature vector, the second feature vector and the third feature vector are not limited to those listed in the above embodiments, and the emotion of the user can be determined by extracting features of brainwave feedback information from multiple dimensions and combining multiple dimensions, so that the emotion recognition accuracy of the user can be improved. It should be understood that the more feature factors and feature extraction methods are considered in the feature extraction process, the more vector dimensions of the extracted feature vectors are, and the more accurate the identification can be.
Optionally, the
The
The
Specifically, the image quality parameters of the third virtual reality game screen may be parameters such as its resolution, brightness, and color saturation. For example, if the cloud server determines from the user emotion indication information that the user is worried, it increases the color saturation of the third virtual reality game picture to ease the user's worry; or, if the cloud server determines from the user emotion indication information that the user is angry, it increases the resolution of the third virtual reality game picture. The matching relationship between the adjusted image quality parameters and the user emotion can be set according to the actual application conditions of the virtual reality game, which is not limited in this application.
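One possible shape of the emotion-to-image-quality mapping, following the two examples in the text; the concrete parameter names and adjustment values are assumptions:

```python
# Assumed parameter names and deltas, for illustration only.
ADJUSTMENTS = {
    "worried": {"color_saturation": +0.2},   # richer color to ease worry
    "angry":   {"resolution_scale": +0.25},  # sharper picture for an angry user
}

def adjust_image_quality(params, emotion):
    """Return a copy of `params` adjusted for the recognized user emotion."""
    adjusted = dict(params)
    for key, delta in ADJUSTMENTS.get(emotion, {}).items():
        adjusted[key] = round(adjusted.get(key, 1.0) + delta, 6)
    return adjusted

base = {"resolution_scale": 1.0, "brightness": 1.0, "color_saturation": 1.0}
print(adjust_image_quality(base, "worried"))
```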
In this system, the user's emotion toward the VR game picture running on the VR device is analyzed from the brainwave feedback information for that picture, and indication information of the user emotion is carried along with the predicted operation identifiers that instruct the cloud server to generate VR game pictures. The cloud server can therefore adjust the image quality parameters when generating a VR game picture so that they match the user's emotion; in the subsequent VR game process the image quality thus matches the user's emotion, which improves the user experience of playing the VR game.
Optionally, before sending the second virtual reality game picture to the
In some possible embodiments, the VR device may further collect brainwave information of the user, analyze emotion of the user based on the brainwave information, and adjust operation parameters of a game screen of the VR game based on the emotion of the user.
The
For the description of the brainwave feedback information, reference may be made to the foregoing description, which is not repeated herein.
The user perspective information refers to information reflecting the current perspective of the user, which is collected by the
The
Reference is made to the preceding description for a specific implementation of the edge computing device for analyzing and determining the emotion of the user. The
The
In this system, the user's emotion toward the VR game picture running on the VR device is analyzed from the brainwave feedback information for that picture, and the virtual reality device is instructed, according to the user emotion and the user perspective, to adjust the running parameters of the VR game picture, so that the VR game picture adapts to the user's emotion and perspective information and brings a better user experience.
Having described the system of the present application, to better carry out the method of the present application, the various devices in the system of the present application are described next.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a virtual reality device according to an embodiment of the present disclosure, where the virtual reality device 50 includes a
The
The
The
The
Referring to fig. 6, fig. 6 is a schematic structural diagram of an edge computing device according to an embodiment of the present disclosure, where the edge computing device 60 includes a processor 601, a memory 602, and a communication interface 603. The processor 601, the memory 602, and the communication interface 603 may be connected by a bus.
The processor 601 is configured to support the device to implement the functions implemented by the
The memory 602 is used for storing program codes and the like. The memory 602 may include Volatile Memory (VM), such as Random Access Memory (RAM); the memory 602 may also include a non-volatile memory (NVM), such as a read-only memory (ROM), a flash memory (flash memory), a Hard Disk Drive (HDD) or a solid-state drive (SSD); the memory 602 may also comprise a combination of memories of the kind described above.
The communication interface 603 is used for performing communication-related functions such as transmitting data, receiving data, and the like in cooperation with the processor 601.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a cloud server according to an embodiment of the present disclosure, where the cloud server 70 includes a processor 701, a memory 702, and a communication interface 703. The processor 701, the memory 702, and the communication interface 703 may be connected by a bus.
The processor 701 is configured to support a cloud server to implement the functions implemented by the
The memory 702 is used to store program codes and the like. The memory 702 may include Volatile Memory (VM), such as Random Access Memory (RAM); the memory 702 may also include a non-volatile memory (NVM), such as a read-only memory (ROM), a flash memory (flash memory), a Hard Disk Drive (HDD) or a solid-state drive (SSD); the memory 702 may also comprise a combination of the above types of memory.
The communication interface 703 is used for performing communication-related functions such as transmitting data and receiving data in cooperation with the processor 701.
The above disclosure is only for the purpose of illustrating the preferred embodiments of the present application and is not to be construed as limiting the scope of the present application, so that the present application is not limited thereto, and all equivalent variations and modifications can be made to the present application.