User portrait method based on a VR emergency environment

Document No. 57127 · Published: 2021-10-01

Note: this technique, a user portrait method based on a VR emergency environment, was designed and created by 何高奇, 王长波, 张嘉文, 毛羽霞 and 周黎 on 2021-06-28. The invention discloses a user portrait method based on a VR (virtual reality) emergency environment, characterized in that a VR game system is adopted to obtain the user's behavior data and Big Five personality information in the emergency environment; the user's game-behavior labels are determined according to the user's game behavior and a preset label lexicon; labels for model training are obtained from the Big Five information and used to train a preset machine learning model; and a user portrait for the emergency scene, presented as Big Five scores, is generated. Compared with the prior art, the invention collects the user's behavior data in the VR game, generates the corresponding preset labels and determines the user's final user portrait; it reflects the user's behavior in an emergency well, and the system is simple, convenient to use, time- and labor-saving and low in cost, providing a way to measure a user's Big Five personality and generate a user portrait without relying on questionnaire scales.

1. A user portrait method based on a VR (virtual reality) emergency environment, characterized in that a VR game system is adopted to obtain behavior data and Big Five personality information of a user in the emergency environment; game-behavior labels of the user are determined according to the user's game behavior and a preset label lexicon; labels for model training are obtained from the Big Five personality information; a preset machine learning model is trained; and a user portrait for the emergency scene, presented as Big Five scores, is generated; wherein the VR game system is composed of a game terminal, an interaction module, a data collection module and a prediction module; the game terminal is a display module that renders the user's interactive behavior and the scene, and presents the user portrait to the user in a visual form; the interaction module provides interaction between the user and game objects, including the player's walking control and the player's manipulation of objects in the game; the data collection module captures walking data, time data, manipulated-object data and the escape mode; the prediction module generates user features from the collected game-behavior data, uses the features as training data to predict the user portrait, and sends the user portrait to the game terminal for display.

2. The user portrait method based on a VR emergency environment according to claim 1, wherein the preset machine learning model consists of five decision tree models; the feature values are input into the five decision tree models as training data, and the scores on the five Big Five dimensions, ranging from 1 to 10, are fitted as ground-truth values; after the features determined by the user's attributes are parsed from the csv file, they are numbered 0 to 4 in order as feature values.

3. The user portrait method based on a VR emergency environment according to claim 1, wherein the walking data is the walking path of the game character in the game scene, the walking path consisting of a series of three-dimensional spatial points; the time data is the time at which each current position of the game character in the scene is captured and the time of each interactive operation; the manipulated-object data is the name of the object captured when the game character interacts with an interactable object in the scene; and the escape mode is the escape mode selected by the player when escaping the fire scene of the game.

4. The method of claim 1, wherein the user game-behavior data includes the user's walking trajectory and interactions with game objects.

5. The method of claim 1, wherein the user portrait in the display module is the score value on each of the five Big Five dimensions.

6. The user portrait method based on a VR emergency environment according to claim 1, wherein the interactive operations of the interaction module comprise: opening/closing a door, using a fire extinguisher, opening/closing a window, taking an elevator, and detecting whether the player has selected an escape mode, upon which the game ends.

7. The user portrait method based on a VR emergency environment according to claim 1, wherein the player's walking control means that the player controls the character's walking in the game scene by manipulating the VR handle controller; and the player's manipulation of objects means that the player operates interactable game objects in the scene.

Technical Field

The invention relates to the technical field of computer human-computer interaction, and in particular to a user portrait method based on a VR emergency environment.

Background

Personality descriptions consist of statements about behavior patterns that are stable over time and across situations. Personality is expressed through behavior, and knowing a person's personality makes it possible, to some extent, to predict their work preferences, style and habits in advance. In an emergency evacuation setting, a user who knows their own personality traits before a disaster strikes can, when one occurs, combine those traits to choose a better escape route on the spot. In psychology, personality is usually measured with questionnaires, which are time-consuming and do not reflect well how a user actually behaves in an emergency.

In the computer field, personality measurements are generally obtained through big-data-based and video-based approaches. Big-data-based approaches typically collect personal information from a user's social-network account, such as personal avatars, favorite pictures, shopping history and browsing records, and model and predict the user's personality traits from the collected information; this technique is commonly applied in recommendation systems. Video-based approaches capture the facial features of a person in recorded video for prediction; this technique is commonly used in recruitment to give an enterprise more personal information about a candidate.

The prior art models the collected information to predict the user's personality traits, but none of these applications measures personality from the angle of behavior in an emergency evacuation scene.

Disclosure of Invention

The invention aims to provide, against the defects of the prior art, a user portrait method based on a VR emergency environment. Immersive VR equipment is used to create an emergency scene; behavior data collected in that scene generates corresponding preset labels, indices are established, and personality is predicted through a machine learning model built by the system, forming a user portrait for the emergency scene. The method reflects the user's behavior under emergency conditions well; the system is simple, convenient to use, time- and labor-saving, and low in cost. It solves the problems that questionnaire-based personality measurement is time-consuming and labor-intensive and does not reflect the user's behavior in an emergency, and it provides a way to generate a user portrait for measuring the user's Big Five personality without relying on questionnaire scales.

The purpose of the invention is realized as follows: a user portrait method based on a VR emergency environment, in which a VR game system is adopted to obtain the user's behavior data and Big Five personality information in the emergency environment; the user's game-behavior labels are determined according to the user's game behavior and a preset label lexicon; labels for training the prediction model are obtained from the Big Five personality information and used to train a preset machine learning model; and a user portrait for the emergency scene, presented as Big Five scores, is generated. The VR game system is composed of a game terminal, an interaction module, a data collection module and a prediction module. The game terminal is a display module that renders the user's interactive behavior and the scene and presents the user portrait to the user in a visual form; the interaction module provides interaction between the user and game objects, including the player's walking control and the player's manipulation of objects in the game; the data collection module captures walking data, time data, manipulated-object data and the escape mode; the prediction module generates user features from the collected game-behavior data, uses the features as training data to predict the user portrait, and sends the user portrait to the game terminal for display.

The behavior data comprises the user's trajectory in the VR game scene, interactions with objects in the scene, and the escape mode selected by the user.

The user behavior data is used to characterize the user's attributes; the user's attribute features are determined from the behavior data.

The labels for training the prediction model are obtained from the user's personality information, and the preset machine learning model is trained on the user's attributes together with these labels.

The emergency scene is a fire in a teaching building, from which the user must escape by controlling the VR handle.

Determining the user's attributes from the behavior data specifically includes: collecting, in the emergency evacuation scene, the user's escape time, walking path, interactive operations with other objects, and selected escape mode.

The escape time is specifically divided, according to the actual time taken to escape, into slow, medium and fast escape.

The walking path is specifically divided, according to the overlap points of the escape route, into strongly purposeful escape and weakly purposeful escape.

The interactive operation with other objects is specifically divided, according to the user's interaction with game objects, into using and not using the fire extinguisher.

The selected escape mode is specifically divided, according to the user's way of escaping, into jumping out of a window, extinguishing the fire with a fire extinguisher, taking the elevator, escaping through the safety exit, and escaping through a common channel.

Obtaining the labels for training the prediction model from the user's personality information specifically includes: collecting the subject's Big Five questionnaire and obtaining the score on each dimension, which serves as the label of the data set.

The preset machine learning model is a decision tree model; the features determined from the user's attributes are the input of the training set, and the user's Big Five scores are the labels of the training set.

Compared with the prior art, the invention collects the user's behavior data in the VR game, generates the corresponding preset labels and determines the user's final user portrait; it reflects the user's behavior in an emergency well, and the system is simple, convenient to use, time- and labor-saving and low in cost, providing a way to measure a user's Big Five personality and generate a user portrait without relying on questionnaire scales.

Drawings

Fig. 1 is a framework diagram of the system for acquiring a user portrait in an emergency evacuation scene according to embodiment 1;

Fig. 2 is a framework diagram of the system for acquiring a user portrait in an emergency evacuation scene according to embodiment 2.

Detailed Description

The invention is described in further detail below through specific embodiments:

example 1

Referring to fig. 1, the VR-based system for acquiring a user portrait in an emergency evacuation scene comprises a game function system and a game terminal. The game function system includes: an interaction module for the user's interaction with game objects, a data collection module for the game data generated during use, and a prediction module that ultimately generates the user portrait. The emergency evacuation scene is developed with Unity 2019.3.15f1, and the interaction module, data collection module and prediction module are written as C# scripts. The interaction module controls the player's walking direction, walking speed and interaction with objects. The scripts controlling walking direction and speed are mounted on the player; the walking direction is obtained from the Axis value of the touch pad of the VR handle operated by the player. The walking-speed script checks whether the player presses the A key: the player walks fast while A is pressed and slowly otherwise, with the slow and fast walking speeds set to 2 m/s and 5 m/s respectively.

The object-operation scripts are mounted on the corresponding objects; the interactable objects include doors, fire extinguishers, windows and the like. The script mounted on a door implements opening and closing the door; the script mounted on a window implements opening and closing the window or jumping out of it to escape; the script mounted on a fire extinguisher lets the player use it to extinguish fire.

Specifically, an empty object is mounted on the door body and positioned on the door's hinge axis, with a box-shaped bounding box attached. When the player enters the bounding box, a collision is detected and the system checks whether the player's handle presses the Trigger key; when the player presses the Trigger key, the empty object is rotated by 90 degrees, driving the door to rotate and thereby opening or closing it.

An empty object is likewise mounted on the window and placed on the window's hinge axis, with a box-shaped bounding box attached. When the player enters the bounding box, a collision is detected and the system checks whether the player's handle presses the Trigger key; pressing the Trigger key rotates the empty object by 90 degrees, driving the window to open or close. When the player is detected pressing the B key, the jump-out-of-the-window escape is triggered.

Two empty objects are mounted on the fire extinguisher, one at its hand-held position and the other at its nozzle. A box-shaped bounding box is mounted at the hand-held position. When the player's handle enters the bounding box, a collision is detected and the system checks whether the right-hand handle presses the Grip key; when the player presses the Grip key, the extinguisher is attached to the player's hand position, and pressing Grip again puts the extinguisher down, implementing picking up and putting down. While the extinguisher is held, the system checks whether the right-hand handle presses the Trigger key; while Trigger is pressed, white particles representing the extinguisher's spray are emitted from the nozzle position.

Further, when the extinguisher's spray touches a flame, a collision on the flame is detected; if the colliding object is the white particles, the flame's particle count is reduced, producing the effect of the fire being gradually extinguished.

The data collection module captures the user's walking data, escape-time data, manipulated-object data and escape mode. Specifically, walking data is captured by observation nodes placed at each corner of the corridor and at the center of the corridor on each floor. When the user triggers an observation node, the script mounted on it writes the triggered track point into a local csv file.

The escape-time script is mounted on the game manager, an empty object that controls the running of the whole game and records the process from the user entering the game to the end of the escape. When the end of the game is triggered, the end time is recorded and written to the local csv file.

The script that records manipulated-object data is mounted on each interactable object; the recorded value is a Boolean. When the user uses an interactable object, its trigger is set to true, and when the game ends the values are written to the local csv file.

The interactable objects include: doors, windows, elevator buttons and fire extinguishers. The escape modes include jumping out of a window, extinguishing the fire with a fire extinguisher, taking the elevator, escaping through the safety exit, escaping through a common channel, and the like. When the user triggers the corresponding escape mode, the script or observation node on the corresponding interactive object records it and writes the csv file locally.
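The trigger-and-write logic above can be sketched as follows. This is a minimal Python illustration of the data-collection idea only (the actual scripts are C# mounted on Unity objects); the class name and object names are hypothetical.

```python
import csv
import io

class InteractionLog:
    """Boolean triggers for the interactable objects, written to CSV at game end."""

    def __init__(self, objects):
        # Every interactable object starts untriggered (false).
        self.triggers = {name: False for name in objects}

    def on_use(self, name):
        # When the user uses an interactable object, its trigger is set to true.
        self.triggers[name] = True

    def write_csv(self, stream):
        # At game end, write one header row and one row of 0/1 flags.
        writer = csv.writer(stream)
        writer.writerow(self.triggers.keys())
        writer.writerow(int(v) for v in self.triggers.values())

log = InteractionLog(["door", "window", "elevator_button", "fire_extinguisher"])
log.on_use("fire_extinguisher")
buf = io.StringIO()
log.write_csv(buf)
```

In the game itself, the equivalent write happens once when the end of the game is triggered, to the same local csv file the other modules use.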

The prediction module generates the user features and the training data. The user features are read from the local csv file and processed into escape time, walking path, interactive operations with other objects, selected escape mode, and the like. Specifically, the escape time is divided into three cases, slow, medium and fast escape: escaping within 100 s is fast, within 150 s is medium, and more than 150 s is slow.
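The escape-time binning can be sketched as below, assuming boundary values (exactly 100 s or 150 s) fall into the faster category, which the source's "within" phrasing suggests:

```python
def escape_speed(seconds):
    """Bin actual escape time: <=100 s fast, <=150 s medium, otherwise slow."""
    if seconds <= 100:
        return "fast"
    if seconds <= 150:
        return "medium"
    return "slow"
```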

The walking path is classified as a strongly purposeful or a weakly purposeful escape: when the same observation node is triggered more than 3 times, the escape is defined as weakly purposeful; otherwise it is strongly purposeful.

The interactive operation with other objects is divided into using and not using the fire extinguisher, and the selected escape modes include jumping out of a window, extinguishing the fire with a fire extinguisher, taking the elevator, escaping through the safety exit, escaping through a common channel, and the like; the escape mode is determined by the observation nodes placed at the escape exits and by the interactable game objects.

Further, escape through the safety exit and escape through a common channel are distinguished by the observation nodes: when the player touches the observation node closer to the safety exit, the escape is counted as a safety-exit escape, and when the player touches the node closer to the common channel, it is counted as a common-channel escape. When the white particles emitted by the fire extinguisher touch the flames, the escape is counted as extinguishing the fire, and the result is written to the local csv file; when the player touches an elevator button, the elevator escape begins and is written to the local csv file; when the player opens the window and presses the B key, the jump-out-of-the-window escape begins and is written to the local csv file.

After all features are parsed from the csv file, they are numbered 0 to 4 in order as feature values. The training-data script takes the processed feature values as the training input and the user's locally stored scores on the five Big Five dimensions as the ground-truth values.
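The ordinal encoding can be sketched as below. The ordering of categories inside each attribute is an assumption for illustration; the source only states that parsed categories are numbered 0 to 4 in order.

```python
# Hypothetical ordinal encoding of the four attributes (orderings assumed).
CATEGORIES = {
    "escape_time": ["slow", "medium", "fast"],                 # 3 categories -> 0..2
    "walking_path": ["strong_purpose", "weak_purpose"],        # 2 categories -> 0..1
    "extinguisher": ["not_used", "used"],                      # 2 categories -> 0..1
    "escape_mode": ["jump_window", "extinguish_fire", "elevator",
                    "safety_exit", "common_channel"],          # 5 categories -> 0..4
}

def encode(row):
    """Map one user's parsed attributes to numeric feature values in [0, 4]."""
    return [CATEGORIES[attr].index(row[attr]) for attr in CATEGORIES]
```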

Specifically, the trained model consists of five decision tree models; the feature values are input into each of the five models to fit the specific score on each Big Five dimension, the scores ranging from 1 to 10.

The training data covers four attributes: escape time, walking path, interactive operation with other objects and selected escape mode, which are divided into 3, 2, 2 and 5 categories respectively. The data obtained from the game scene is filled in by attribute and category and used as the input of the five decision tree models, and the subject's scores (1-10) on the five Big Five dimensions are used as the label of each decision tree respectively.
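Under these assumptions, the training step can be sketched with scikit-learn (the document does not name a library, and all data below is fabricated for illustration):

```python
# Sketch: one regression tree per Big Five dimension, fitted on encoded features.
from sklearn.tree import DecisionTreeRegressor

DIMENSIONS = ["openness", "conscientiousness", "extraversion",
              "agreeableness", "neuroticism"]

def train_models(X, y_by_subject):
    """Fit five decision trees; tree i is labeled with dimension i's scores."""
    models = {}
    for i, dim in enumerate(DIMENSIONS):
        tree = DecisionTreeRegressor(max_depth=4, random_state=0)
        tree.fit(X, [scores[i] for scores in y_by_subject])
        models[dim] = tree
    return models

# Fabricated rows: [escape_time, walking_path, extinguisher, escape_mode] codes.
X = [[2, 0, 1, 1], [0, 1, 0, 4], [1, 0, 0, 3], [2, 1, 1, 0]]
# Fabricated labels: each subject's questionnaire scores (1-10) on the 5 dimensions.
y = [[7, 8, 6, 5, 3], [3, 2, 4, 6, 8], [5, 6, 5, 5, 5], [8, 4, 9, 4, 6]]

models = train_models(X, y)
# Predict a portrait for one encoded behavior vector.
portrait = {dim: float(m.predict([[1, 0, 0, 3]])[0]) for dim, m in models.items()}
```

Each predicted portrait is then a mapping from dimension name to a score in the 1-10 range, which is what the display module renders.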

The game terminal comprises a game-scene display module responsible for displaying the game scene. The scene is built with Unity 2019.3.15f1; the emergency evacuation scene is designed as a three-story, square teaching building, and the emergency is a fire. The teaching building contains fire extinguishers, a safe escape passage, a common channel and an elevator. The application runs on VR terminal equipment, the selected device being an HTC VIVE COSMOS; the user performs interactive operations in the game by wearing the head-mounted display and holding the handles.

Example 2

Referring to fig. 2, this embodiment differs from embodiment 1 in that a prediction module is added to the game function system and a user-portrait display module is added to the game terminal; the rest is substantially the same as embodiment 1 and is not described again. Portrait generation is implemented on top of the five decision tree models trained in embodiment 1; the generated portrait is expressed as the prediction scores, from 1 to 10, produced by the five decision tree models. The user-portrait display module finally displays the user's Big Five scores: when the game ends, the scores on the five dimensions are obtained from the prediction module of the game function system and displayed on the user's VR equipment.

The above examples are intended only to further illustrate the present invention and not to limit it; all equivalent implementations of the invention shall fall within the scope of its claims.
