Interaction method, system, terminal and VR (virtual reality) equipment based on VR deceased-person simulation

Document No.: 192796  Publication date: 2021-11-02  Views: 42  Language: Chinese

Reading note: This technique, "Interaction method, system, terminal and VR (virtual reality) equipment based on VR deceased-person simulation", was designed and created by 吴浩诚 on 2021-07-21. Its main content: with the interaction method, system, terminal and VR device based on VR deceased-person simulation, a user can not only customize a virtual life scene of the deceased according to personal circumstances, but also directly adjust the deceased character model by voice according to the virtual image presented on the VR device, so as to obtain a more realistic likeness of the deceased; and the deceased character model can be automatically controlled to perform interactive actions that meet the user's psychological needs according to changes in the user's expression. The scheme makes the virtual picture more realistic, meets the needs of different users, enhances interactivity and immersion and, taking the psychological needs of each bereaved relative as a basis, lets the user meet the lost relative in a familiar scene, presenting the most healing VR virtual reality imagery, thereby reducing psychological disorders or illnesses caused by bereavement and helping the user recover from psychological trauma as early as possible.

1. An interaction method based on VR deceased-person simulation, applied to a VR device worn by a user, the method comprising the following steps:

constructing a virtual customized life scene of the deceased corresponding to the deceased's real life scene, based on a panoramic image of the deceased's real life scene;

constructing an initial character model of the deceased corresponding to a pre-death character image of the deceased, based on that character image, and generating a virtual picture corresponding to the initial character model of the deceased in the virtual customized life scene of the deceased;

updating the initial character model of the deceased in the virtual picture one or more times according to collected voice information from the user, to obtain a virtual picture corresponding to a final character model of the deceased in the virtual customized life scene of the deceased;

and controlling the final character model of the deceased in the virtual picture to perform one or more interactive actions that meet the user's psychological needs, according to user expression images collected in real time.

2. The interaction method based on VR deceased-person simulation of claim 1, wherein constructing the virtual customized life scene of the deceased corresponding to the deceased's real life scene based on the panoramic image of the deceased's real life scene comprises:

extracting life scene feature information from the panoramic image of the deceased's real life scene;

inputting the life scene feature information into a three-dimensional scene construction model to obtain the virtual customized life scene of the deceased corresponding to the deceased's real life scene;

wherein the deceased's real life scene includes one or more of: a home environment scene of the deceased, a work environment scene of the deceased, and a scene of the deceased interacting with relatives.

3. The interaction method based on VR deceased-person simulation of claim 1, wherein constructing the initial character model of the deceased corresponding to the pre-death character image of the deceased, and generating the virtual picture corresponding to the initial character model of the deceased in the virtual customized life scene of the deceased, comprises:

extracting one or more pieces of character feature information from the pre-death character image of the deceased;

inputting each piece of character feature information into a virtual character construction model to obtain the initial character model of the deceased; wherein the initial character model of the deceased includes one or more pieces of initial character feature information;

implanting the initial character model of the deceased into the virtual customized life scene of the deceased, and generating the virtual picture corresponding to the initial character model of the deceased in the virtual customized life scene of the deceased, for the user to watch on a VR device.

4. The interaction method based on VR deceased-person simulation of claim 1 or 3, wherein updating the initial character model of the deceased in the virtual picture one or more times according to the collected voice information from the user, to obtain the virtual picture corresponding to the final character model of the deceased in the virtual customized life scene of the deceased, comprises:

performing voice recognition on each of one or more pieces of voice information collected from the user, to obtain voice recognition information corresponding to each piece of voice information;

obtaining, based on each piece of voice recognition information, character model update information corresponding to that piece of voice recognition information;

updating the initial character model of the deceased in the virtual picture one or more times based on the character model update information;

wherein the initial character model of the deceased includes one or more pieces of initial character feature information;

generating the virtual picture corresponding to the final character model of the deceased in the virtual customized life scene of the deceased, based on the initial character model of the deceased after it has been updated one or more times; wherein the final character model of the deceased includes one or more pieces of final character feature information.

5. The interaction method based on VR deceased-person simulation of claim 4, wherein updating the initial character model of the deceased in the virtual picture one or more times based on the character model update information comprises:

updating each piece of character feature information of the initial character model of the deceased in the virtual picture one or more times, based on the character feature update information in the character model update information that corresponds to that piece of character feature information.

6. The interaction method based on VR deceased-person simulation of claim 1, wherein controlling the final character model of the deceased in the virtual picture to perform one or more interactive actions that meet the user's psychological needs, according to the user expression images collected in real time, comprises:

recognizing each of one or more user expression images collected in real time, to obtain user expression recognition information corresponding to each user expression image;

obtaining, based on the user expression recognition information, action control information corresponding to each user expression image;

controlling, based on the action control information, the final character model of the deceased in the virtual picture to perform one or more interactive actions corresponding to the action control information, the interactive actions meeting the user's psychological needs.

7. The interaction method based on VR deceased-person simulation of claim 1, further comprising:

acquiring background music corresponding to the one or more interactive actions performed by the final character model of the deceased, and playing the background music.

8. An interactive system based on VR deceased-person simulation, applied to a VR device worn by a user, the system comprising:

a virtual scene construction module, configured to construct a virtual customized life scene of the deceased corresponding to the deceased's real life scene, based on a panoramic image of the deceased's real life scene;

an initial character model construction module, connected with the virtual scene construction module and configured to construct an initial character model of the deceased corresponding to a pre-death character image of the deceased, based on that character image, and to generate a virtual picture corresponding to the initial character model of the deceased in the virtual customized life scene of the deceased;

a voice character-model update module, connected with the initial character model construction module and configured to update the initial character model of the deceased in the virtual picture one or more times according to collected voice information from the user, to obtain a virtual picture corresponding to a final character model of the deceased in the virtual customized life scene of the deceased;

and an expression-controlled interaction module, connected with the voice character-model update module and configured to control the final character model of the deceased in the virtual picture to perform one or more interactive actions that meet the user's psychological needs, according to user expression images collected in real time.

9. An interactive terminal based on VR deceased-person simulation, applied to a VR device worn by a user, comprising:

a memory for storing a computer program;

a processor for performing the interaction method based on VR deceased-person simulation of any one of claims 1 to 7.

10. A VR device, comprising:

the interactive terminal based on VR deceased-person simulation of claim 9;

a life scene image receiving device, a character image receiving device, a user voice acquisition device and a user expression image acquisition device, each connected with the terminal;

the life scene image receiving device is used for receiving a panoramic image of the deceased's real life scene;

the character image receiving device is used for receiving a pre-death character image of the deceased;

the user voice acquisition device is used for acquiring voice information from a user;

the user expression image acquisition device is used for acquiring user expression images in real time.

Technical Field

The invention belongs to the field of artificial intelligence, and particularly relates to an interaction method, system, terminal and VR device based on VR deceased-person simulation.

Background

Surveys show that the death of a single relative has a negative psychological impact on dozens of surrounding relatives and friends. Taking a middle-aged person as an example, his or her death may affect at least dozens or even hundreds of people: the parents, children, siblings, relatives and friends, colleagues and neighbors of two families, which together constitute a huge special group that cannot be ignored. Studies show that, in general, about one quarter of family members develop post-traumatic stress disorder (PTSD) after losing a relative.

With the progress of internet technology, human communication is gradually moving toward the virtual reality era. Virtual Reality (VR) technology can provide a realistic three-dimensional environment that completely immerses the user. At present, the VR applications we experience are found in industries such as gaming, real estate, film, education and medicine, but there is no VR device that can be customized for each user according to the psychological needs of each bereaved relative and that presents virtual reality images of a simulated deceased person capable of healing the user.

Disclosure of Invention

In view of the above disadvantages of the prior art, an object of the present invention is to provide an interaction method, system, terminal and VR device based on VR deceased-person simulation, which solve the problem in the prior art that no VR device exists that can be customized for each user, based on the psychological needs of each bereaved relative, to present virtual reality images of a simulated deceased person capable of healing the user.

To achieve the above and other related objects, the present invention provides an interaction method based on VR deceased-person simulation, applied to a VR device worn by a user, the method including: constructing a virtual customized life scene of the deceased corresponding to the deceased's real life scene, based on a panoramic image of the deceased's real life scene; constructing an initial character model of the deceased corresponding to a pre-death character image of the deceased, based on that character image, and generating a virtual picture corresponding to the initial character model of the deceased in the virtual customized life scene of the deceased; updating the initial character model of the deceased in the virtual picture one or more times according to collected voice information from the user, to obtain a virtual picture corresponding to a final character model of the deceased in the virtual customized life scene of the deceased; and controlling the final character model of the deceased in the virtual picture to perform one or more interactive actions that meet the user's psychological needs, according to user expression images collected in real time.
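The four steps of the method can be sketched as a minimal pipeline. This is an illustrative assumption, not the patent's implementation: all class, method and data names here are hypothetical, and the trained models the text describes are replaced by trivial stand-ins.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four-step pipeline described above.
# The patent specifies no concrete interfaces; every name and data
# shape below is an assumption for illustration only.

@dataclass
class DeceasedSimulationPipeline:
    scene: str = ""
    model: dict = field(default_factory=dict)

    def build_scene(self, panorama: str) -> None:
        # Step S11: panoramic image -> virtual customized life scene.
        self.scene = f"virtual scene from {panorama}"

    def build_initial_model(self, portrait_features: dict) -> None:
        # Step S12: pre-death character image -> initial character model.
        self.model = dict(portrait_features)

    def update_by_voice(self, commands: list) -> None:
        # Step S13: each recognized voice command refines one feature.
        for feature, value in commands:
            self.model[feature] = value

    def interact_by_expression(self, expression: str) -> str:
        # Step S14: map a recognized user expression to an action.
        actions = {"sad": "comfort", "happy": "smile"}
        return actions.get(expression, "idle")
```

A usage pass would call the four methods in order: build the scene, build the initial model, refine it by voice until the user is satisfied, then drive interactions from expressions.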

In an embodiment of the invention, constructing the virtual customized life scene of the deceased corresponding to the deceased's real life scene based on the panoramic image of the deceased's real life scene includes: extracting life scene feature information from the panoramic image of the deceased's real life scene; inputting the life scene feature information into a three-dimensional scene construction model to obtain the virtual customized life scene of the deceased corresponding to the deceased's real life scene; wherein the deceased's real life scene includes one or more of: a home environment scene of the deceased, a work environment scene of the deceased, and a scene of the deceased interacting with relatives.

In an embodiment of the invention, constructing the initial character model of the deceased corresponding to the pre-death character image of the deceased, and generating the virtual picture corresponding to the initial character model of the deceased in the virtual customized life scene of the deceased, comprises: extracting one or more pieces of character feature information from the pre-death character image of the deceased; inputting each piece of character feature information into a virtual character construction model to obtain the initial character model of the deceased, wherein the initial character model of the deceased includes one or more pieces of initial character feature information; and implanting the initial character model of the deceased into the virtual customized life scene of the deceased, and generating the virtual picture corresponding to the initial character model of the deceased in the virtual customized life scene of the deceased, for the user to watch on a VR device.

In an embodiment of the invention, updating the initial character model of the deceased in the virtual picture one or more times according to the collected voice information from the user, to obtain the virtual picture corresponding to the final character model of the deceased in the virtual customized life scene of the deceased, comprises: performing voice recognition on each of one or more pieces of voice information collected from the user, to obtain voice recognition information corresponding to each piece of voice information; obtaining, based on each piece of voice recognition information, character model update information corresponding to that piece of voice recognition information; updating the initial character model of the deceased in the virtual picture one or more times based on the character model update information, wherein the initial character model of the deceased includes one or more pieces of initial character feature information; and generating the virtual picture corresponding to the final character model of the deceased in the virtual customized life scene of the deceased, based on the initial character model of the deceased after it has been updated one or more times, wherein the final character model of the deceased includes one or more pieces of final character feature information.

In an embodiment of the present invention, updating the initial character model of the deceased in the virtual picture one or more times based on the character model update information includes: updating each piece of character feature information of the initial character model of the deceased in the virtual picture one or more times, based on the character feature update information in the character model update information that corresponds to that piece of character feature information.

In an embodiment of the present invention, controlling the final character model of the deceased in the virtual picture to perform one or more interactive actions that meet the user's psychological needs, according to the user expression images collected in real time, includes: recognizing each of one or more user expression images collected in real time, to obtain user expression recognition information corresponding to each user expression image; obtaining, based on the user expression recognition information, action control information corresponding to each user expression image; and controlling, based on the action control information, the final character model of the deceased in the virtual picture to perform one or more interactive actions corresponding to the action control information, the interactive actions meeting the user's psychological needs.
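The expression-to-action step can be sketched as a lookup from recognized expressions to lists of interactive actions. The mapping table and all names here are hypothetical; the patent only requires that some action control information be derived from each recognized expression.

```python
# Hypothetical mapping from a recognized user expression to the
# interactive actions of the character model; the concrete entries
# are illustrative assumptions, not taken from the patent.
EXPRESSION_ACTIONS = {
    "sad": ["approach", "hug"],
    "crying": ["comfort", "speak_softly"],
    "happy": ["smile"],
}

def actions_for_expressions(expressions: list) -> list:
    """Turn a stream of recognized expressions into action controls."""
    controls = []
    for expression in expressions:
        # Unrecognized expressions fall back to a neutral idle action.
        controls.extend(EXPRESSION_ACTIONS.get(expression, ["idle"]))
    return controls
```

For example, a real-time loop would feed each newly recognized expression into `actions_for_expressions` and forward the resulting action list to the character animation controller.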

In an embodiment of the present invention, the method further includes: acquiring background music corresponding to the one or more interactive actions performed by the final character model of the deceased, and playing the background music.

To achieve the above and other related objects, the present invention provides an interactive system based on VR deceased-person simulation, applied to a VR device worn by a user, the system including: a virtual scene construction module, configured to construct a virtual customized life scene of the deceased corresponding to the deceased's real life scene, based on a panoramic image of the deceased's real life scene; an initial character model construction module, connected with the virtual scene construction module and configured to construct an initial character model of the deceased corresponding to a pre-death character image of the deceased, and to generate a virtual picture corresponding to the initial character model of the deceased in the virtual customized life scene of the deceased; a voice character-model update module, connected with the initial character model construction module and configured to update the initial character model of the deceased in the virtual picture one or more times according to collected voice information from the user, to obtain a virtual picture corresponding to a final character model of the deceased in the virtual customized life scene of the deceased; and an expression-controlled interaction module, connected with the voice character-model update module and configured to control the final character model of the deceased in the virtual picture to perform one or more interactive actions that meet the user's psychological needs, according to user expression images collected in real time.

To achieve the above and other related objects, the present invention provides an interactive terminal based on VR deceased-person simulation, comprising: a memory for storing a computer program; and a processor for performing the interaction method based on VR deceased-person simulation.

To achieve the above and other related objects, the present invention provides a VR device comprising: the interactive terminal based on VR deceased-person simulation; and a life scene image receiving device, a character image receiving device, a user voice acquisition device and a user expression image acquisition device, each connected with the terminal; wherein the life scene image receiving device is used for receiving a panoramic image of the deceased's real life scene; the character image receiving device is used for receiving a pre-death character image of the deceased; the user voice acquisition device is used for collecting voice information from the user; and the user expression image acquisition device is used for collecting user expression images in real time.

As described above, the interaction method, system, terminal and VR device based on VR deceased-person simulation of the present invention have the following beneficial effects: the method first constructs the virtual customized life scene of the deceased and the initial character model of the deceased from the panoramic image of the deceased's real life scene and the pre-death character image of the deceased; the user can directly adjust the initial character model of the deceased in the virtual image through voice; and the character model is then controlled, according to the user's expression images, to perform one or more interactive actions that meet the user's psychological needs. The method can customize a virtual life scene of the deceased exclusive to the user according to personal circumstances, can directly adjust the deceased character model by voice according to the virtual image presented on the VR device so as to obtain a more realistic likeness of the deceased, and can automatically control the deceased character model to perform interactive actions that meet the user's psychological needs according to changes in the user's expression. The scheme makes the virtual picture more realistic, meets the needs of different users, enhances interactivity and immersion and, taking the psychological needs of each bereaved relative as a basis, lets the user meet the lost relative in a familiar scene, presenting the most healing VR virtual reality imagery, thereby reducing psychological disorders or illnesses caused by bereavement and helping the user recover from psychological trauma as early as possible.

Drawings

Fig. 1 is a schematic flow chart of an interaction method based on VR deceased-person simulation according to an embodiment of the present invention.

Fig. 2 is a schematic structural diagram of an interaction system based on VR deceased-person simulation according to an embodiment of the present invention.

Fig. 3 is a schematic structural diagram of an interaction terminal based on VR deceased-person simulation according to an embodiment of the present invention.

Fig. 4 is a schematic structural diagram of a VR device in an embodiment of the invention.

Detailed Description

The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.

It is noted that in the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the present invention. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present invention. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present invention is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Spatially relative terms, such as "upper," "lower," "left," "right," "below," "over," and the like, may be used herein to facilitate describing one element or feature's relationship to another element or feature as illustrated in the figures.

Throughout the specification, when a part is referred to as being "connected" to another part, this includes not only a case of being "directly connected" but also a case of being "indirectly connected" with another element interposed therebetween. In addition, when a certain part is referred to as "including" a certain component, unless otherwise stated, other components are not excluded, but it means that other components may be included.

The terms first, second, third, etc. are used herein to describe various elements, components, regions, layers and/or sections, but are not limited thereto. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the scope of the present invention.

Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," and/or "including," when used in this specification, specify the presence of stated features, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition will occur only when a combination of elements, functions or operations is inherently mutually exclusive in some way.

The embodiment of the invention provides an interaction method based on VR deceased-person simulation. The method can customize a virtual life scene of the deceased exclusive to the user according to personal circumstances, can directly adjust the deceased character model by voice according to the virtual image presented on the VR device so as to obtain a more realistic likeness of the deceased, and can automatically control the deceased character model to perform interactive actions that meet the user's psychological needs according to changes in the user's expression. The scheme not only makes the virtual picture more realistic, but also meets the needs of different users, enhances interactivity and immersion and, on the basis of the psychological needs of each bereaved relative, presents the most healing VR virtual reality imagery of meeting the lost relative in a familiar scene, thereby reducing psychological disorders or illnesses caused by bereavement, helping the user recover from psychological trauma as early as possible, and solving the problems of the prior art.

Embodiments of the present invention will be described in detail below with reference to the accompanying drawings so that those skilled in the art can easily implement the embodiments of the present invention. The present invention may be embodied in many different forms and is not limited to the embodiments described herein.

Fig. 1 shows a schematic flow chart of the interaction method based on VR deceased-person simulation in an embodiment of the present invention.

The method is applied to a VR device worn by a user and comprises:

step S11: and constructing an departed virtual customized life scene corresponding to the departed real life scene based on the panoramic image of the departed real life scene.

Optionally, constructing the virtual customized life scene of the deceased corresponding to the deceased's real life scene based on the panoramic image of the deceased's real life scene includes:

extracting life scene feature information from the panoramic image of the deceased's real life scene. Specifically, the life scene feature information is extracted based on a feature extraction model; the feature extraction model is obtained by training on a set of panoramic images and their corresponding life scene feature information. The life scene feature information may include indoor or outdoor scene-arrangement features, environmental features, weather features and the like, which are not limited here. It should be noted that the panoramic image may be a picture or a video.

Inputting the life scene feature information into a three-dimensional scene construction model to obtain the virtual customized life scene of the deceased corresponding to the deceased's real life scene. The three-dimensional scene construction model is obtained by training on a set of virtual life scenes and their corresponding life scene feature information. It should be noted that the three-dimensional scene construction model may instead be virtual-reality scene-making software: the life scene feature information is input into the software to obtain the virtual customized life scene of the deceased.
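The extract-then-construct flow above can be sketched as two stages, with the trained models replaced by trivial stand-ins. All function names and the feature keys (`arrangement`, `environment`, `weather`, taken loosely from the feature categories the text mentions) are illustrative assumptions.

```python
def extract_scene_features(panorama: dict) -> dict:
    # Stand-in for the trained feature-extraction model: pick out the
    # scene-arrangement, environment and weather attributes named in
    # the text, ignoring anything else in the panoramic input.
    keys = ("arrangement", "environment", "weather")
    return {k: panorama[k] for k in keys if k in panorama}

def build_virtual_scene(features: dict) -> str:
    # Stand-in for the three-dimensional scene construction model (or
    # the scene-making software the text mentions): render the features
    # into a deterministic scene description.
    parts = ", ".join(f"{k}={v}" for k, v in sorted(features.items()))
    return f"VirtualScene({parts})"
```

In a real system both functions would wrap learned models; the point of the sketch is only the data flow from panoramic image to feature set to constructed scene.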

The deceased's real life scene includes one or more of: a home environment scene of the deceased, a work environment scene of the deceased, and a scene of the deceased interacting with relatives. The interaction scene may be an indoor or outdoor environment; what matters is that the user requesting the customization selects the scene in which he or she misses the deceased relative most. For example, if a daughter wishes to see her late father at the school gate, the school gate is selected as the scene, so that the user's psychological needs are met.

Optionally, the panoramic image of the deceased's real life scene may be a 3D panoramic environment image generated by scanning the deceased's living space with a 3D scanner.

Step S12: constructing an initial character model of the deceased corresponding to a pre-death character image of the deceased, based on that character image, and generating a virtual picture corresponding to the initial character model of the deceased in the virtual customized life scene of the deceased.

Optionally, constructing the initial character model of the deceased corresponding to the pre-death character image of the deceased and generating the virtual picture corresponding to the initial character model of the deceased in the virtual customized life scene of the deceased includes:

extracting one or more pieces of character feature information from the pre-death character image of the deceased. Specifically, the character feature information is extracted based on a character feature extraction model, which is obtained by training on a set of character images and their corresponding character feature information. The character feature information includes: facial features, comprising the features of the facial organs, skin color and, for better realism, detail features such as moles; stature features, comprising height, weight, body proportions and external body parts (shoulder width, arms, legs, etc.); wearing features, comprising clothing (style and color), trousers, shoes, hats, glasses and jewelry; expression features, which may be various detailed expressions such as joy, anger and sadness; and sound feature information, such as frequency, pitch and timbre. The character image may be a picture or a video.

Inputting each piece of character feature information into a virtual character construction model to obtain the deceased initial character model, where the deceased initial character model includes one or more pieces of initial character feature information. Specifically, each piece of character feature information is input into the virtual character construction model to obtain a deceased initial character model whose initial character feature information corresponds to that character feature information; the virtual character construction model is trained on a set of character models and the character feature information corresponding to each model.

Implanting the deceased initial character model into the deceased virtual customized living scene, and generating the virtual picture corresponding to the deceased initial character model in that scene for the user to view on the VR device.

Optionally, based on an image synthesis technique, the deceased initial character model is composited into the deceased virtual customized living scene to obtain the virtual picture, which is then displayed within the field of view of the VR device worn by the user.

The image synthesis technique may be, but is not limited to, semantic-control image synthesis based on deep learning, depth-map coding based on viewpoint synthesis, the Direct Send parallel image synthesis method with minimal communication overhead, and the like.
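As an illustrative, non-limiting sketch only, the Step S12 pipeline described above (extract features, build the initial model, composite into the scene) could be organized as follows in Python. All names, data formats, and the stub logic here are assumptions for illustration; the patent does not disclose a concrete implementation, and a real system would run trained models at each step:

```python
from dataclasses import dataclass, field

# Hypothetical feature categories, taken from the description above.
FEATURE_KEYS = ("facial", "stature", "wear", "expression", "sound")

@dataclass
class CharacterModel:
    """Deceased initial character model: one entry per feature category."""
    features: dict = field(default_factory=dict)

def extract_character_features(image):
    """Stand-in for the trained character feature extraction model: maps a
    character image to per-category feature information (a real system
    would run a trained vision model here)."""
    return {k: image[k] for k in FEATURE_KEYS if k in image}

def build_initial_model(feature_info):
    """Stand-in for the virtual character construction model: each piece of
    extracted feature information becomes a piece of initial character
    feature information of the model."""
    return CharacterModel(features=dict(feature_info))

def composite_into_scene(scene, model):
    """Image synthesis step: implant the model into the customized living
    scene and return the 'virtual picture' shown in the user's view."""
    return {"scene": scene, "model": model.features}

# Example: a character image pre-annotated with two feature categories.
image = {"facial": "round face, mole on left cheek",
         "stature": {"height_cm": 172}}
picture = composite_into_scene(
    "living-room panorama",
    build_initial_model(extract_character_features(image)))
```

In this sketch the "virtual picture" is just a dictionary pairing the scene with the model's features; in the disclosed method it would be a rendered VR image.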

Step S13: updating the deceased initial character model in the virtual picture one or more times according to voice information collected from the user, to obtain a virtual picture corresponding to the deceased final character model in the deceased virtual customized living scene.

In detail, since the user is the person most familiar with the deceased, the user can view the deceased initial character model in the virtual picture on the VR device and refine it in real time through voice. Specifically, the deceased initial character model in the virtual picture is updated one or more times according to one or more pieces of voice information collected from the user in real time; when the user is satisfied with the character model shown in the virtual picture and stops issuing voice commands, the virtual picture corresponding to the deceased final character model in the deceased virtual customized living scene is obtained.

Optionally, the updating the deceased initial character model in the virtual picture one or more times according to the collected voice information from the user, to obtain the virtual picture corresponding to the deceased final character model in the deceased virtual customized living scene, includes:

performing voice recognition on each of one or more pieces of voice information collected from the user to obtain voice recognition information corresponding to each piece; obtaining, based on each piece of voice recognition information, the character model update information corresponding to it; and updating the deceased initial character model in the virtual picture one or more times based on the character model update information, where the deceased initial character model includes one or more pieces of initial character feature information.

Specifically, voice recognition is performed on a piece of collected voice information from the user to obtain the corresponding voice recognition information: the voice information is input into a voice recognition model, which is trained on a set of voice information samples and their corresponding voice recognition information. For example, if the voice message is "I want to make dad's height shorter by two centimeters," the voice recognition information is "height shorter by two centimeters." The voice recognition model specifically includes, but is not limited to, the BERT model, NLP models, the Transformer model, and the like, which are well known to those skilled in the art and are not described here.

Obtaining, based on the voice recognition information, the character model update information corresponding to it. Specifically, the character model update information corresponding to the voice recognition information is looked up in a voice information base, where the voice information base includes each piece of voice recognition information and the character model update information corresponding to it.

Updating the initial character feature information of the deceased initial character model in the virtual picture based on the character model update information completes the first update. The updated character model then serves as the initial character model for the next update, and the updated feature information serves as the initial character feature information for the next update; each subsequent update proceeds in the same way as the previous one, until the user stops issuing voice commands.

Taking the deceased initial character model in the virtual picture after the one or more updates as the deceased final character model, and generating the virtual picture corresponding to the deceased final character model in the deceased virtual customized living scene. The deceased final character model includes one or more pieces of final character feature information, namely the initial character feature information of the deceased initial character model after the one or more updates.

Optionally, the updating the deceased initial character model in the virtual picture based on the character model update information includes: updating each piece of initial character feature information of the deceased initial character model in the virtual picture based on the character feature update information corresponding to it in the character model update information. For example, if the voice recognition information is "height shorter by two centimeters," the character model update information includes character feature update information ("shorter by two centimeters") corresponding to the initial character feature information (the height feature); the height in the height feature information of the deceased initial character model is reduced by two centimeters, and the reduced height feature information is the updated height feature information.
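The Step S13 loop (recognize an utterance, look up the corresponding update, apply it, repeat until the user stops speaking) can be sketched as follows. This is a minimal illustration under stated assumptions: the phrase format, the `SIGN`/`FEATURE` tables standing in for the voice information base, and the dictionary model are all hypothetical, and a real system would use a trained speech recognition model rather than a regular expression:

```python
import re

def recognize(voice_text):
    """Stand-in speech recognition step: reduces an utterance such as
    "I want to make dad's height shorter by 2 centimeters" to the key
    phrase the voice information base is indexed on (assumed format)."""
    m = re.search(r"(height|weight)\s+(shorter|taller|lighter|heavier)\s+by\s+(\d+)",
                  voice_text)
    return m.groups() if m else None  # e.g. ('height', 'shorter', '2')

# Hypothetical voice information base: phrase parts -> model update rule.
SIGN = {"shorter": -1, "lighter": -1, "taller": +1, "heavier": +1}
FEATURE = {"height": "height_cm", "weight": "weight_kg"}

def apply_update(model, recognized):
    """One update round: adjust the matching initial character feature
    information; the result becomes the initial model of the next round."""
    attr, direction, amount = recognized
    model[FEATURE[attr]] += SIGN[direction] * int(amount)
    return model

model = {"height_cm": 172, "weight_kg": 70}   # deceased initial character model
utterances = ["I want to make dad's height shorter by 2 centimeters",
              "make his weight lighter by 3 kilograms"]
for utterance in utterances:     # the loop ends when the user stops speaking
    recognized = recognize(utterance)
    if recognized:
        model = apply_update(model, recognized)
# model is now the deceased final character model
```

After the two utterances, the height feature is 170 cm and the weight feature 67 kg, mirroring the "shorter by two centimeters" worked example in the text.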

Step S14: controlling the deceased final character model in the virtual picture to perform one or more interactive actions that meet the user's psychological needs, according to user expression images collected in real time.

In detail, when the user sees the virtual picture in the VR device, some psychological fluctuation is inevitable, which causes expression changes. Therefore, expression images of the user are collected in real time, and the deceased final character model in the virtual picture is controlled to perform one or more interactive actions according to the user's expression changes, thereby meeting the user's psychological needs and greatly increasing the sense of reality.

Optionally, the controlling the deceased final character model in the virtual picture to perform one or more interactive actions that meet the user's psychological needs, according to the user expression images collected in real time, includes:

identifying each of one or more user expression images collected in real time to obtain user expression recognition information corresponding to each image. Specifically, a collected user expression image is input into an expression recognition model to obtain the corresponding user expression recognition information; the expression recognition model is trained on a set of expression images and their corresponding user expression recognition information. It should be noted that a user expression image may be a picture or a video.

Acquiring, based on the user expression recognition information, the action control information corresponding to each user expression image. Specifically, the action control information corresponding to the user expression recognition information is looked up in an expression information base, where the expression information base includes each piece of user expression recognition information and the action control information corresponding to it, and each piece of action control information corresponds to one or more interactive actions. The user expression recognition information may include one or more of smiling, crying, sadness, excitement, anger, and other recognition information; the interactive actions corresponding to the action control information may include, but are not limited to, speaking, singing, making a heart gesture, hugging, kissing, touching the head, patting the shoulder, walking closer, dancing, running, waving, and the like. For example, the interactive action corresponding to the action control information for the crying recognition information is hugging, so that the user can be consoled.

Based on the action control information, the deceased final character model in the virtual picture is controlled to perform the one or more interactive actions corresponding to that action control information, which meet the user's psychological needs. Because the user's expression reflects the user's emotion, having the deceased final character model perform the action preset for that expression greatly satisfies the user's psychological needs. It should be noted that the association between each piece of user expression recognition information in the expression information base and the action control information for each interactive action may be established according to the habits of the deceased, so that the simulation is more realistic.
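The expression information base described above is essentially a mapping from recognized expressions to interactive actions, optionally tailored to the deceased's habits. A minimal sketch, in which the table contents and the `habit_overrides` parameter are illustrative assumptions rather than part of the disclosure:

```python
# Hypothetical expression information base: user expression recognition
# information -> action control information (a list of interactive actions).
EXPRESSION_ACTIONS = {
    "crying":  ["hug"],                  # console the user, as in the example
    "smiling": ["wave", "walk closer"],
    "sad":     ["touch head", "speak"],
    "excited": ["dance"],
}

def actions_for(expression, habit_overrides=None):
    """Return the interactive actions the deceased final character model
    should perform for a recognized user expression; habit_overrides lets
    the association be tailored to the deceased's own habits."""
    table = dict(EXPRESSION_ACTIONS)
    if habit_overrides:
        table.update(habit_overrides)
    return table.get(expression, [])
```

For instance, `actions_for("crying")` yields `["hug"]`, while passing `habit_overrides={"crying": ["pat shoulder", "speak"]}` substitutes the deceased's habitual way of comforting.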

Optionally, the method further includes: acquiring background music corresponding to the one or more interactive actions performed by the deceased final character model, and playing it. Specifically, the background music corresponding to the interactive actions performed by the deceased final character model is obtained from a music library and played, where the music library includes the background music corresponding to each interactive action, so as to increase immersion.
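The music library lookup can be sketched in the same spirit; the track names and the rule that actions without an entry simply play no music are assumptions for illustration:

```python
# Hypothetical music library: interactive action -> background music track.
MUSIC_LIBRARY = {
    "hug":   "gentle_piano.ogg",
    "dance": "familiar_waltz.ogg",
}

def background_music_for(actions):
    """Collect the background tracks to play while the deceased final
    character model performs the given interactive actions; actions with
    no entry in the library contribute no music."""
    return [MUSIC_LIBRARY[a] for a in actions if a in MUSIC_LIBRARY]
```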

Based on the same principle as the above embodiment, the invention further provides an interactive system based on VR deceased simulation.

Specific embodiments are provided below in conjunction with the accompanying figures:

fig. 2 shows a schematic structural diagram of an interaction system based on VR deceased simulation in an embodiment of the present invention.

Applied to a VR device worn by the user, the system includes:

the virtual scene construction module 21, configured to construct a deceased virtual customized living scene corresponding to the deceased real living scene based on a panoramic image of that scene;

the initial character model construction module 22, connected to the virtual scene construction module 21 and configured to construct a deceased initial character model corresponding to a character image of the deceased before death, and to generate a virtual picture corresponding to the deceased initial character model in the deceased virtual customized living scene;

the voice update character model module 23, connected to the initial character model construction module 22 and configured to update the deceased initial character model in the virtual picture one or more times according to collected voice information from the user, to obtain a virtual picture corresponding to the deceased final character model in the deceased virtual customized living scene;

and the expression control interaction module 24, connected to the voice update character model module 23 and configured to control the deceased final character model in the virtual picture to perform one or more interactive actions that meet the user's psychological needs, according to user expression images collected in real time.

It should be noted that the division of modules in the system embodiment of fig. 2 is merely a division of logical functions; in an actual implementation, all or some of the modules may be integrated into one physical entity or kept physically separate. These modules may all be implemented as software invoked by a processing element, entirely in hardware, or partly as software invoked by a processing element and partly in hardware.

For example, the modules may be one or more integrated circuits configured to implement the above method, such as one or more Application-Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field-Programmable Gate Arrays (FPGAs). As another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of invoking program code. As yet another example, these modules may be integrated together and implemented as a system-on-a-chip (SoC).

Since the implementation principle of the interaction system based on VR deceased simulation has been described in the foregoing embodiment, it is not repeated here.

Optionally, the system further includes a music playing module, connected to the expression control interaction module 24 and configured to acquire background music corresponding to the one or more interactive actions performed by the deceased final character model, and to play it. Specifically, the music playing module obtains the background music corresponding to the interactive actions from a music library and plays it, where the music library includes the background music corresponding to each interactive action, so as to increase immersion.

Fig. 3 shows a schematic structural diagram of an interaction terminal 30 based on VR deceased simulation.

The interaction terminal 30 based on VR deceased simulation includes a memory 31 and a processor 32; the memory 31 is configured to store a computer program, and the processor 32 runs the computer program to implement the interaction method based on VR deceased simulation described in fig. 1.

Optionally, there may be one or more memories 31 and one or more processors 32; fig. 3 takes one of each as an example.

Optionally, the processor 32 in the interaction terminal 30 based on VR deceased simulation loads one or more instructions corresponding to the application program into the memory 31 according to the steps shown in fig. 1, and executes the application program stored in the memory 31, thereby realizing the functions of the interaction method based on VR deceased simulation shown in fig. 1.

Optionally, the memory 31 may include, but is not limited to, high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.

Optionally, the processor 32 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.

The invention further provides a computer-readable storage medium storing a computer program that, when run, implements the interaction method based on VR deceased simulation shown in fig. 1. The computer-readable storage medium may include, but is not limited to, floppy disks, optical disks, CD-ROMs (compact disc read-only memories), magneto-optical disks, ROMs (read-only memories), RAMs (random access memories), EPROMs (erasable programmable read-only memories), EEPROMs (electrically erasable programmable read-only memories), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions. The computer-readable storage medium may be a product not yet installed in a computer device, or a component of a computer device already in use.

Fig. 4 shows a schematic structural diagram of a VR device in an embodiment of the present invention.

The VR device worn by the user may include components such as a VR head-mounted display and a control handle. Data interaction with the VR device may be carried out via a network, including but not limited to a wide area network or a local area network; the network standard includes, but is not limited to, LTE, CDMA, SCDMA, and the like.

The VR device includes:

the interaction terminal 41 based on VR deceased simulation described above; it should be noted that, since the implementation principle of the interaction terminal based on VR deceased simulation has been described in the foregoing embodiment, it is not repeated here.

A life scene image receiving device 42, a character image receiving device 43, a user voice collecting device 44 and a user expression image collecting device 45 which are respectively connected with the terminal;

the living scene image receiving device 42 may be any device or interface with an image receiving function, and is configured to receive a panoramic image of the deceased real living scene; the character image receiving device 43 may be any device or interface with an image receiving function, and is configured to receive a character image of the deceased before death; the user voice collection device 44 is any device with a voice collection function, such as a microphone with a recording function, and is configured to collect voice information from the user; and the user expression image collection device 45 is any device with an image collection function, such as a camera, and is configured to collect user expression images in real time.

It should be noted that the division of devices and terminals in the embodiment of fig. 4 is merely a division of logical functions; in an actual implementation, all or some of them may be integrated into one physical entity or kept physically separate. They may all be implemented as software invoked by a processing element, entirely in hardware, or partly as software invoked by a processing element and partly in hardware.

Optionally, the VR device further includes a music playing device, connected to the interaction terminal based on VR deceased simulation and configured to play the background music; the music playing device is any device with a music playing function.

In summary, according to the interaction method, system, terminal, and VR device based on VR deceased simulation of the present invention, a deceased virtual customized living scene and a deceased initial character model are first constructed from a panoramic image of the deceased real living scene and a character image of the deceased before death; the user can directly adjust the deceased initial character model in the virtual picture through voice, and the character model is then controlled, according to user expression images, to perform one or more interactive actions that meet the user's psychological needs. The method allows a user to customize a virtual living scene of the deceased according to personal circumstances, to directly adjust the deceased character model through voice based on the virtual picture presented on the VR device so as to obtain a more realistic image of the deceased, and to have the deceased character model automatically perform interactive actions that meet the user's psychological needs according to the user's expression changes. The scheme not only makes the virtual picture more realistic and meets the needs of different users, enhancing interactivity and immersion, but also, taking the psychological needs of each bereaved relative as its basis, lets the user meet the lost loved one in a familiar scene and presents VR virtual reality images that can best heal the heart, thereby reducing psychological disorders or illnesses caused by bereavement, helping the user recover from psychological trauma as early as possible, and solving the problems in the prior art. The invention therefore effectively overcomes various defects in the prior art and has high industrial utilization value.

The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall be covered by the claims of the present invention.
