AR Hanfu (Chinese garment) changing method

Document No.: 1954832 · Published: 2021-12-10

Reading note: This technology, an AR Hanfu changing method, was designed and created by 郑倩 (Zheng Qian), 徐柯妮 (Xu Keni), 李虹 (Li Hong), and 李清明 (Li Qingming) on 2021-09-27.

Abstract: The invention relates to an AR Hanfu changing method, comprising: building a virtual Hanfu model from a Hanfu picture or a physical garment; detecting a human body, tracking skeletal joints and movements, collecting human skeletal data, and controlling a body model according to that data; measuring the relative coordinates of a somatosensory device and the human body, and applying the relative coordinates to the body model; and, based on the relative coordinates, establishing a correspondence between the virtual Hanfu model and the human skeletal joints. The method detects and tracks a user in front of a large screen, recognizes the user's gestures to select garments, and resizes and fits the garment to the user's body, providing a try-on experience that combines the virtual and the real. Accuracy is high, different body parts are distinguished more clearly, arm and leg movements are recognized better, and the user experience is improved.

1. An AR Hanfu changing method, the method comprising:

building a virtual Hanfu model from a Hanfu picture or a physical garment;

detecting a human body, tracking skeletal joints and movements, collecting human skeletal data, and controlling a body model according to the skeletal data;

measuring relative coordinates of a somatosensory device and the human body, and applying the relative coordinates to the body model;

and establishing, based on the relative coordinates, a correspondence between the virtual Hanfu model and the human skeletal joints.

2. The AR Hanfu changing method according to claim 1, wherein after the virtual Hanfu model is built from a Hanfu picture or a physical garment, the method comprises:

unfolding the virtual Hanfu model into a plurality of small parts;

texturing and coloring the small parts respectively;

and assigning skinning weights to the virtual Hanfu model, exporting an FBX file carrying skeleton information, importing it into UNITY, and adapting it to the UNITY skeleton.

3. The AR Hanfu changing method of claim 1, wherein said detecting a human body, tracking skeletal joints and movements, collecting human skeletal data, and controlling a body model according to the skeletal data comprises:

performing, with a camera of the somatosensory device, skeletal tracking on images of one or two persons in its field of view, tracking a plurality of skeletal joints on the human body, and collecting human skeletal data;

and controlling the body model according to the human skeletal data.

4. The AR Hanfu changing method of claim 3, wherein the somatosensory device is a KINECT 2.0.

5. The AR Hanfu changing method of claim 4, wherein the method further comprises:

acquiring, in real time in Unity3d using the KinectManger component, the user data captured by the Kinect device, including color image data, depth data, and skeleton data from the ColorImageStream, DepthImageStream, and SkeletonStream.

6. The AR Hanfu changing method of claim 5, wherein the method further comprises:

implementing the Kinect management functionality through three member functions of KinectManger, namely CreatConnect, Update, and ProcessSkeleton, wherein CreatConnect detects whether the computer is connected to a Kinect device and sets the device to a skeleton-capture state; Update refreshes each frame of the picture and detects whether member variables of the KinectManger class have changed; and ProcessSkeleton reads and smooths the skeleton data from a skeleton frame.

Technical Field

The invention relates to AR virtual dress-changing technology, and in particular to an AR Hanfu changing method.

Background

Apparel-related industries are worth $3 trillion worldwide, and clothing heads the four necessities of life ("clothing, food, housing, and transport"). As the country grows stronger, the influence of Hanfu on the world keeps increasing. Hanfu embodies the whole of Chinese clothing culture and is at the same time an important carrier of Chinese culture. Hanfu comes in many styles and rich varieties at low prices, but it is inconvenient to try on. Solving the Hanfu try-on problem lets consumers try on a variety of Hanfu efficiently and conveniently, promotes Hanfu, and, in the course of the try-on experience, encourages Hanfu enthusiasts to learn more about Hanfu and about traditional culture.

In recent years, "magic fitting mirrors" have been released abroad. In appearance they resemble the ordinary fitting mirrors found in shopping malls, but an intelligent chip is embedded in the mirror surface, turning it into a display screen capable of human-computer interaction. The product uses Microsoft's Kinect somatosensory technology; the user can select a clothing style through touchless virtual buttons, demonstrating the convenience of try-on. Similar fitting mirrors also exist in China.

However, current fitting accuracy is not high enough: different body parts are not clearly distinguished, arm and leg movements are poorly recognized, and the user experience suffers.

Disclosure of Invention

In view of the above, there is a need for an AR Hanfu changing method with high fitting accuracy.

An AR Hanfu changing method, the method comprising:

building a virtual Hanfu model from a Hanfu picture or a physical garment;

detecting a human body, tracking skeletal joints and movements, collecting human skeletal data, and controlling a body model according to the skeletal data;

measuring relative coordinates of a somatosensory device and the human body, and applying the relative coordinates to the body model;

and establishing, based on the relative coordinates, a correspondence between the virtual Hanfu model and the human skeletal joints.

Further, after the virtual Hanfu model is built from a Hanfu picture or a physical garment, the method includes:

unfolding the virtual Hanfu model into a plurality of small parts;

texturing and coloring the small parts respectively;

and assigning skinning weights to the virtual Hanfu model, exporting an FBX file carrying skeleton information, importing it into UNITY, and adapting it to the UNITY skeleton.

Further, the detecting a human body, tracking skeletal joints and movements, collecting human skeletal data, and controlling a body model according to the skeletal data includes:

performing, with a camera of the somatosensory device, skeletal tracking on images of one or two persons in its field of view, tracking a plurality of skeletal joints on the human body, and collecting human skeletal data;

and controlling the body model according to the human skeletal data.

Further, the somatosensory device is a KINECT 2.0.

Further, the method also comprises:

acquiring, in real time in Unity3d using the KinectManger component, the user data captured by the Kinect device, including color image data, depth data, and skeleton data from the ColorImageStream, DepthImageStream, and SkeletonStream.

Further, the method also comprises:

implementing the Kinect management functionality through three member functions of KinectManger, namely CreatConnect, Update, and ProcessSkeleton, wherein CreatConnect detects whether the computer is connected to a Kinect device and sets the device to a skeleton-capture state; Update refreshes each frame of the picture and detects whether member variables of the KinectManger class have changed; and ProcessSkeleton reads and smooths the skeleton data from a skeleton frame.

The AR Hanfu changing method above can detect and track a user in front of a large screen, detect the user's gestures to select clothes, and resize and fit the garment to the user's body, providing a try-on experience that combines the virtual and the real. Accuracy is high, different body parts are distinguished more clearly, arm and leg movements are recognized better, and the user experience is improved.

Drawings

FIG. 1 is a flowchart of an AR Hanfu changing method according to an embodiment;

FIG. 2 is a diagram of a virtual Hanfu model.

Detailed Description

To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Evidently, the described embodiments are some, but not all, of the embodiments of the invention. All other embodiments that a person skilled in the art can derive from these embodiments without creative effort fall within the protection scope of the invention.

As shown in FIG. 1, in one embodiment, an AR Hanfu changing method includes:

Step S110: build a virtual Hanfu model from a Hanfu picture or a physical garment. First, an existing Hanfu picture or physical Hanfu garment is modeled, as shown in FIG. 2. The modeled Hanfu is then unfolded and divided into small parts in preparation for subsequent texturing; these first two steps mainly use MAYA (Autodesk's three-dimensional modeling and animation software) together with 3DS MAX (3D Studio Max, PC-based three-dimensional animation, rendering, and production software). Next, the model is textured, and the color, material, and so on of the garment are realized in a professional 3D painting tool. Finally, skinning weights are assigned to the model, an FBX file carrying skeleton information is exported and imported into UNITY, and the model is adapted to the UNITY skeleton so the garment can track the body.

Step S120: detect the human body, track skeletal joints and movements, collect the human skeletal data, and control the body model according to that data. The RGB camera of the KINECT 2.0 somatosensory device performs skeletal tracking on one or two persons in the device's field of view; 20 skeletal joints on the human body can be tracked, and the captured skeletal data is used to control the body model.
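The patent's implementation drives a Unity character model in C#; the following is a minimal Python sketch of the same idea, driving a body model from one frame of tracked joints. The `BodyModel` class is illustrative, but the 20 joint names follow the Kinect for Windows SDK 1.x skeleton stream that the described 20-joint tracking corresponds to.

```python
# The 20 skeletal joints tracked by the sensor (Kinect SDK 1.x naming).
KINECT_JOINTS = [
    "HipCenter", "Spine", "ShoulderCenter", "Head",
    "ShoulderLeft", "ElbowLeft", "WristLeft", "HandLeft",
    "ShoulderRight", "ElbowRight", "WristRight", "HandRight",
    "HipLeft", "KneeLeft", "AnkleLeft", "FootLeft",
    "HipRight", "KneeRight", "AnkleRight", "FootRight",
]

class BodyModel:
    """Minimal stand-in for the controlled body model (hypothetical)."""
    def __init__(self):
        # Every bone starts at the origin of the sensor's camera space.
        self.bones = {name: (0.0, 0.0, 0.0) for name in KINECT_JOINTS}

    def apply_skeleton(self, joint_positions):
        """Copy tracked (x, y, z) joint positions onto the model's bones."""
        for name, pos in joint_positions.items():
            if name in self.bones:
                self.bones[name] = pos

model = BodyModel()
# One frame of camera-space data for two of the 20 joints.
model.apply_skeleton({"Head": (0.1, 0.9, 2.0), "HandLeft": (-0.4, 0.2, 1.8)})
```

Untracked joints keep their previous positions, which is also how a frame with partial tracking would behave.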

Step S130: measure the relative coordinates of the somatosensory device and the human body, and apply them to the body model. The KINECT 2.0 measures the relative coordinates of the human body with respect to the device, and these are applied to the body model. The 3D depth sensor detects depth images and captures the user's 3D movements in real time, while the picture from the KINECT 2.0 RGB camera is displayed.
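As a sketch of this step, the sensor reports joints in its own camera space; expressing them relative to a body-root joint and scaling into model units is one plausible reading of "applying the relative coordinates to the body model". The root-joint choice (`HipCenter`) and the scale factor are assumptions for illustration.

```python
# Hypothetical sketch: camera-space joints -> body-relative -> model space.

def to_relative(joints, root="HipCenter"):
    """Express every joint relative to the root joint (body origin)."""
    rx, ry, rz = joints[root]
    return {name: (x - rx, y - ry, z - rz)
            for name, (x, y, z) in joints.items()}

def to_model_space(relative_joints, scale=1.0):
    """Scale body-relative sensor coordinates into model units."""
    return {name: (x * scale, y * scale, z * scale)
            for name, (x, y, z) in relative_joints.items()}

# One frame: the user stands about 2.5 m from the sensor.
frame = {"HipCenter": (0.2, 0.0, 2.5), "Head": (0.2, 0.8, 2.4)}
rel = to_relative(frame)                      # Head ≈ (0.0, 0.8, -0.1)
model_joints = to_model_space(rel, scale=100.0)  # e.g. metres -> model units
```

With this decomposition the user can move around in front of the device without the garment drifting, since only the pose relative to the body origin reaches the model.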

Step S140: based on the relative coordinates, establish a correspondence between the virtual Hanfu model and the human skeletal joints. To achieve human-computer interaction between the user and the three-dimensional garment, the user information captured by the Kinect can be acquired in real time in Unity3d, with the Kinect for Windows SDK (Kinect software development kit) used to connect Unity3d and the Kinect. Before somatosensory interaction can take place, the virtual garment model imported into Unity3d must be put into correspondence with the human skeletal joints captured by the Kinect.
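The correspondence can be pictured as a lookup table from garment-rig bones to sensor joints. The rig bone names below are hypothetical (they depend on how the FBX was rigged in step S110); the joint names follow the SDK's skeleton stream.

```python
# Hypothetical correspondence table: garment rig bone -> Kinect joint.
BONE_TO_JOINT = {
    "spine_01": "Spine",
    "clavicle_l": "ShoulderLeft",
    "clavicle_r": "ShoulderRight",
    "sleeve_l": "ElbowLeft",
    "sleeve_r": "ElbowRight",
    "skirt_root": "HipCenter",
}

def pose_garment(joint_positions, mapping=BONE_TO_JOINT):
    """Return the garment bone positions implied by one skeleton frame.

    Bones whose joint was not tracked this frame are omitted, so the
    garment keeps its previous pose for them.
    """
    return {bone: joint_positions[joint]
            for bone, joint in mapping.items()
            if joint in joint_positions}

# A partial frame: only the spine and hip centre were tracked.
frame = {"Spine": (0.0, 0.5, 2.0), "HipCenter": (0.0, 0.0, 2.0)}
pose = pose_garment(frame)  # only spine_01 and skirt_root are posed
```

In the Unity implementation this table would correspond to assigning each garment bone to an avatar joint once at import time rather than per frame.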

The KinectManger component is used to obtain, in real time in Unity3d, the user data captured by the Kinect device, including color image data, depth data, and skeleton data from the ColorImageStream, DepthImageStream, and SkeletonStream.

The Kinect's management functionality is implemented by three member functions of KinectManger: CreatConnect (create connection), Update, and ProcessSkeleton. CreatConnect detects whether the computer is connected to a Kinect device and sets the device to a skeleton-capture state; Update refreshes each frame of the picture and detects whether member variables of the KinectManger (script attribute) class have changed; and ProcessSkeleton reads and smooths the skeleton data from a skeleton frame.
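The real component is a Unity C# script; the Python sketch below models only the control flow of those three member functions. The smoothing shown is plain exponential smoothing, an assumption for illustration (the SDK exposes its own smoothing parameters, which differ).

```python
class KinectManagerSketch:
    """Hypothetical model of the three KinectManger member functions."""

    def __init__(self, alpha=0.5):
        self.connected = False
        self.capturing = False
        self.alpha = alpha      # smoothing factor in (0, 1]
        self.smoothed = {}      # joint -> smoothed (x, y, z)

    def creat_connect(self, device_present):
        """Detect whether a Kinect is attached; if so, enter capture state."""
        self.connected = bool(device_present)
        self.capturing = self.connected
        return self.connected

    def update(self, skeleton_frame=None):
        """Per-frame tick: process a new skeleton frame if one arrived."""
        if self.capturing and skeleton_frame is not None:
            self.process_skeleton(skeleton_frame)

    def process_skeleton(self, frame):
        """Read joints from one skeleton frame and exponentially smooth them."""
        a = self.alpha
        for joint, (x, y, z) in frame.items():
            px, py, pz = self.smoothed.get(joint, (x, y, z))
            self.smoothed[joint] = (a * x + (1 - a) * px,
                                    a * y + (1 - a) * py,
                                    a * z + (1 - a) * pz)

mgr = KinectManagerSketch(alpha=0.5)
mgr.creat_connect(device_present=True)
mgr.update({"Head": (0.0, 1.0, 2.0)})  # first sample: no history yet
mgr.update({"Head": (0.2, 1.0, 2.0)})  # pulled halfway toward the new sample
```

Smoothing of this kind is what suppresses joint jitter so the garment does not tremble on a stationary user.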

With the AR Hanfu changing method above, the system can detect and track a user in front of a large screen, detect the user's gestures to select clothes, resize the garment to the user's body type and put it on, and deliver a try-on experience combining the virtual and the real; a gesture triggers photographing, and a QR code is generated for the user to download the photo. The method can be applied to offline settings such as shopping malls, scenic spots, exhibitions, and personnel training. The software can run on vertical all-in-one machines, projection systems, LED display systems, and the like that are equipped with Kinect somatosensory hardware.

The invention realizes human-computer interaction mainly through AR fitting implemented with UNITY3D on a PC, fitting the garment to a real person. The somatosensory camera of the KINECT 2.0 device has low latency and high clarity; skeletal tracking captures the human skeleton closely, the captured skeletal data controls the character model, the relative coordinates of the human body and the KINECT are measured by the KINECT and applied to the character model, and the picture from the KINECT RGB camera is displayed at the same time. Accuracy is high, different body parts are distinguished more clearly, arm and leg movements are recognized better, and the user experience is improved. Latency is low, skeleton capture is strong, and person and garment move in synchrony without separating.

The embodiments above express only several implementations of the present invention, and while their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, all of which fall within the protection scope of the invention. The protection scope of this patent is therefore subject to the appended claims.
