Intelligent guidance system applied to pull-up

Document No.: 1880844    Publication date: 2021-11-26

Abstract: This invention, "Intelligent guidance system applied to pull-up", was designed and created by 李昀, 雷毅谈, 杨俊燕, 刘雄, 蒋旭刚, 魏晓, 唐聃, 孙云川 and 陈玉坪 on 2021-08-31. The invention relates to the field of physical training counting, in particular to an intelligent guidance system applied to pull-up training, comprising a motion capture module, a motion analysis module and a motion guidance module. The system captures real-time motion data of a user performing pull-ups, analyzes it against stored standard motion data to judge whether the user's motion is standard, and, when it is not, guides the user to adjust the motion through augmented reality. The user can therefore detect and correct errors in his or her motion in time during training, which improves the training effect, avoids physical injury caused by non-standard motion, and safeguards the user's health and safety.

1. An intelligent guidance system applied to pull-up, characterized in that it comprises a storage module, a motion capture module, a motion analysis module and a motion guidance module;

the storage module is used for storing standard motion data of the pull-up exercise;

the motion capture module is used for capturing real-time motion data of a user during pull-up training;

the motion analysis module is used for analyzing whether the user's motion is standard according to the standard motion data and the real-time motion data of the user;

the motion guidance module is used for guiding the user to adjust the motion through augmented reality technology when the user's motion is not standard.

2. The intelligent guidance system applied to pull-up according to claim 1, wherein: the motion capture module comprises an image acquisition module, a marker point capture module and a motion synthesis module;

the image acquisition module is used for acquiring motion images of the user;

the marker point capture module is used for capturing joint point data while the user moves;

the motion synthesis module is used for establishing a user motion model according to the joint point data, and the user motion model is used for generating the real-time motion data.

3. The intelligent guidance system applied to pull-up according to claim 2, wherein: the standard motion data includes the motion type, the gain part and joint point data.

4. The intelligent guidance system applied to pull-up according to claim 3, wherein: the motion analysis module comprises a motion recognition module and a motion comparison module;

the motion recognition module is used for analyzing the motion type of the user according to the real-time motion data;

the motion comparison module is used for analyzing whether the user's motion is standard according to the standard motion data.

5. The intelligent guidance system applied to pull-up according to claim 4, wherein: the motion guidance module comprises a motion correction module, an augmented reality module and AR glasses;

the motion correction module is used for generating a guidance image according to the real-time motion data of the user when the user's motion is not standard;

the augmented reality module is used for projecting the guidance image onto the user's retina through the AR glasses.

6. The intelligent guidance system applied to pull-up according to claim 5, wherein: the system further comprises an information entry module and a face recognition module;

the information entry module is used for entering the facial features and identity information of a user;

the face recognition module comprises a feature capture module and an identity recognition module; the feature capture module is used for capturing facial feature data of the user; the identity recognition module is used for recognizing the identity information of the user according to the captured facial feature data.

7. The intelligent guidance system applied to pull-up according to claim 6, wherein: the system further comprises a performance assessment module, and the performance assessment module comprises a counting module and an attendance module;

the counting module is used for recording the number of repetitions performed by the user during pull-up training;

the attendance module is used for recording the number of the user's training sessions per month and the number of repetitions in each session.

Technical Field

The invention relates to the field of physical training counting, in particular to an intelligent guidance system applied to pull-up.

Background

Mining production involves many potential safety hazards that can cause casualty accidents, which makes mining inherently dangerous; when a mine accident occurs, mine rescue teams are usually called in to carry out the rescue. Mine rescue workers must operate in complex, narrow and enclosed environments with high temperature, humid heat, toxic gases and explosive gases, so the requirements on their physical fitness are extremely strict, and they need to undertake a great deal of physical training in daily life.

The pull-up is an excellent exercise for the arm, shoulder and back muscles, and has therefore been included in the daily physical training of existing mine rescue teams. However, existing pull-up equipment generally provides only a simple counting function, so it is difficult to correct a user promptly when the training motion is not standard; team members are often injured by incorrect training motions, and long-term incorrect motions can cause irreversible damage to their bodies.

Disclosure of Invention

The technical problem to be solved by the invention is to provide a guidance system capable of guiding a user to perform pull-up training with standard motions.

The basic scheme provided by the invention is as follows: an intelligent guidance system applied to pull-up comprises a storage module, a motion capture module, a motion analysis module and a motion guidance module;

the storage module is used for storing standard motion data of the pull-up exercise;

the motion capture module is used for capturing real-time motion data of a user during pull-up training;

the motion analysis module is used for analyzing whether the user's motion is standard according to the standard motion data and the real-time motion data of the user;

the motion guidance module is used for guiding the user to adjust the motion through augmented reality technology when the user's motion is not standard.

The principle and advantages of the invention are as follows: after real-time motion data of a user performing pull-ups is captured, it is analyzed against the stored standard motion data to judge whether the user's motion is standard; when the motion is not standard, the user is guided to adjust it through augmented reality technology. In this way the user can promptly find and correct errors in his or her motion during training, which improves the training effect while avoiding the physical injury caused by non-standard motion, thereby safeguarding the user's health and safety.
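As an illustration of this capture-analyze-guide flow, the following minimal Python sketch wires the three modules together behind hypothetical interfaces; every name in it (MotionCapture, MotionAnalysis, MotionGuidance, training_loop and the angle-based MotionData type) is an assumption made for the sketch, not a term defined by the invention.

```python
# Minimal sketch of the capture -> analyze -> guide flow described above.
# All names here are illustrative assumptions, not terms defined by the invention.
from typing import Dict, Protocol

MotionData = Dict[str, float]  # joint name -> joint angle in degrees


class MotionCapture(Protocol):
    def capture(self) -> MotionData: ...


class MotionAnalysis(Protocol):
    def is_standard(self, real_time: MotionData, standard: MotionData) -> bool: ...


class MotionGuidance(Protocol):
    def guide(self, real_time: MotionData, standard: MotionData) -> None: ...


def training_loop(capture: MotionCapture,
                  analysis: MotionAnalysis,
                  guidance: MotionGuidance,
                  standard: MotionData,
                  frames: int) -> None:
    """Capture each frame, check it against the standard data, guide when not standard."""
    for _ in range(frames):
        real_time = capture.capture()
        if not analysis.is_standard(real_time, standard):
            guidance.guide(real_time, standard)
```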

Further, the motion capture module comprises an image acquisition module, a marker point capture module and a motion synthesis module;

the image acquisition module is used for acquiring motion images of the user;

the marker point capture module is used for capturing joint point data while the user moves;

the motion synthesis module is used for establishing a user motion model according to the joint point data, and the user motion model is used for generating the real-time motion data.

Beneficial effects: the user's real-time motion is analyzed from joint point data captured while the user moves, which allows whether the motion is standard to be judged more accurately than with existing motion contour analysis methods.

Further, the standard motion data includes the motion type, the gain part and joint point data.

Beneficial effects: each training motion is associated with the muscle part it strengthens (the gain part), so that the user can train a specific part in a targeted way according to his or her own needs.

Further, the motion analysis module comprises a motion recognition module and a motion comparison module;

the motion recognition module is used for analyzing the motion type of the user according to the real-time motion data;

the motion comparison module is used for analyzing whether the user's motion is standard according to the standard motion data.

Beneficial effects: the user's motion is checked against the standard motion data, so it can be corrected promptly and accurately.

Further, the motion guidance module comprises a motion correction module, an augmented reality module and AR glasses;

the motion correction module is used for generating a guidance image according to the real-time motion data of the user when the user's motion is not standard;

the augmented reality module is used for projecting the guidance image onto the user's retina through the AR glasses.

Beneficial effects: the user is guided through augmented reality technology, and the guidance image is projected directly onto the user's retina, so the user can adjust the motion accurately without interrupting training.

Further, the system also comprises an information entry module and a face recognition module;

the information entry module is used for entering the facial features and identity information of a user;

the face recognition module comprises a feature capture module and an identity recognition module; the feature capture module is used for capturing facial feature data of the user; the identity recognition module is used for recognizing the identity information of the user according to the captured facial feature data.

Beneficial effects: user information is recognized through face recognition technology, so the identity of the trainee can be determined accurately.

Further, the system comprises a performance assessment module, and the performance assessment module comprises a counting module and an attendance module;

the counting module is used for recording the number of repetitions performed by the user during pull-up training;

the attendance module is used for recording the number of the user's training sessions per month and the number of repetitions in each session.

Beneficial effects: the user's monthly training volume is recorded by counting training sessions and repetitions, which helps judge whether the user's physical fitness has declined.

Drawings

Fig. 1 is a schematic diagram of a first embodiment of an intelligent guidance system applied to pull-up according to the present invention.

Detailed Description

The invention is described in further detail below by way of a specific embodiment. The specific implementation process is as follows:

Embodiment 1

Embodiment 1 is basically as shown in Fig. 1. The intelligent guidance system applied to pull-up comprises a storage module, a performance assessment module, an information entry module, a face recognition module, a motion capture module, a motion analysis module and a motion guidance module. The storage module stores the standard motion data and user information including facial features and identity information; the standard motion data comprises the motion type, the gain part and joint point data.

The information entry module is used for acquiring facial feature data and identity information data of a user and transmitting the acquired data to the storage module for storage.

The motion capture module comprises an image acquisition module, a marker point capture module and a motion synthesis module. The image acquisition module acquires motion images of the user through three cameras, which respectively acquire image data of the user's left side, right side and front. The marker point capture module captures the motion trajectories of the user's joint points in the motion images and generates the joint point data; the captured joint points mainly include the wrists, elbow joints, shoulder joints, knee joints and spine. The motion synthesis module establishes a motion model of the user from the captured joint point data; this embodiment adopts an articulated virtual human body motion model based on a Petri net, and the real-time motion data of the user is generated from this model.
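As a hedged sketch of how real-time motion data could be derived from the captured joint points, the code below computes per-joint angles from triplets of reconstructed 3D joint positions; the Petri-net based body model of this embodiment is not reproduced, and the specific joints and the real_time_motion_data helper are illustrative assumptions.

```python
# Sketch: derive per-joint angles (one frame of real-time motion data) from the
# captured joint points. The Petri-net based body model is not reproduced here,
# and the joint names used below are illustrative assumptions.
import math
from typing import Dict, Tuple

Point = Tuple[float, float, float]  # 3D joint position reconstructed from the three cameras


def joint_angle(a: Point, b: Point, c: Point) -> float:
    """Angle at joint b, in degrees, formed by the segments b->a and b->c."""
    ba = tuple(x - y for x, y in zip(a, b))
    bc = tuple(x - y for x, y in zip(c, b))
    dot = sum(x * y for x, y in zip(ba, bc))
    norm = math.dist(a, b) * math.dist(c, b)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


def real_time_motion_data(joints: Dict[str, Point]) -> Dict[str, float]:
    """Build one frame of real-time motion data from the captured joint points."""
    return {
        "elbow": joint_angle(joints["wrist"], joints["elbow"], joints["shoulder"]),
        "shoulder": joint_angle(joints["elbow"], joints["shoulder"], joints["spine"]),
    }
```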

The motion analysis module comprises a motion recognition module and a motion comparison module. The motion recognition module analyzes the type of pull-up motion the user is performing from the user's real-time motion data; in this embodiment the motion types include the forward-grip (overhand) pull-up and the reverse-grip (underhand) pull-up. The motion comparison module compares the user's motion with the standard motion data to analyze whether it is standard, and when the motion is not standard the motion guidance module guides the user to adjust it.
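Under the assumption, introduced in the earlier sketch, that the standard motion data stores an allowed angle range per joint, the comparison step could be implemented along the following lines; the tolerance value and function name are assumptions for illustration.

```python
# Sketch of the motion comparison step: flag joints whose angles leave the standard range.
# Assumes the standard motion data stores (min, max) angle ranges per joint, as sketched above.
from typing import Dict, List, Tuple


def non_standard_joints(frame: Dict[str, float],
                        standard_ranges: Dict[str, Tuple[float, float]],
                        tolerance_deg: float = 5.0) -> List[str]:
    """Return the joints whose measured angle falls outside the standard range."""
    flagged = []
    for joint, (lo, hi) in standard_ranges.items():
        angle = frame.get(joint)
        if angle is None:
            continue  # joint not captured in this frame
        if angle < lo - tolerance_deg or angle > hi + tolerance_deg:
            flagged.append(joint)
    return flagged


# Example: a frame where the knees are bent more than the standard allows.
frame = {"elbow": 95.0, "knee": 120.0}
print(non_standard_joints(frame, {"elbow": (10.0, 170.0), "knee": (160.0, 180.0)}))
# -> ['knee']
```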

Specifically, the motion guidance module comprises a motion correction module, an augmented reality module and AR glasses. The motion correction module generates a guidance image from the user's real-time motion data; the guidance image comprises a standard motion image and a gain description, and the gain description shows the gain part associated with each joint point during the movement. For example, during a pull-up, if the user bends the knee joints so that the body leans backwards, the corresponding gain part is the back muscles; by informing the user of the gain part corresponding to the current motion, the user can train a specific part in a targeted way. The augmented reality module projects the guidance image onto the user's retina through the AR glasses; because the image is projected directly onto the retina, it can be overlaid on the user's view of his or her own body, so the user can compare the guidance image with the actual motion in real time while adjusting it, which greatly improves the accuracy of the guidance.
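The rendering on the AR glasses is handled by the hardware and is not reproduced here; the sketch below only shows how the guidance content (a target pose plus per-joint hints including the gain part) might be assembled before it is handed to the augmented reality module. The field names and the GAIN_PARTS mapping are assumptions for illustration.

```python
# Sketch: assemble the guidance content (standard pose + gain description) for a
# non-standard frame, before handing it to the AR rendering layer (not shown here).
# All field names and the GAIN_PARTS mapping are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List, Tuple

GAIN_PARTS = {
    "elbow": "arm flexors",                                        # illustrative association
    "knee": "back muscles (per the bent-knee, lean-back example above)",
}


@dataclass
class GuidanceImage:
    standard_pose: Dict[str, float]   # target joint angles to draw as an overlay
    hints: List[str]                  # textual corrections shown next to each joint


def build_guidance(frame: Dict[str, float],
                   standard_ranges: Dict[str, Tuple[float, float]],
                   flagged: List[str]) -> GuidanceImage:
    pose = {j: (lo + hi) / 2.0 for j, (lo, hi) in standard_ranges.items()}
    hints = [
        f"{j}: measured {frame[j]:.0f} deg, target "
        f"{standard_ranges[j][0]:.0f}-{standard_ranges[j][1]:.0f} deg; "
        f"gain part: {GAIN_PARTS.get(j, 'n/a')}"
        for j in flagged
    ]
    return GuidanceImage(standard_pose=pose, hints=hints)
```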

The face recognition module and the identity recognition module in this embodiment adopt existing face recognition technology and recognize the identity of the user in the acquired motion images according to the user's facial feature data.
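Since existing face recognition technology is used, one possible sketch of this step relies on the open-source face_recognition Python package; the known_users dictionary standing in for the storage module is an assumption for this illustration.

```python
# Sketch of the identity recognition step using the open-source "face_recognition"
# package; known_users stands in for the storage module and is an assumption here.
import face_recognition


def identify_user(image_path: str, known_users: dict) -> str:
    """known_users maps identity info (e.g. a name) -> stored facial feature encoding."""
    image = face_recognition.load_image_file(image_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return "no face detected"
    names = list(known_users.keys())
    matches = face_recognition.compare_faces(list(known_users.values()), encodings[0])
    for name, matched in zip(names, matches):
        if matched:
            return name
    return "unknown user"
```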

The performance assessment module comprises a counting module and an attendance module. The counting module records the number of repetitions the user performs during pull-up training, and the attendance module records the number of the user's training sessions per month and the number of repetitions in each session. Recording the user's monthly training volume in this way makes it possible to judge whether the user's physical fitness has declined.
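A minimal sketch of the counting and attendance bookkeeping follows; the rule of counting a repetition when the chin rises above the bar and then drops back below it, together with the record layout, is an assumption for illustration rather than a detail specified by the embodiment.

```python
# Sketch of repetition counting and monthly attendance bookkeeping.
# The "chin above bar" counting rule and the record layout are assumptions.
from collections import defaultdict
from datetime import date
from typing import Dict, Iterable, List


def count_reps(chin_heights: Iterable[float], bar_height: float) -> int:
    """Count one repetition each time the chin rises above the bar and drops back below."""
    reps, above = 0, False
    for h in chin_heights:
        if not above and h > bar_height:
            above = True
        elif above and h < bar_height:
            above, reps = False, reps + 1
    return reps


class AttendanceModule:
    """Records, per user and per month, the repetition count of each training session."""
    def __init__(self) -> None:
        self.records: Dict[str, Dict[str, List[int]]] = defaultdict(lambda: defaultdict(list))

    def log_session(self, user: str, reps: int, day: date) -> None:
        self.records[user][day.strftime("%Y-%m")].append(reps)

    def monthly_summary(self, user: str, month: str) -> tuple:
        sessions = self.records[user][month]
        return len(sessions), sum(sessions)   # (training sessions this month, total reps)
```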

The foregoing is merely an exemplary embodiment of the present invention; no attempt has been made to describe structural details in more depth than is necessary for a fundamental understanding, and the description together with the drawings makes clear to those skilled in the art how the several forms of the invention may be embodied in practice. It should be noted that those skilled in the art can make several changes and modifications without departing from the structure of the present invention; these shall also fall within the protection scope of the invention and will not affect the effect of its implementation or the practicability of the patent. The scope of protection of the present application shall be determined by the contents of the claims, and the embodiments and other descriptions in the specification may be used to interpret the claims.
