Personalized motion recommendation method based on gradient strategy decision algorithm

Document No.: 1800863    Published: 2021-11-05    Views: 13    Language: Chinese

Reading note: This technique, "Personalized motion recommendation method based on gradient strategy decision algorithm" (一种基于梯度策略决策算法的个性化运动推荐方法), was designed and created by 杨良怀 and 翁伟宁 on 2021-08-11. Abstract: The invention provides a personalized dynamic motion recommendation algorithm based on the motion habits of a user, which can effectively mine the user's personalized motion representation and realize rapid adaptation and dynamic adjustment of personalized motion in motion recommendation. The specific steps are as follows: characterize the user's motion habits, including a time habit characterization, an intensity habit characterization, and an overall motion characterization; construct a motion habit adaptation model consisting mainly of a self-training motion configuration decision body and a fast recommendation algorithm, where the decision body self-trains from few-sample labels and the recommendation algorithm performs personalized adaptation of motion habit and intensity according to the decision body; adjust the decision target in real time, evaluating how well the personalized recommendation fits the user's actual motion amount and updating the decision result in real time with the actually completed motion amount as the optimization target; and a decision-model self-adaptation mechanism mines dynamic changes in the user's motion habits and adaptively adjusts the decision-body parameters, so that the decision body tracks the user's habits in real time.

1. A personalized motion recommendation method based on a gradient strategy algorithm comprises the following steps:

(1) collecting the representation of the exercise habits of the user;

(2) a motion habit adaptation model based on a reinforced gradient strategy algorithm is constructed and mainly comprises a self-training motion configuration decision body and a quick recommendation algorithm, wherein the self-training motion configuration decision body is self-trained according to few-sample labels, and the recommendation algorithm carries out personalized adaptation on motion habits and strength according to the decision body.

(3) adjusting the decision target in real time: evaluating the fit between the personalized motion recommendation and the user's actual motion amount, and updating the decision result in real time with the actually completed motion amount as the optimization target.

(4) a decision-model self-adaptation mechanism: mining dynamic changes in the user's motion habits and adaptively adjusting the decision-body parameters, so that the decision body updates with the user's habits in real time.

2. The personalized motion recommendation method based on gradient strategy algorithm according to claim 1, characterized in that: the characterization of the exercise habits in step (1) specifically comprises the following steps:

(1.1) time habit characterization;

the time habit is expressed in the form of a vector track comprising the individual's exercise durations and exercise time periods. The vector track is divided into hourly time slots, which form a trajectory sequence D_{1:n} = {d_1, d_2, …, d_n} in chronological order, where d_n represents the individual's average exercise time in the n-th hour slot, calculated by equation (1):

d_n = (1/m) Σ_{i=1}^{m} x_{i,n}    (1)

where m is the number of weeks over which the individual habit is computed, the m weeks of exercise data serving as the individual's exercise-time habit, and x_{i,n} is the exercise duration in the n-th hour slot of the i-th week. Serializing the per-slot mean durations of the m weeks of data yields the time-habit characterization, and the similarity of exercise-time habits can be measured on these trajectories.

(1.2) strength habit characterization;

the intensity habit characterization is similar to the time habit characterization: the hourly slots form a trajectory sequence S_{1:n} = {s_1, s_2, …, s_n} in chronological order, where s_n represents the individual's average exercise intensity in the n-th hour slot, calculated in the same way as equation (1):

s_n = (1/m) Σ_{i=1}^{m} x_{i,n}    (2)

where m is the number of weeks over which the individual habit is computed, the m weeks of exercise data serving as the individual's exercise-intensity habit, and x_{i,n} is the average exercise heart rate in the n-th hour of the i-th week. Heart rate is directly tied to intensity: the higher the heart rate, the heavier the individual's exercise load, so it can be taken to reflect the individual's exercise intensity directly. Serializing the per-slot means of the m weeks of data yields the intensity-habit characterization, and the similarity of exercise-intensity habits can be measured on these trajectories.

(1.3) general characterization of motion

The overall habit is characterized by the total motion amount, represented as a vector trajectory sequence distributed over a limited time. The total motion amount is the product of motion intensity and motion time:

T_{1:n} = {d_1 s_1, d_2 s_2, …, d_n s_n}    (3)

where T_{1:n} is the individual's habitual total-motion characterization vector, representing the distribution of the individual's motion amount within the limited time. Organized in trajectory form, this habit distribution serves as the reference trajectory for fine-grained recommendation and as the learning standard of the decision body, realizing motion-distribution adaptation and recommendation.

3. The personalized motion recommendation method based on gradient strategy algorithm of claim 1, wherein: the step (2) of constructing the motion habit adaptation model based on the reinforced gradient strategy algorithm specifically comprises the following steps:

(2.1) the motion habit adaptation model overall architecture is mainly divided into two models: a self-training motion configuration model and a personalized recommendation model.

(2.2) building a reinforced decision machine based on a gradient strategy: the reinforced decision machine learns the individual's motion habits and imitates the individual's decision thinking in motion allocation to complete rapid motion adaptation.

(2.3) building a personalized motion recommendation model, whose core is the gradient-strategy reinforced decision machine: the input is the individual's one-week training total, a Markov decision process drives the decision machine's self-sampling training, and the output is a weekly motion recommendation table specifying, for each time slot, whether to exercise, the exercise intensity, and the exercise duration.

4. The personalized motion recommendation method based on gradient strategy algorithm of claim 3, wherein: the self-training exercise configuration model in step (2.1) is the decision center, which learns detailed exercise-decision characteristics from the individual's total-exercise habits and produces individual exercise recommendations under different exercise demands and different time periods. The model simulates the individual's way of thinking about exercise and imitates the individual's decision process, forming a self-sampling, self-training recommendation decision model. This mode reduces the model's input requirements and achieves fast adaptation to individual exercise habits. On top of the decision model, the personalized recommendation model generates the recommended exercise as a trajectory distribution: it makes a decision at each time point and, by changing the motion-distribution state within a decision, drives the subsequent decisions. The resulting decision trajectory is the habit-based recommended exercise prescription.

5. The personalized motion recommendation method based on gradient strategy algorithm of claim 3, wherein: in step (2.1), the kernel of the motion habit adaptation model is a decision machine based on a gradient strategy: the decision machine analyzes each fine-grained state and determines the exercise arrangement for that state, learning exercise-arrangement patterns for different times and required motion amounts, and comprises two parts, decision-machine construction and sampling training. The personalized recommendation model forms a motion-adaptation trajectory vector with the pre-trained decision machine: the decision machine determines the motion amount allocated at each time point and completes the overall trajectory configuration, forming a complete habit-based exercise recommendation scheme.

6. The personalized motion recommendation method based on gradient strategy algorithm of claim 3, wherein: in step (2.2), the reinforced decision machine defines a motion state and takes it as input. The motion state consists of the time-slot position within the motion trajectory and the remaining motion task amount, represented as a sequence: the current time slot is the position of the decision moment in the whole trajectory sequence, indexed 0 to n, and the remaining task amount is a numerical value giving the total motion amount still assignable at decision time. The decision machine thus receives a 1×2 state sequence, which carries the sampling information a human brain would use when deciding on exercise. From the state sequence, two identical feedforward networks select the exercise time and the exercise intensity, respectively. Each network has four layers: the first layer is the input sequence; the second layer is an up-sampling fully connected layer with 8 neurons, using ReLU as the activation function to improve nonlinear learning ability; the third layer is an up-sampling fully connected layer with 16 neurons, also using ReLU to improve the model's generalization ability; the fourth layer is the output layer with 9 neurons, using a softmax activation function to represent the probabilities of selecting each exercise time or exercise intensity. In exercise-time selection, each output neuron represents one of 8 exercise times from 20 to 60 minutes at 5-minute intervals, plus a zero value for assigning no exercise time. Exercise intensity is represented by heart rate: the range from the exercise onset value of 120 to the exercise warning heart rate of 160 for healthy people is divided at a minimum interval of 5 into 8 heart rates plus a zero value for assigning no exercise. On the basis of the remaining motion amount and the weekly time slot, the decision center learns the individual's exercise habits and allocation thinking, realizing adaptive exercise decisions.

7. The personalized motion recommendation method based on gradient strategy algorithm of claim 3, wherein: the Markov decision process in step (2.3) moves the decision machine along the sampling sequence in time order; the process sequence is S = {s_1, a_1, s_2, a_2, …, s_n, a_n}, where s_i denotes the state of the i-th time node, i.e. the time-node position and the remaining total motion amount, and a_i is the motion-allocation strategy generated by the decision machine from the state information. Each strategy allocation changes the state information and thereby affects subsequent decisions; executing the sequence of decisions over the n states yields the motion allocation at every time point. Unlike a standard Markov decision process, the reward is replaced by the error between the sampled motion allocation and the individual's habitual motion: the mean square error between the allocated sampling sequence and the individual habit sequence gives the training error of the decision machine, and if the sampling sequence fails to allocate the full workload, the error is penalized to improve the decision machine's ability to allocate the entire motion amount. The error is defined by the following formula:

loss = (1/n) Σ_{i=1}^{n} (T_i - P_i)^2 + max(0, G - Σ_{i=1}^{n} P_i)^2    (4)

where loss is the loss function for parameter training, T denotes the individual's real habit sequence, P the sampling sequence, and G the input weekly motion total. The first half of the loss function constrains the allocation pattern to resemble the individual's habits, while the penalty added in the second half pushes the decision machine to allocate the individual's set target task amount completely. In this process the central decision machine is trained repeatedly by self-sampling; with the number of sampling-training rounds set to 500, the decision center learns the individual's exercise habits and motion-allocation patterns over the 500 rounds of self-sampling, completing the habit-based weekly target short-term exercise plan recommendation.

8. The personalized motion recommendation method based on gradient strategy algorithm of claim 1, wherein: step (3) implements real-time dynamic target adjustment, with the following specific adjustment steps:

a fixed individual exercise recommendation table cannot adapt in real time to the low agreement between the prescribed plan and actual exercise caused by weather, venue, or physiological reasons. To address this, real-time exercise-target adjustment based on interactive feedback adjusts the remaining plan according to the actually completed exercise amount, reconstructs the plan sequence, and records the actual exercise for habit updating. First, actual exercise is tracked in time order: at recommended exercise time nodes the system waits for the individual's actual exercise feedback, and at non-recommended time nodes it accepts any additional exercise the individual completes. When a time node changes the motion amount, the decision center reconstructs the decision state, takes the new remaining motion total as the node state, and resamples the sequence according to the individual's habits; the new sequence replaces the recommendation sequence after the change point, so recommendations are updated in real time as the actual exercise situation changes.
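The replanning step above can be sketched as follows; `replan`, `even_sampler`, and their interface are hypothetical illustrations (the even-spread sampler stands in for the trained decision machine, which would resample according to habit):

```python
def replan(planned_remaining_total, actual_extra_done, slots, sampler):
    """On a motion-amount change node, rebuild the remaining plan: the new
    remaining total becomes the node state and the sampler produces a fresh
    sequence for the slots after the change point."""
    new_total = max(0.0, planned_remaining_total - actual_extra_done)
    return sampler(new_total, slots)

# Hypothetical stand-in sampler: spread the remaining total evenly
def even_sampler(total, slots):
    return [total / slots] * slots

# 120 units were still planned; the user already completed 30 of them early
print(replan(120.0, 30.0, 3, even_sampler))  # [30.0, 30.0, 30.0]
```

The real decision machine would replace `even_sampler`, reproducing the habit-shaped allocation instead of a uniform one.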

9. The personalized motion recommendation method based on gradient strategy algorithm of claim 1, wherein: the motion decision model adaptive mechanism in the step (4) specifically comprises:

dynamic exercise-habit updating adjusts the habit record and the decision-machine parameters on a weekly basis from both long-term and short-term habits: in exercise recommendation, the individual's actually completed weekly amount is taken as the individual's new habit and used to update the habit sequence. However, taking the actual exercise directly as the new habit easily absorbs accidental environmental factors into the habit and cannot fully represent the individual's habit state; therefore a weighted update is performed that retains part of the long-term habit while learning part of the short-term habit, computed as follows:

T_{1:n} = 0.9 T_{1:n} + 0.1 R_{1:n}    (5)

where R_{1:n} is the individual's actual exercise-amount sequence. With the update weights set to 0.9 and 0.1 respectively, most of the long-term habit is retained while part of the short-term habit is learned, updating the exercise-habit trajectory characterization. The decision-machine parameters are then adjusted through repeated sampling, realizing dynamic updating of the decision machine as habits change.
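The weighted update of formula (5) is a simple exponential blend of the old habit and the actual weekly trajectory; a minimal sketch with toy 4-slot trajectories (assumed values):

```python
import numpy as np

def update_habit(T, R, keep=0.9):
    """Formula (5): retain most of the long-term habit T and blend in a
    fraction of the actually completed weekly trajectory R."""
    return keep * np.asarray(T, float) + (1.0 - keep) * np.asarray(R, float)

print(np.round(update_habit([0, 30, 0, 20], [0, 40, 10, 0]), 6).tolist())
# [0.0, 31.0, 1.0, 18.0]
```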

10. The personalized motion recommendation method based on gradient strategy algorithm according to any one of claims 2-9, characterized in that: n is an integer multiple of 168.

Technical Field

The invention relates to the gradient strategy algorithm, feedforward network classifiers, and Markov decision processes in the fields of reinforcement learning and machine learning, and in particular to a self-training, self-adaptive dynamic personalized motion decision algorithm.

Background

Exercise is an important means of improving the body's basic activity levels and physical health. Human bodies differ physiologically in the exercise load they can bear, and different individuals respond differently to different exercises. Excessive or improper exercise easily causes physiological damage to exercisers and can even lead to sudden death during exercise. Among common activities, running is the most widespread daily exercise, and it is also the exercise that causes the most cases of cardiac arrest and sudden death among exercisers. Given the health risks and unsuitable exercise choices that may exist in running, it becomes important to achieve efficient, healthy running that conforms to an individual's exercise physiology and habits.

To solve the exercise risk problem described above and achieve healthy exercise, shifting from universal exercise to personalized exercise is becoming increasingly important. Universal exercise establishes general exercise standards or guidance and offers the same exercise guidance to populations with different physical conditions and different exercise abilities. This approach ignores the internal differences among exercisers and easily creates exercise risks. Personalized exercise abandons the universal guidance mode and replaces the one-size-fits-all guidance scheme with "one person, one plan". Specifically, the "Healthy China 2030" Planning Outline issued by the State Council in 2016 provides exercise guidance requiring that a pre-exercise safety evaluation and an exercise-capacity test evaluation be performed first, and that different exercise prescriptions then be provided for the differing exercise capacities of the population to reduce exercise risk. Applying personalized exercise-prescription recommendation combines sports and medical services, meets the public's growing demand for scientific guidance in exercise and fitness, lets physiological differences drive the rationality of exercise guidance, and realizes a guidance mode driven by wearable devices and the personalized data of exercisers.

Personalized exercise-prescription recommendation is a current research hotspot. Most existing research on personalized exercise prescriptions covers exercise-mode, exercise-time, and exercise-intensity recommendation based on collected physiological information. These recommendation methods adapt to the user's physiological information, analyze the exercise capacity of the population from physiological signals collected by wearable devices, and provide combined long- and short-term exercise-prescription recommendations for different goals, such as improving or maintaining exercise capacity: the long-term prescription plans a long-term goal, which is then allocated to concrete short-term implementations. However, such recommendation is mainly based on exercise-capacity planning and omits fine-grained prescription allocation based on the exercise habits of the population. Modern sports physiology shows that physical strength and the ability to perform a target exercise are influenced by the body's biological clock; that is, a user's optimal exercise ability and state are closely related to their daily exercise habits. Research on adapting personalized exercise to exercise habits is still largely blank. Work in this direction helps improve the precision of exercise-prescription recommendation, realizes fine-grained recommendation allocated according to short-term exercise habits, raises the degree of personalization of the prescription, and further reduces exercise risk.

Among personalized recommendation algorithms, decision algorithms based on reinforcement learning are widely applied. A decision algorithm treats a decision body as a decision-making brain that imitates the thinking pattern of a population and makes decisions or recommendations according to that pattern. Early studies of personalized sports recommendation used rigid recommendation modes that could not adapt to dynamic changes in the habits of exercisers, so the prescription recommendation model lagged behind the population's habits and their updates. Introducing an adaptive mechanism for an intelligent decision body realizes self-sampling learning and fast iterative updating of decisions, which has great application and research value for adaptation and allocation problems in recommendation.

Disclosure of Invention

To solve the problems that personalized exercise prescriptions neglect individual exercise habits and adapt poorly at fine granularity, the invention provides a habit-based personalized exercise adaptation method built on a gradient strategy decision algorithm, realizing habit-aware recommendation of exercise time slots and total exercise amount.

Individual exercise habits are embodied in exercise time and exercise intensity on a weekly basis, for example the exercise time periods, the duration of each session, and the individual's total exercise amount in one week; the exercise habit is represented as a time trajectory. Exercise-prescription recommendation likewise takes the week as its unit and recommends the exercise distribution by adapting to exercise habits. The decision body takes the remaining motion amount and the current time slot as input and performs self-training and motion decisions. Compared with other personalized exercise prescriptions, the method focuses on fine-grained habit adaptation and realizes exercise-time and exercise-intensity allocation driven by individual habits. The decision body performs trajectory self-sampling and self-training to achieve rapid few-sample adaptation, and completes real-time learning and adjustment of exercise habits through trajectory updates. Through the neural network, the decision body learns a motion pattern similar to the individual's habits and the motion-decision pattern at specific times, simulating the individual's exercise thinking to realize fine-grained decisions. The invention therefore comprises four main steps: 1. characterizing the motion-habit trajectory; 2. constructing a reinforced decision algorithm based on a gradient strategy and completing habit-based fine-grained recommendation of motion time slots and intensity; 3. adjusting in real time according to the actually completed motion amount; 4. evaluating and adjusting decisions based on changes in exercise habits.

In order to solve the problem related to the invention, the personalized motion recommendation method based on the gradient strategy decision algorithm adopts the following technical scheme:

1) representing a motion habit track;

1.1) representing time habits;

the motion time habit in the invention is represented as a vector trajectory comprising the individual's exercise durations and exercise time periods. The vector trajectory is divided into hourly time slots, which form a trajectory sequence D_{1:n} = {d_1, d_2, …, d_n} of length n hours in chronological order, where d_n represents the individual's average exercise time in the n-th hour slot, calculated by equation (1):

d_n = (1/m) Σ_{i=1}^{m} x_{i,n}    (1)

where m is the number of weeks over which the individual habit is computed, the m weeks of exercise data serving as the individual's exercise-time habit, and x_{i,n} is the exercise duration in the n-th hour slot of the i-th week. Serializing the per-slot mean durations of the m weeks of data yields the time-habit characterization, and the similarity of exercise-time habits can be measured on these trajectories.
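As an illustration, the per-slot averaging of equation (1) can be sketched as follows; the array shapes and toy data are assumptions, with n = 4 slots standing in for the 168 hourly slots of a full week:

```python
import numpy as np

def time_habit(weekly_minutes):
    """Equation (1): average m weeks of hourly exercise durations into the
    habit trajectory; entry [i, k] of weekly_minutes is the exercise
    duration x_{i,k} in hour slot k of week i."""
    return np.asarray(weekly_minutes, dtype=float).mean(axis=0)

# m = 2 weeks of a toy n = 4 slot trajectory (a full week uses n = 168)
weeks = [[0, 30, 0, 20],
         [0, 40, 0, 10]]
print(time_habit(weeks).tolist())  # [0.0, 35.0, 0.0, 15.0]
```

The intensity habit of the next subsection is computed the same way, with per-slot mean heart rate in place of duration.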

1.2) strength habit characterization;

the intensity habit characterization is similar to the time habit characterization, forming a trajectory sequence S_{1:n} = {s_1, s_2, …, s_n} of length n hours in chronological order, where s_n represents the individual's average exercise intensity in the n-th hour slot, calculated in the same way as equation (1):

s_n = (1/m) Σ_{i=1}^{m} x_{i,n}    (2)

where m is the number of weeks over which the individual habit is computed, the m weeks of exercise data serving as the individual's exercise-intensity habit, and x_{i,n} is the average exercise heart rate in the n-th hour of the i-th week. Heart rate is directly tied to intensity: the higher the heart rate, the heavier the individual's exercise load, so it can be taken to reflect the individual's exercise intensity directly. Serializing the per-slot means of the m weeks of data yields the intensity-habit characterization, and the similarity of exercise-intensity habits can be measured on these trajectories.

1.3) general characterization of motion

The overall habit is characterized by the total motion amount, distributed over time in units of one week and represented as a vector trajectory sequence. The total motion amount is the product of motion intensity and motion time:

T_{1:n} = {d_1 s_1, d_2 s_2, …, d_n s_n}    (3)

where T_{1:n} is the individual's habitual total-motion characterization vector, representing the distribution of the individual's motion amount within one week. Organized in trajectory form, this habit distribution serves as the reference trajectory for fine-grained recommendation and as the learning standard of the decision body, realizing motion-distribution adaptation and recommendation.
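The product of motion intensity and motion time described above is an elementwise product of the two habit trajectories; a minimal sketch with assumed toy values (durations in minutes, intensities as mean heart rate):

```python
import numpy as np

# Per-slot time habit d and intensity habit s (toy values)
d = np.array([0.0, 30.0, 0.0, 20.0])    # minutes, per-slot mean duration
s = np.array([0.0, 130.0, 0.0, 125.0])  # bpm, per-slot mean heart rate

T = d * s        # per-slot total motion amount trajectory
G = T.sum()      # weekly total, later used as the decision-machine input
print(T.tolist(), G)  # [0.0, 3900.0, 0.0, 2500.0] 6400.0
```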

2) Constructing a motion habit adaptation model based on an enhanced gradient strategy algorithm;

2.1) motion habit adaptation model overall architecture;

the motion habit adaptation model is mainly divided into two models: a self-training reinforced decision machine and a personalized recommendation model.

The self-training exercise configuration model is a decision center, which mainly learns detailed exercise decision characteristics from individual exercise total amount habits and realizes individual exercise recommendation in different exercise requirements and different time periods. The model simulates individual thinking modes aiming at movement and simulates the decision process of the individual, and a self-sampling and self-training recommendation decision model is formed. This mode reduces model requirement input and enables fast adaptation to individual motor habits.

And the personalized recommendation model completes the generation of the recommendation motion in a track distribution mode on the basis of the decision model. And the recommendation model implements decision making for each time point and implements subsequent decision making by changing motion distribution state in the decision making. The decision track is the recommended exercise prescription based on the exercise habit.

Overall, the model kernel is a decision machine based on a gradient strategy: the decision machine analyzes each fine-grained state and determines the exercise arrangement for that state. The decision machine learns exercise-arrangement patterns for different times and required motion amounts, and comprises two parts, decision-machine construction and sampling training. The personalized recommendation model then forms a motion-adaptation trajectory vector with the pre-trained decision machine: the decision machine determines the motion amount allocated at each time point and completes the overall trajectory configuration, forming a complete habit-based exercise recommendation scheme.

2.2) building a strengthened decision machine based on a gradient strategy;

the reinforced decision machine learns the individual exercise habits and simulates the decision idea of the individual in the exercise distribution to finish the rapid exercise adaptation. The enhanced decision machine defines a motion state and takes the motion state as an input. The motion state can be understood as a period position in the motion trail and the motion residual task amount, and is characterized by using a sequence. The current time interval is the position information of the decision time in the whole track sequence and is respectively marked as 0-n; the motion residual task amount is filled in with a numerical value, which represents the total number of assignable motion amounts in a decision time. Namely, the decision machine inputs a 1 × 2 state sequence, and the sequence contains sampling information of the human brain in the decision motion process. And the decision machine state sequence respectively adopts two same feedforward networks to complete the selection of the motion time and the motion intensity. The network comprises four layers, wherein the first layer is an input sequence; the second layer is an up-sampling full-connection layer containing 8 neurons, and the relu is used as an activation function to improve the nonlinear learning ability; the third layer is an up-sampling full-connection layer containing 16 neurons, and relu is also adopted as an activation function to improve the generalization learning capability of the model. The fourth layer is the output layer, contains 9 neurons and uses the softmax activation function to characterize the probability of selecting the time and intensity of the movement, respectively. 
For exercise-time selection, the World Health Organization's exercise-guideline recommendation of an average of 60 minutes of MVPA (moderate-to-vigorous physical activity) per day is discretized: each output neuron represents one of 8 exercise times from 20 to 60 minutes at 5-minute intervals, plus a zero value for assigning no exercise time. Exercise intensity is represented by heart rate: the range from the exercise onset value of 120 to the exercise warning heart rate of 160 for healthy people is divided at a minimum interval of 5 into 8 heart rates plus a zero value for assigning no exercise. On the basis of the remaining motion amount and the weekly time slot, the decision center learns the individual's exercise habits and allocation thinking, realizing adaptive exercise decisions.
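The two identical 2-8-16-9 feedforward heads can be sketched in plain NumPy as follows; the random weights, the exact bin values, and the sampling call are illustrative assumptions, not the trained decision machine:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class PolicyHead:
    """One of the two identical 2-8-16-9 feedforward heads (time or intensity)."""
    def __init__(self):
        self.W1 = rng.normal(0, 0.5, (2, 8));  self.b1 = np.zeros(8)
        self.W2 = rng.normal(0, 0.5, (8, 16)); self.b2 = np.zeros(16)
        self.W3 = rng.normal(0, 0.5, (16, 9)); self.b3 = np.zeros(9)

    def forward(self, state):
        # state = [time-slot position, remaining motion amount]
        h1 = relu(state @ self.W1 + self.b1)    # 8-neuron ReLU layer
        h2 = relu(h1 @ self.W2 + self.b2)       # 16-neuron ReLU layer
        return softmax(h2 @ self.W3 + self.b3)  # 9-way action probabilities

# Illustrative 9-way action bins: a zero "no exercise" value plus 5-unit steps
TIME_BINS = [0] + list(range(20, 60, 5))    # minutes: 0, 20, 25, ..., 55
HR_BINS   = [0] + list(range(120, 160, 5))  # bpm:     0, 120, 125, ..., 155

time_head, intensity_head = PolicyHead(), PolicyHead()
state = np.array([3.0, 250.0])  # slot 3, 250 units of motion still to assign
p_time = time_head.forward(state)
minutes = TIME_BINS[int(rng.choice(9, p=p_time))]
```

In training, the sampled actions would be scored against the habit trajectory and the weights updated by the gradient strategy; here only the forward pass and sampling are shown.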

2.3) personalized sports recommendation model

The core of the personalized motion recommendation model is a gradient strategy strengthening decision machine, the total amount of one-cycle training of an individual is input, the decision machine self-sampling training is carried out by a Markov decision process, a cycle motion recommendation table is output, and whether each time period moves or not, the movement intensity and the movement time are specified. The Markov decision process causes the decision machine to move in time sequence on the sampling sequence, the process sequence is S ═ { S }1,a1,s2,a2,…,sn,anIn which s isiRepresenting the state of the ith time node, i.e., the time node position and the total amount of motion remaining. And a isiAnd distributing a strategy for the motion generated by the decision machine according to the state information. Each policy assignment may change state information, affecting subsequent decisions, and execution of sequential decisions from the n states may generate a motion assignment for each time point. Different from the Markov decision process, the reward in the decision process is replaced by sampling distribution motion and individual habit motion errors, the mean square error of the distributed sampling sequence and the individual habit sequence can be calculated to calculate the training error of the decision machine, and if the sampling sequence does not finish the distribution of the total amount of work, the error is penalized to improve the ability of the decision machine to complete the distribution of all motion amounts, and the error calculation is defined by the following formula:

wherein loss is a loss function of parameter training, T represents an individual real habit sequence, P represents a sampling sequence, and G is the total input cycle movement. And the first half part of the loss function restricts the similarity of the distribution mode and the individual habit, and when the penalty is increased in the second half part, the decision machine is promoted to completely distribute the set target task amount of the individual. In the process, a central decision machine is repeatedly trained in a self-sampling mode, the sampling training frequency is set to be 500 times, the decision central in the 500 times of self-sampling learns the exercise habits and the exercise distribution modes of individuals, and the weekly target short-term exercise plan recommendation based on the exercise habits is completed.

3) Real-time adjustment of motion recommendations

The fixed individual exercise recommendation adaptation table cannot adapt to the problem of low coincidence degree of the prescription target planning and the actual exercise caused by weather, field and even physiological physical reasons existing in the exercise adaptation in real time. Aiming at the problems, the moving target real-time adjustment based on interactive feedback carries out weekly remaining planning adjustment according to the real moving target completion quantity, carries out planning sequence reconstruction and records the actual moving situation for the movement habit change.

Firstly, actual motion tracking is carried out according to a time sequence, and the feedback of the actual motion of the individual is waited at the motion recommendation time node and the supplement of the additional motion amount of the individual is received at the non-recommendation motion time node. When the motion amount change time node exists, the decision center reconstructs the decision state, takes the total amount of new residual motion as the node state and performs sequence sampling according to individual habits again, and the new sequence replaces the motion recommendation sequence after the motion amount change, so that real-time recommendation and updating can be realized according to the change of the actual motion condition.

4) Dynamic update of exercise habits

The exercise habit dynamic update realizes the dynamic adjustment of the habit record and the decision machine parameters in a week unit from the long-term habit and the short-term habit. The exercise habit updating needs to take the actual weekly completion amount of the individual in the exercise recommendation as the new individual habit and update the habit sequence. Taking actual exercise situation as new habit update easily takes accidental environmental factors as habit consideration, and the habit update state of an individual cannot be completely represented, so that weighting update needs to be performed on the basis of keeping partial long-term habits, and partial short-term habits are learned, and the calculation is as follows:

T1:n=0.9T1:n+0.1R1:n (5)

wherein R is1:nUpdating part of the short-term habits on the basis of keeping most of the long-term habits is completed by respectively setting the updating weights to 0.9 and 0.1 for the actual exercise amount sequence of the individual. And the motion habit representation is updated to adjust parameters of the decision machine through repeated sampling so as to realize dynamic update of the decision machine based on habit change.

The technical conception of the invention is as follows: firstly, a time period habit characterization vector is constructed aiming at the exercise habits of individuals in a week, and the vector characterizes the distribution of the exercise intensity and the exercise time of the individuals in different time nodes in the week. A decision machine based on a gradient strategy algorithm is then used to learn individual habits and decisions for motion distribution at different motion residuals and motion times. The decision machine realizes the parameter training of the motion decision machine by self-sampling motion recommendation sequence and minimizing the difference between the self-sampling sequence and the motion habit sequence. And finally, reconstructing a motion sequence by feeding back the real-time motion state of the individual to realize motion recommendation updating, and meanwhile, on the basis of keeping partial long-term habits, using the actual motion state as a short-term habit updating habit vector and a decision machine parameter to realize a self-adaptive motion recommendation algorithm based on individual habit change.

The advantages of the invention are as follows. First, the invention provides an intra-week habit characterization of the total exercise amount, representing individual exercise habits as sequence trajectories; this is of significant value as a reference for tasks such as exercise prescription recommendation. Second, the invention provides a decision machine over intra-week time nodes for personalized exercise time and exercise intensity decisions. Unlike a conventional distribution-learning neural network, it learns an exercise decision pattern based on individual habits and samples and updates decisions in a few-sample, self-training fashion, improving decision accuracy while reducing interaction and realizing personalized short-term exercise adaptation for the individual. Finally, the exercise recommendation algorithm supports dynamic recommendation adaptation and adaptive habit adjustment: it dynamically reallocates the remaining weekly target according to the individual's actual completion, and updates habits and corrects parameters from both the long-term habit and the actually completed short-term exercise. The invention therefore accounts for dynamic change and adaptive updating in both model architecture and model function, and can be effectively applied to recommendations such as running for different individuals.

Drawings

FIG. 1 is a flow chart of an implementation of the method of the present invention;

FIG. 2 is a detailed diagram of the adaptive decision machine of the present invention;

FIG. 3 shows the dynamic habit update process of the present invention;

The specific implementation of the method is as follows:

As shown in FIG. 1, the fully adaptive recommendation decision method comprises the following steps:

1) representing a motion habit track;

1.1) representing time habits;

The exercise time habit in the invention is represented in the form of a vector trajectory, covering the individual's exercise durations and exercise periods. The trajectory is divided into periods by hour, and the periods are arranged chronologically into a trajectory sequence of length n hours, t_{1:n} = {t_1, t_2, …, t_n}, where t_n denotes the individual's average exercise duration in the nth hour, calculated by equation (1):

t_n = (1/m) Σ_{i=1}^{m} x_{i,n}   (1)

where m is the number of weeks over which the habit is computed, so that m weeks of exercise data form the individual exercise time habit, and x_{i,n} is the exercise duration in the nth hour of the ith week. Serializing the mean durations of the m weeks of data yields the time habit characterization, and the trajectory can be used to measure the similarity of exercise time habits.

1.2) strength habit characterization;

The intensity habit characterization parallels the time habit characterization: the hourly periods form a trajectory sequence of length n hours, v_{1:n} = {v_1, v_2, …, v_n}, where v_n represents the individual's average exercise intensity in the nth hour, calculated by equation (2):

v_n = (1/m) Σ_{i=1}^{m} h_{i,n}   (2)

where m is the number of weeks over which the habit is computed, so that m weeks of exercise data form the individual exercise intensity habit, and h_{i,n} is the average exercise heart rate in the nth hour of the ith week. Heart rate is directly tied to intensity: the higher the heart rate, the greater the individual's exercise load, so heart rate can be regarded as a direct reflection of exercise intensity. Serializing the mean values of the m weeks of data yields the intensity habit characterization, and the trajectory can be used to measure the similarity of exercise intensity habits.

1.3) general characterization of motion

The overall motion habit is represented by the total exercise amount, distributed over time in units of one week and expressed as a vector trajectory sequence. The total amount is the product of the exercise intensity and the exercise time:

T_n = t_n · v_n   (3)

where T_{1:n} is the individual habit total-amount characterization vector, whose nth element T_n multiplies the average exercise time t_n and the average exercise intensity v_n of the nth hour. The vector represents the distribution of the individual's exercise amount over one week; organized in trajectory form, the habit distribution serves as the reference trajectory for fine-grained recommendation and as the learning standard of the decision body, realizing exercise distribution adaptation and recommendation.
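To make the three characterizations concrete, the following is a minimal NumPy sketch of equations (1)-(3); the function name and the toy data are illustrative, not from the original:

```python
import numpy as np

def habit_characterization(durations, heart_rates):
    """Compute habit trajectories from m weeks of hourly records.

    durations:   (m, n) array, exercise minutes in each of n hourly slots
    heart_rates: (m, n) array, average exercise heart rate per slot
    Returns (time_habit, intensity_habit, total_habit), each of length n.
    """
    t = durations.mean(axis=0)      # eq. (1): average duration per slot
    v = heart_rates.mean(axis=0)    # eq. (2): average intensity per slot
    total = t * v                   # eq. (3): total amount = time x intensity
    return t, v, total

# Example: m = 2 weeks, n = 4 hourly slots
dur = np.array([[30, 0, 20, 0],
                [40, 0, 30, 0]], dtype=float)
hr = np.array([[130, 0, 120, 0],
               [140, 0, 130, 0]], dtype=float)
t, v, total = habit_characterization(dur, hr)
print(t.tolist())      # [35.0, 0.0, 25.0, 0.0]
print(total.tolist())  # [4725.0, 0.0, 3125.0, 0.0]
```

Zero entries stay zero in the habit trajectory, so hours in which the individual never exercises naturally receive no habitual exercise amount.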

2) Constructing the motion habit adaptation model based on a reinforcement gradient strategy (policy gradient) algorithm;

2.1) motion habit adaptation model overall architecture;

The motion habit adaptation model consists mainly of two parts: a self-training reinforcement decision machine and a personalized recommendation model.

The self-training exercise configuration model is the decision center. It learns fine-grained exercise decision characteristics from the individual's total-exercise-amount habits and realizes individualized exercise recommendation under different exercise requirements and in different time periods. The model imitates the individual's way of thinking about exercise and simulates the individual's decision process, forming a self-sampling, self-training recommendation decision model. This mode reduces the input the model requires and enables fast adaptation to individual exercise habits.

On the basis of the decision model, the personalized recommendation model completes the generation of the recommended exercise in the form of a trajectory distribution. The recommendation model makes a decision at each time point and drives subsequent decisions by changing the motion distribution state. The resulting decision trajectory is the recommended exercise prescription based on the exercise habit.

In general, the model kernel is the gradient strategy decision machine, which analyzes each fine-grained state and determines the exercise arrangement for that state. Learning the exercise arrangement pattern for different times and required exercise amounts comprises two parts: decision machine construction and sampling training. The personalized recommendation model uses the pre-trained decision machine to form the motion adaptation trajectory vector: the decision machine determines the exercise amount allocated at each time point and completes the overall trajectory configuration, forming a complete habit-based exercise recommendation scheme.

2.2) building a strengthened decision machine based on a gradient strategy;

The reinforcement decision machine learns the individual's exercise habits and imitates the individual's decision-making in exercise allocation to achieve fast exercise adaptation. The decision machine defines a motion state and takes it as input. The motion state consists of the period position within the motion trajectory and the remaining exercise task amount, and is characterized as a sequence: the current period is the position of the decision moment within the whole trajectory sequence, indexed 0 to n, and the remaining task amount is a numeric value giving the total exercise amount still assignable at the decision moment. The decision machine therefore takes a 1 × 2 state sequence as input, which carries the information the human brain samples when deciding on exercise. From this state sequence, two identical feedforward networks separately perform the selection of exercise time and exercise intensity. Each network comprises four layers: the first layer is the input sequence; the second is an up-sampling fully connected layer of 8 neurons with ReLU activation to improve nonlinear learning ability; the third is an up-sampling fully connected layer of 16 neurons, also ReLU-activated, to improve the model's generalization ability; the fourth is the output layer of 9 neurons, whose softmax activation characterizes the selection probabilities of the exercise time and the exercise intensity respectively.
For exercise time selection, the World Health Organization exercise guideline recommendation of an average of 60 minutes of MVPA (moderate-to-vigorous physical activity) per day is discretized, so the output neurons represent 8 exercise durations from 20 to 60 minutes at 5-minute intervals, plus a zero value for periods with no exercise time assigned. Exercise intensity is characterized by heart rate: the range from the exercise onset value of 120 to the exercise warning heart rate of 160 for healthy people is divided, at a minimum interval of 5, into 8 heart rates plus a zero value for no assigned exercise, and these serve as the exercise intensities. On the basis of the remaining exercise amount and the position within the week, the decision center learns the individual's exercise habits and allocation thinking, realizing adaptive exercise decisions.
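The two selection networks described above can be sketched as follows in NumPy. The weights here are random and purely illustrative (the original does not publish its trained parameters); each network maps the 1 × 2 state of (period position, remaining task amount) through 8- and 16-neuron ReLU layers to a 9-way softmax over the discrete time (or intensity) choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

class SelectionNet:
    """Four-layer feedforward net: 2 -> 8 -> 16 -> 9 (softmax)."""
    def __init__(self):
        self.W1 = rng.normal(0, 0.1, (2, 8));  self.b1 = np.zeros(8)
        self.W2 = rng.normal(0, 0.1, (8, 16)); self.b2 = np.zeros(16)
        self.W3 = rng.normal(0, 0.1, (16, 9)); self.b3 = np.zeros(9)

    def forward(self, state):
        h1 = relu(state @ self.W1 + self.b1)
        h2 = relu(h1 @ self.W2 + self.b2)
        return softmax(h2 @ self.W3 + self.b3)

# One net selects the duration, an identically shaped net the intensity.
time_net, intensity_net = SelectionNet(), SelectionNet()
state = np.array([3.0, 5000.0])   # (period index, remaining exercise amount)
p_time = time_net.forward(state)  # 9 probabilities: zero value + 8 durations
print(p_time.shape, round(p_time.sum(), 6))  # (9,) 1.0
```

In training, the 9 output probabilities would be sampled to pick one of the 8 discrete values or the zero (no-exercise) value for the current time node.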

2.3) personalized sports recommendation model

The core of the personalized exercise recommendation model is the gradient strategy reinforcement decision machine. Given the individual's total exercise amount for one week as input, the decision machine performs self-sampling training through a Markov decision process and outputs a weekly exercise recommendation table specifying, for each time period, whether to exercise and with what intensity and duration. The Markov decision process moves the decision machine along the sampling sequence in time order; the process sequence is S = {s_1, a_1, s_2, a_2, …, s_n, a_n}, where s_i is the state of the ith time node, i.e., the node position and the total remaining exercise amount, and a_i is the exercise allocation action generated by the decision machine from that state information. Each allocation changes the state information and thereby affects subsequent decisions; executing the decisions sequentially over the n states generates an exercise allocation for every time point. Unlike the standard Markov decision process, the reward is replaced by the error between the sampled allocation and the individual's habitual exercise: the training error of the decision machine is the mean square error between the sampled allocation sequence and the individual habit sequence, with an additional penalty when the sampled sequence fails to allocate the full task amount, which strengthens the decision machine's ability to distribute the entire exercise amount. The error is defined by the following formula:

loss = (1/n) Σ_{i=1}^{n} (T_i − P_i)² + (G − Σ_{i=1}^{n} P_i)²   (4)

where loss is the parameter-training loss, T is the individual's real habit sequence, P is the sampled sequence, and G is the input total weekly exercise amount. The first term of the loss constrains the allocation pattern to resemble the individual habit, while the penalty in the second term drives the decision machine to allocate the individual's set target task amount completely. The decision machine is trained repeatedly in this self-sampling manner, with the number of sampling training iterations set to 500; over these 500 rounds of self-sampling, the decision center learns the individual's exercise habits and allocation patterns, completing the weekly-target, habit-based short-term exercise plan recommendation.
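The self-sampling training objective — habit-matching MSE plus a completeness penalty — can be written as a short function. This is a sketch: the original describes the penalty only qualitatively, so the squared form of the second term and the absence of a weighting factor are assumptions:

```python
import numpy as np

def decision_loss(habit_seq, sampled_seq, total_target):
    """Loss sketch: MSE against the habit trajectory plus a penalty
    when the sampled sequence leaves part of the weekly target
    G unallocated (assumed squared penalty, unweighted)."""
    T = np.asarray(habit_seq, dtype=float)
    P = np.asarray(sampled_seq, dtype=float)
    mse = np.mean((T - P) ** 2)                 # match the individual habit
    penalty = (total_target - P.sum()) ** 2     # allocate the full amount
    return mse + penalty

habit = np.array([10., 0., 20., 10.])
full = np.array([12., 0., 18., 10.])   # allocates all of G = 40
short = np.array([12., 0., 18., 0.])   # leaves 10 unallocated
print(decision_loss(habit, full, 40.0), decision_loss(habit, short, 40.0))
# 2.0 127.0
```

A sampled sequence that matches the habit but under-allocates is penalized far more heavily than one that deviates slightly while spending the whole budget, which is the behavior the second loss term exists to enforce.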

3) Real-time adjustment of motion recommendations

A fixed individual exercise recommendation table cannot adapt in real time: weather, venue, and even physiological reasons lead to low agreement between the prescribed target plan and the actual exercise. To address this, the real-time exercise target adjustment based on interactive feedback rebalances the remaining weekly plan according to the actually completed exercise amount, reconstructs the planning sequence, and records the actual exercise for later habit updating.

First, actual exercise is tracked in time order: at recommended exercise time nodes the system waits for feedback on the individual's actual exercise, and at non-recommended time nodes it accepts any additional exercise the individual reports. When a time node changes the exercise amount, the decision center reconstructs the decision state, takes the new total remaining exercise as the node state, and re-samples the sequence according to the individual's habits; the new sequence replaces the recommendation sequence after the change point, so recommendations are updated in real time as the actual exercise situation changes.
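The reconstruction step can be sketched as follows. This is illustrative: `sample_sequence` stands in for the decision machine's habit-conditioned sampling, which the original performs with the trained network; the toy even-spread sampler merely shows the interface:

```python
def rebuild_plan(plan, actual, k, weekly_target, sample_sequence):
    """Replace the recommendation after node k once actual amounts
    for nodes 0..k are known.

    plan:    list of planned exercise amounts per time node
    actual:  list of actually completed amounts for nodes 0..k
    sample_sequence(start, remaining): habit-based sampler (assumed)
    """
    done = sum(actual[: k + 1])
    remaining = max(weekly_target - done, 0)   # new node state
    tail = sample_sequence(k + 1, remaining)   # re-sample the tail
    return list(actual[: k + 1]) + list(tail)

# Toy sampler: spread the remainder evenly over the leftover nodes.
def even_sampler(n_total):
    def sample(start, remaining):
        slots = n_total - start
        return [remaining / slots] * slots
    return sample

plan = [10, 10, 10, 10]
new_plan = rebuild_plan(plan, actual=[10, 4], k=1,
                        weekly_target=40, sample_sequence=even_sampler(4))
print(new_plan)  # [10, 4, 13.0, 13.0]
```

Here the shortfall at node 1 (4 instead of 10) is absorbed by the remaining nodes, mirroring the text's replacement of the recommendation sequence after the change point.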

4) Dynamic update of exercise habits

The dynamic exercise habit update adjusts the habit record and the decision machine parameters week by week, drawing on both the long-term and the short-term habit. The update takes the individual's actual weekly completion during exercise recommendation as the new individual habit and updates the habit sequence. Using the actual exercise directly as the new habit, however, easily absorbs accidental environmental factors into the habit and cannot fully represent the individual's habit state. The update is therefore weighted: most of the long-term habit is retained while part of the short-term habit is learned, computed as:

T_{1:n} = 0.9·T_{1:n} + 0.1·R_{1:n}   (5)

where R_{1:n} is the individual's actual exercise amount sequence; with the update weights set to 0.9 and 0.1 respectively, part of the short-term habit is learned while most of the long-term habit is retained. The updated motion habit representation then adjusts the decision machine parameters through repeated sampling, realizing dynamic updates of the decision machine driven by habit change.

In actual evaluation, n can be an integer multiple of 168 (the number of hours in a week); one month of sampling is generally taken as the time base.

The technical conception of the invention is as follows. First, a time-period habit characterization vector is constructed for the individual's weekly exercise habits; the vector characterizes the distribution of the individual's exercise intensity and exercise time over the time nodes of the week. A decision machine based on the gradient strategy (policy gradient) algorithm then learns the individual's habits and allocation decisions for different remaining exercise amounts and exercise times. The decision machine trains its parameters by self-sampling exercise recommendation sequences and minimizing the difference between the self-sampled sequence and the exercise habit sequence. Finally, feedback on the individual's real-time exercise state triggers reconstruction of the exercise sequence to update the recommendation; meanwhile, while part of the long-term habit is retained, the actual exercise is used as the short-term habit to update the habit vector and the decision machine parameters, realizing an adaptive exercise recommendation algorithm driven by changes in individual habits.
