Robot motion analysis method and device, readable storage medium and robot

Document No.: 1808204    Publication date: 2021-11-09

Reading note: This invention, "Robot motion analysis method and device, readable storage medium and robot" (一种机器人运动分析方法、装置、可读存储介质及机器人), was designed and created by 白杰, 葛利刚, 陈春玉, 刘益彰, 罗秋月 and 周江琛 on 2021-07-20. Its main content is as follows: The application belongs to the technical field of robots, and particularly relates to a robot motion analysis method and device, a computer-readable storage medium and a robot. The method comprises: acquiring a first joint angle of a target joint of the robot in a parallel configuration; and processing the first joint angle by using a preset forward kinematics analysis model to obtain a second joint angle of the target joint in a serial configuration. The forward kinematics analysis model is a deep learning model obtained by training on a preset first training sample set, and the first training sample set is constructed according to an inverse kinematics analysis process. Through the application, the forward kinematics analysis process is performed by a deep learning model, which effectively reduces the computational complexity compared with the existing numerical calculation method.

1. A robot motion analysis method, comprising:

acquiring a first joint angle of a target joint of the robot in a parallel configuration;

processing the first joint angle by using a preset forward kinematics analysis model to obtain a second joint angle of the target joint in a serial configuration; the forward kinematics analysis model is a deep learning model obtained by training on a preset first training sample set, and the first training sample set is constructed according to an inverse kinematics analysis process.

2. The robot motion analysis method according to claim 1, further comprising:

acquiring joint motion parameters of the target joint;

processing the joint motion parameters of the target joint by using a preset inverse dynamics analysis model to obtain the driving moment of the target joint; the inverse dynamics analysis model is a deep learning model obtained by training on a preset second training sample set, and the second training sample set is constructed according to a forward dynamics analysis process.

3. The robot motion analysis method according to claim 1, further comprising, before processing the first joint angle using the preset forward kinematics analysis model:

determining a range of motion of the target joint in a serial configuration;

selecting a first number of serial joint angles in the range of motion;

calculating the parallel joint angles corresponding to each serial joint angle according to the inverse kinematics analysis process;

constructing the first training sample set; the first training sample set comprises a first number of training samples, and each training sample comprises a group of serial joint angles and corresponding parallel joint angles;

and training a deep learning model in an initial state by using the first training sample set, and taking the trained deep learning model as the forward kinematics analysis model.

4. The robot motion analysis method according to claim 3, wherein the deep learning model is a generative adversarial network model including a first generator and a first discriminator;

the training of the deep learning model in the initial state using the first training sample set comprises:

for each training sample in the first training sample set, processing the parallel joint angle of the sample by using the first generator to obtain a first generation result;

and performing a model training process by using the first discriminator according to the first generation result and the serial joint angle of the sample.

5. The robot motion analysis method according to claim 2, further comprising, before processing the joint motion parameters of the target joint using the preset inverse dynamics analysis model:

acquiring a motion track record of the target joint;

selecting a second number of motion track points in the motion track record, wherein each motion track point comprises a driving moment, a joint speed and a joint acceleration;

calculating the joint angles corresponding to each motion track point according to the forward dynamics analysis process;

constructing the second training sample set; the second training sample set comprises a second number of training samples, and each training sample comprises a group of driving moments and corresponding joint motion parameters; the joint motion parameters comprise joint angle, joint speed and joint acceleration;

and training the deep learning model in the initial state by using the second training sample set, and taking the trained deep learning model as the inverse dynamics analysis model.

6. The robot motion analysis method according to claim 5, wherein the deep learning model is a generative adversarial network model including a second generator and a second discriminator;

the training of the deep learning model in the initial state using the second training sample set comprises:

for each training sample in the second training sample set, processing the joint motion parameters of the sample by using the second generator to obtain a second generation result;

and performing a model training process by using the second discriminator according to the second generation result and the driving moment of the sample.

7. The robot motion analysis method according to any one of claims 1 to 6, wherein processing the first joint angle using the preset forward kinematics analysis model to obtain the second joint angle of the target joint in a serial configuration comprises:

and inputting the first joint angle into the forward kinematics analysis model for processing, and taking the output of the model as the second joint angle.

8. A robot motion analysis apparatus, comprising:

the first joint angle acquisition module is used for acquiring a first joint angle of a target joint of the robot in a parallel configuration;

the forward kinematics analysis module is used for processing the first joint angle by using a preset forward kinematics analysis model to obtain a second joint angle of the target joint in a serial configuration; the forward kinematics analysis model is a deep learning model obtained by training on a preset first training sample set, and the first training sample set is constructed according to an inverse kinematics analysis process.

9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the robot motion analysis method according to any one of claims 1 to 7.

10. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, carries out the steps of the robot motion analysis method according to any of claims 1 to 7.

Technical Field

The application belongs to the technical field of robots, and particularly relates to a robot motion analysis method and device, a computer-readable storage medium and a robot.

Background

A key point in the research of humanoid robots is the control of the parallel configuration. For a parallel configuration in a robot, an analytic solution of the inverse kinematics can easily be derived directly through geometry or the DH method, but the forward kinematics of the parallel configuration is generally calculated by a numerical method: based on the Jacobian matrix, the solution is iteratively approximated through the Newton-Raphson method, which has high computational complexity.
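To make the cost of the numerical route concrete, the Jacobian-based Newton-Raphson iteration described above can be sketched in one dimension. This is a minimal illustrative stand-in, not the robot's actual kinematics; the link length `L`, the sine constraint and all numbers are assumptions:

```python
import math

def newton_raphson(g, dg, target, q0, tol=1e-10, max_iter=50):
    """Iteratively solve g(q) = target, mimicking Jacobian-based
    numerical forward kinematics in one dimension."""
    q = q0
    for _ in range(max_iter):
        err = g(q) - target
        if abs(err) < tol:
            break
        q -= err / dg(q)  # Newton step: q <- q - J^{-1} * error
    return q

# hypothetical 1-DOF constraint: link end height y = L * sin(q)
L = 0.3
q = newton_raphson(lambda a: L * math.sin(a), lambda a: L * math.cos(a),
                   target=0.15, q0=0.1)
```

Every query repeats this loop, whereas the learned model described below answers with a single forward pass.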

Disclosure of Invention

In view of this, embodiments of the present application provide a robot motion analysis method, an apparatus, a computer-readable storage medium, and a robot, so as to solve the problem that the existing forward kinematics analysis process has high computational complexity.

A first aspect of an embodiment of the present application provides a robot motion analysis method, which may include:

acquiring a first joint angle of a target joint of the robot in a parallel configuration;

processing the first joint angle by using a preset forward kinematics analysis model to obtain a second joint angle of the target joint in a serial configuration; the forward kinematics analysis model is a deep learning model obtained by training on a preset first training sample set, and the first training sample set is constructed according to an inverse kinematics analysis process.

In a specific implementation of the first aspect, the robot motion analysis method may further include:

acquiring joint motion parameters of the target joint;

processing the joint motion parameters of the target joint by using a preset inverse dynamics analysis model to obtain the driving moment of the target joint; the inverse dynamics analysis model is a deep learning model obtained by training on a preset second training sample set, and the second training sample set is constructed according to a forward dynamics analysis process.

In a specific implementation of the first aspect, before processing the first joint angle using the preset forward kinematics analysis model, the robot motion analysis method may further include:

determining a range of motion of the target joint in a serial configuration;

selecting a first number of serial joint angles in the range of motion;

calculating the parallel joint angles corresponding to each serial joint angle according to the inverse kinematics analysis process;

constructing the first training sample set; the first training sample set comprises a first number of training samples, and each training sample comprises a group of serial joint angles and corresponding parallel joint angles;

and training a deep learning model in an initial state by using the first training sample set, and taking the trained deep learning model as the forward kinematics analysis model.

In a specific implementation of the first aspect, the deep learning model may be a generative adversarial network model including a first generator and a first discriminator;

the training of the deep learning model in the initial state using the first training sample set may include:

for each training sample in the first training sample set, processing the parallel joint angle of the sample by using the first generator to obtain a first generation result;

and performing a model training process by using the first discriminator according to the first generation result and the serial joint angle of the sample.

In a specific implementation of the first aspect, before processing the joint motion parameters of the target joint using a preset inverse dynamics analysis model, the robot motion analysis method may further include:

acquiring a motion track record of the target joint;

selecting a second number of motion track points in the motion track record, wherein each motion track point comprises a driving moment, a joint speed and a joint acceleration;

calculating the joint angles corresponding to each motion track point according to the forward dynamics analysis process;

constructing the second training sample set; the second training sample set comprises a second number of training samples, and each training sample comprises a group of driving moments and corresponding joint motion parameters; the joint motion parameters comprise joint angle, joint speed and joint acceleration;

and training the deep learning model in the initial state by using the second training sample set, and taking the trained deep learning model as the inverse dynamics analysis model.

In a specific implementation of the first aspect, the deep learning model is a generative adversarial network model including a second generator and a second discriminator;

the training of the deep learning model in the initial state using the second training sample set may include:

for each training sample in the second training sample set, processing the joint motion parameters of the sample by using the second generator to obtain a second generation result;

and performing a model training process by using the second discriminator according to the second generation result and the driving moment of the sample.

In a specific implementation of the first aspect, processing the first joint angle using the preset forward kinematics analysis model to obtain the second joint angle of the target joint in a serial configuration may include:

and inputting the first joint angle into the forward kinematics analysis model for processing, and taking the output of the model as the second joint angle.

A second aspect of an embodiment of the present application provides a robot motion analysis apparatus, which may include:

the first joint angle acquisition module is used for acquiring a first joint angle of a target joint of the robot in a parallel configuration;

the forward kinematics analysis module is used for processing the first joint angle by using a preset forward kinematics analysis model to obtain a second joint angle of the target joint in a serial configuration; the forward kinematics analysis model is a deep learning model obtained by training on a preset first training sample set, and the first training sample set is constructed according to an inverse kinematics analysis process.

In a specific implementation of the second aspect, the robot motion analysis apparatus may further include:

the joint motion parameter acquisition module is used for acquiring the joint motion parameters of the target joint;

the inverse dynamics analysis module is used for processing the joint motion parameters of the target joint by using a preset inverse dynamics analysis model to obtain the driving moment of the target joint; the inverse dynamics analysis model is a deep learning model obtained by training on a preset second training sample set, and the second training sample set is constructed according to a forward dynamics analysis process.

In a specific implementation of the second aspect, the robot motion analysis apparatus may further include:

a range of motion determination module to determine a range of motion of the target joint in a serial configuration;

a serial joint angle selection module for selecting a first number of serial joint angles in the range of motion;

the inverse kinematics analysis module is used for calculating parallel joint angles corresponding to the serial joint angles according to the inverse kinematics analysis process;

a first training sample set constructing module, configured to construct the first training sample set; the first training sample set comprises a first number of training samples, and each training sample comprises a group of serial joint angles and corresponding parallel joint angles;

and the forward kinematics analysis model training module is used for training the deep learning model in the initial state by using the first training sample set and taking the trained deep learning model as the forward kinematics analysis model.

In a specific implementation of the second aspect, the deep learning model is a generative adversarial network model including a first generator and a first discriminator;

the forward kinematics analysis model training module may include:

the first generator processing unit is used for processing the parallel joint angle of each training sample in the first training sample set by using the first generator to obtain a first generation result;

and the first discriminator processing unit is used for carrying out a model training process by using the first discriminator according to the first generation result and the serial joint angle of the sample.

In a specific implementation of the second aspect, the robot motion analysis apparatus may further include:

the motion track record acquisition module is used for acquiring the motion track record of the target joint;

the motion track point selecting module is used for selecting a second number of motion track points in the motion track record, wherein each motion track point comprises a driving moment, a joint speed and a joint acceleration;

the forward dynamics analysis module is used for calculating the joint angles corresponding to each motion track point according to the forward dynamics analysis process;

the second training sample set constructing module is used for constructing the second training sample set; the second training sample set comprises a second number of training samples, and each training sample comprises a group of driving moments and corresponding joint motion parameters; the joint motion parameters comprise joint angle, joint speed and joint acceleration;

and the inverse dynamics analysis model training module is used for training the deep learning model in the initial state by using the second training sample set, and taking the deep learning model after training as the inverse dynamics analysis model.

In a specific implementation of the second aspect, the deep learning model is a generative adversarial network model including a second generator and a second discriminator;

the inverse dynamics analysis model training module may include:

the second generator processing unit is used for processing the joint motion parameters of each training sample in the second training sample set by using the second generator to obtain a second generation result;

and the second discriminator processing unit is used for carrying out a model training process by using the second discriminator according to the second generation result and the driving moment of the sample.

In a specific implementation of the second aspect, the forward kinematics analysis module is specifically configured to input the first joint angle into the forward kinematics analysis model for processing, and to take the output of the model as the second joint angle.

A third aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of any one of the robot motion analysis methods described above.

A fourth aspect of the embodiments of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of any one of the robot motion analysis methods when executing the computer program.

A fifth aspect of embodiments of the present application provides a computer program product, which, when run on a robot, causes the robot to perform the steps of any of the robot motion analysis methods described above.

Compared with the prior art, the embodiment of the application has the following advantages: a first joint angle of a target joint of the robot in a parallel configuration is acquired; the first joint angle is processed by using a preset forward kinematics analysis model to obtain a second joint angle of the target joint in a serial configuration; the forward kinematics analysis model is a deep learning model obtained by training on a preset first training sample set, and the first training sample set is constructed according to an inverse kinematics analysis process. In the embodiment of the application, the forward kinematics analysis process is performed by a deep learning model, which effectively reduces the computational complexity compared with the existing numerical calculation method.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a schematic diagram of a coordinate system used in an embodiment of the present application;

FIG. 2 is a diagram showing the correspondence between coordinate axes and rotational directions;

FIG. 3 is a schematic view of a parallel mechanism;

FIG. 4 is a schematic view of a knee-ankle parallel mechanism;

FIG. 5 is a schematic diagram of the forward and inverse kinematics analysis processes;

FIG. 6 is a schematic diagram of the generative adversarial network model;

FIG. 7 is a schematic flow diagram of the process for constructing the forward kinematics analysis model;

FIG. 8 is a schematic flow diagram of the forward kinematics analysis process;

FIG. 9 is a schematic diagram of the forward and inverse dynamics analysis processes;

FIG. 10 is a schematic flow diagram of a process for constructing an inverse dynamics analysis model;

FIG. 11 is a schematic flow diagram of an inverse dynamics analysis process;

fig. 12 is a structural view of an embodiment of a robot motion analysis apparatus according to an embodiment of the present application;

fig. 13 is a schematic block diagram of a robot in an embodiment of the present application.

Detailed Description

In order to make the objects, features and advantages of the present application more apparent and understandable, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. Obviously, the embodiments described below are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.

It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.

As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".

In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.

In the embodiment of the present application, the global coordinate system Σ_w shown in fig. 1 may be established first. In this coordinate system, the forward direction of the robot is the x-axis, the lateral direction is the y-axis, and the vertical direction is the z-axis.

FIG. 2 shows the correspondence between the coordinate axes and the rotation directions: the rotation about the x-axis is r_x, denoted the roll angle; the rotation about the y-axis is r_y, denoted the pitch angle; and the rotation about the z-axis is r_z, denoted the yaw angle.
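The three rotation directions correspond to the standard elementary rotation matrices about x, y and z. A minimal sketch (plain Python lists are used here purely for illustration; any matrix library would do):

```python
import math

def rot_x(rx):
    # roll: rotation about the x-axis by angle rx
    c, s = math.cos(rx), math.sin(rx)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(ry):
    # pitch: rotation about the y-axis by angle ry
    c, s = math.cos(ry), math.sin(ry)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(rz):
    # yaw: rotation about the z-axis by angle rz
    c, s = math.cos(rz), math.sin(rz)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

# a yaw of 90 degrees maps the x-axis onto the y-axis
R = rot_z(math.pi / 2)
```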

In general, each leg of the robot may include a hip joint, a knee joint and an ankle joint. Each joint has a local coordinate system whose initial state coincides with the global coordinate system. In the serial configuration, the left-leg hip joint H1 and the right-leg hip joint H2 each have three degrees of freedom and can rotate about the x-, y- and z-axes of their local coordinate systems through three rotary servos; the left-leg knee joint K1 and the right-leg knee joint K2 each have one degree of freedom and can rotate about the y-axis of their local coordinate systems through one rotary servo; and the left-leg ankle joint A1 and the right-leg ankle joint A2 each have two degrees of freedom and can rotate about the x- and y-axes of their local coordinate systems through two rotary servos.

In the parallel configuration, a hip-knee parallel mechanism (left drawing) or a knee-ankle parallel mechanism (right drawing) as shown in fig. 3 may be provided, in which the numbers indicate degrees of freedom of the joints, and the description below will be given by taking the knee-ankle parallel mechanism as an example, and the hip-knee parallel mechanism will be similar thereto. In the knee-ankle parallel mechanism shown in fig. 4, the hip joint and the knee joint are the same as in the case of the serial configuration, and a local coordinate system whose initial state coincides with the global coordinate system is established at the ankle joint (O).

The robot models generally used at present are based on the parallel configuration, but the control algorithms are based on the serial configuration, so the embodiment of the application first establishes the equivalence relation from the parallel configuration to the serial configuration.

Taking a single leg as an example, the joint angle in the parallel configuration can be expressed as:

θ = (θ1, θ2, θ3, θ4, θ5, θ6)^T

where θ1, θ2, θ3 are the joint angles of the hip joint in its three degrees of freedom, θ4 is the joint angle of the knee joint in its one degree of freedom, and θ5, θ6 are the joint angles of the ankle joint in its two degrees of freedom.

Accordingly, the joint angle in the serial configuration may be expressed as:

q = (q1, q2, q3, q4, q5, q6)^T

where q1, q2, q3 are the joint angles of the hip joint in its three degrees of freedom, q4 is the joint angle of the knee joint in its one degree of freedom, and q5, q6 are the joint angles of the ankle joint in its two degrees of freedom. In existing robot control algorithms, q can be calculated from the waist pose p_torso, R_torso and the foot pose p_foot, R_foot.

In fact, for the hip joint, θ1 = q1, θ2 = q2, θ3 = q3; for the knee joint, θ4 = q4; but for the ankle joint, θ5, θ6 and q5, q6 are not the same.

In the embodiment of the present application, as shown in fig. 5, the process of solving q5, q6 from θ5, θ6 is taken as the forward kinematics analysis process, and the process of solving θ5, θ6 from q5, q6 is taken as the inverse kinematics analysis process.

In the embodiment of the present application, a deep learning model may be used to perform the forward kinematics analysis process. The specific type of deep learning model may be set according to the actual situation; a generative adversarial network (GAN) model is preferably used here. Given a batch of samples, a generative adversarial network can be trained to generate similar samples, thereby alleviating the problem of insufficient training data.

FIG. 6 is a schematic diagram of the generative adversarial network model, which may include a generator G and a discriminator D. The generator G is trained to map a low-dimensional latent vector z (z ~ p_z(z), sampled independently and identically distributed) to the real data x, producing G(z); the discriminator D is trained to distinguish whether data comes from the real data x (x ~ p_data(x)) or is data G(z) produced by the generator. The generative adversarial network adjusts the generator G and the discriminator D through an optimization process with the following objective function:

min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))]
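The GAN value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))] can be estimated from samples by Monte-Carlo averaging. A minimal sketch; the constant discriminator, identity generator and sample values below are toy assumptions, not the application's networks:

```python
import math

def gan_value(D, G, real_xs, zs):
    """Monte-Carlo estimate of the GAN objective
    V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    real_term = sum(math.log(D(x)) for x in real_xs) / len(real_xs)
    fake_term = sum(math.log(1.0 - D(G(z))) for z in zs) / len(zs)
    return real_term + fake_term

# toy case: a maximally uncertain discriminator (D = 0.5 everywhere)
# gives V = log(1/2) + log(1/2) = log(1/4)
D = lambda x: 0.5
G = lambda z: z
v = gan_value(D, G, real_xs=[0.1, 0.2], zs=[0.3, 0.4])
```

The generator is trained to drive this value down while the discriminator is trained to drive it up.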

as shown in fig. 7, in a specific implementation of the embodiment of the present application, the construction process of the positive kinematic analysis model may include the following steps:

and step S701, determining the motion range of the target joint in the serial configuration.

Taking the ankle joint as the target joint as an example, the motion range can be written as q5 ∈ [q5min, q5max], q6 ∈ [q6min, q6max], where q5min, q5max, q6min and q6max are preset thresholds whose specific values can be set according to the actual situation.

Step S702, a first number of serial joint angles are selected in the motion range.

One value of q5 and one value of q6 are selected in the motion range; together, the two values form one serial joint angle. The specific value of the first number can be set according to the actual situation; generally, to ensure the accuracy of the trained model, as many serial joint angles as possible should be collected.

When sampling, different sampling schemes can be adopted according to the actual situation, including but not limited to random sampling and uniform sampling.
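The two sampling schemes mentioned here can be sketched as follows. The ankle-pitch limits of ±0.8 rad are an illustrative assumption, not values from the application:

```python
import random

def uniform_samples(qmin, qmax, n):
    # n evenly spaced joint angles over [qmin, qmax]
    step = (qmax - qmin) / (n - 1)
    return [qmin + i * step for i in range(n)]

def random_samples(qmin, qmax, n, seed=0):
    # n independent uniformly random joint angles over [qmin, qmax]
    rng = random.Random(seed)
    return [rng.uniform(qmin, qmax) for _ in range(n)]

# hypothetical ankle-pitch limits of +/- 0.8 rad
q5_grid = uniform_samples(-0.8, 0.8, 9)
q5_rand = random_samples(-0.8, 0.8, 9)
```

Uniform sampling guarantees coverage of the whole range; random sampling avoids aligning the training set with a fixed grid.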

And step S703, calculating parallel joint angles corresponding to the serial joint angles according to the inverse kinematics analysis process.

For the inverse kinematics analysis process that solves the parallel joint angles θ5, θ6 from the serial joint angles q5, q6, any inverse kinematics analysis method in the prior art can be selected according to the actual situation; it is not described in detail in this embodiment.

And step S704, constructing a first training sample set.

The first set of training samples includes a first number of training samples, each training sample including a set of serial joint angles and corresponding parallel joint angles.

Step S705, training the deep learning model in the initial state by using the first training sample set, and taking the trained deep learning model as the forward kinematics analysis model.

A generative adversarial network model, which may include a first generator and a first discriminator, is preferably employed here. In the training process, for each training sample in the first training sample set, the first generator processes the parallel joint angle of the sample to obtain a first generation result; the first discriminator then performs the model training process according to the first generation result and the serial joint angle of the sample, finally yielding the trained forward kinematics analysis model.
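Putting steps S701-S704 together, the construction of the first training sample set can be sketched as below. The stand-in inverse kinematics function is purely illustrative (a linear toy mapping, not the real knee-ankle geometry), and the joint ranges are assumptions:

```python
import random

def build_fk_training_set(inverse_kinematics, q_ranges, n, seed=0):
    """Steps S701-S704: sample serial joint angles inside their motion
    range, run the analytic inverse kinematics to get the matching
    parallel joint angles, and store each (parallel, serial) pair."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        q = tuple(rng.uniform(lo, hi) for lo, hi in q_ranges)  # serial
        theta = inverse_kinematics(q)                          # parallel
        samples.append((theta, q))  # model input: theta; target: q
    return samples

# illustrative stand-in IK only: theta = (q5 + q6, q5 - q6)
toy_ik = lambda q: (q[0] + q[1], q[0] - q[1])
data = build_fk_training_set(toy_ik, [(-0.8, 0.8), (-0.5, 0.5)], n=100)
```

Because the analytic inverse kinematics is cheap, arbitrarily many labeled pairs can be generated this way before training begins.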

After the forward kinematics analysis model is obtained by training, the forward kinematics analysis of the robot can be performed by the process shown in fig. 8:

step S801, acquiring a first joint angle of a target joint of the robot in a parallel configuration.

And step S802, processing the first joint angle by using the positive kinematics analysis model to obtain a second joint angle of the target joint in a serial configuration.

Specifically, the first joint angle may be input into the positive kinematics analysis model for processing, and the output of the positive kinematics analysis model may be used as the second joint angle.

In the embodiment of the application, the forward kinematics analysis process is performed by using the deep learning model, and compared with the existing numerical method calculation method, the calculation complexity is effectively reduced.

As shown in FIG. 9, in one implementation of the embodiments of the present application, the process of solving the joint angle θ according to the driving torque τ, the joint velocity θ̇ and the joint acceleration θ̈ may also be taken as the positive dynamics analysis process, and the process of solving the driving torque τ of the joint according to the joint angle θ, the joint velocity θ̇ and the joint acceleration θ̈ may be taken as the inverse dynamics analysis process.
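The relation between the driving torque and the joint motion parameters referenced here is commonly written in the standard rigid-body form (a general robotics identity, not taken from the source):

τ = M(θ)·θ̈ + C(θ, θ̇)·θ̇ + g(θ)

where M is the joint-space inertia matrix, C collects the Coriolis and centrifugal terms, and g is the gravity term. Inverse dynamics evaluates the right-hand side directly from (θ, θ̇, θ̈), while the positive dynamics direction solves this equation for the motion given τ.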

In the embodiment of the application, a deep learning model can also be used to perform the inverse dynamics analysis process. The specific type of deep learning model adopted can be set according to actual conditions; a generative adversarial network model is preferably adopted here to alleviate the problem of insufficient training data.

As shown in fig. 10, in a specific implementation of the embodiment of the present application, the process of constructing the inverse dynamical analysis model may include the following steps:

and step S1001, acquiring a motion trail record of the target joint.

And step S1002, selecting a second number of motion track points in the motion track record.

Each motion track point comprises a driving moment, a joint speed and a joint acceleration.

The specific value of the second number can be set according to the actual situation, and generally, in order to ensure the accuracy of the model obtained by training, enough motion track points should be collected as much as possible.

When sampling is performed, different sampling manners can be adopted according to practical situations, including but not limited to random sampling, uniform sampling and the like.

And S1003, calculating joint angles corresponding to the motion track points according to the positive dynamics analysis process.

In the positive dynamics analysis process that solves the joint angle θ from the driving torque τ, the joint velocity θ̇ and the joint acceleration θ̈, any positive dynamics analysis method in the prior art can be selected according to actual conditions; this embodiment does not describe it in detail.

And step S1004, constructing a second training sample set.

The second training sample set comprises a second number of training samples, each training sample comprises a set of driving moments and corresponding joint motion parameters, and the joint motion parameters comprise joint angles, joint speeds and joint accelerations.
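The construction of the second training sample set can be sketched similarly. The `forward_dynamics` function below is a hypothetical placeholder for the prior-art positive dynamics method, and all numeric ranges are illustrative assumptions.

```python
import numpy as np

def forward_dynamics(tau, dq, ddq):
    """Hypothetical stand-in for the positive dynamics step that recovers the
    joint angle theta from driving torque tau, joint velocity dq and joint
    acceleration ddq; replace with the robot-specific solver."""
    return 2.0 * tau - 0.5 * dq - 0.1 * ddq

rng = np.random.default_rng(1)
SECOND_NUMBER = 500  # the "second number" of motion track points

# Each motion track point carries a driving torque, velocity and acceleration.
tau = rng.uniform(-5.0, 5.0, size=SECOND_NUMBER)
dq = rng.uniform(-1.0, 1.0, size=SECOND_NUMBER)
ddq = rng.uniform(-2.0, 2.0, size=SECOND_NUMBER)

# Each sample: a driving torque and its joint motion parameters (theta, dq, ddq).
theta = forward_dynamics(tau, dq, ddq)
second_training_set = [(t, (th, v, a))
                       for t, th, v, a in zip(tau, theta, dq, ddq)]
```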

Step S1005, training the deep learning model in the initial state by using the second training sample set, and using the trained deep learning model as an inverse dynamics analysis model.

A generative adversarial network (GAN) model is preferably employed here, which may include a second generator and a second discriminator. During training, for each training sample in the second training sample set, the second generator first processes the joint motion parameters of the sample to obtain a second generation result; the second discriminator then performs the model training process based on the second generation result and the driving torque of the sample, finally yielding the trained inverse dynamics analysis model.

After the inverse dynamics analysis model is obtained by training, the inverse dynamics analysis of the robot can be performed through the process shown in fig. 11:

step S1101 is to acquire joint movement parameters of the target joint.

And step S1102, processing the joint motion parameters of the target joint by using an inverse dynamics analysis model to obtain the driving moment of the target joint.

Specifically, the joint motion parameters may be input into the inverse dynamics analysis model for processing, and the output of the inverse dynamics analysis model may be used as the driving torque.

In the embodiment of the application, the inverse dynamics analysis process is performed by using the deep learning model, and compared with the existing numerical method calculation method, the calculation complexity is effectively reduced.

It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.

Fig. 12 is a structural diagram of an embodiment of a robot motion analysis apparatus according to an embodiment of the present disclosure, which corresponds to the robot motion analysis method according to the foregoing embodiment.

In this embodiment, a robot motion analysis apparatus may include:

a first joint angle obtaining module 1201, configured to obtain a first joint angle of a target joint of the robot in a parallel configuration;

a positive kinematics analysis module 1202, configured to process the first joint angle using a preset positive kinematics analysis model to obtain a second joint angle of the target joint in a serial configuration; the positive kinematics analysis model is a deep learning model obtained by training a preset first training sample set, and the first training sample set is a set constructed according to an inverse kinematics analysis process.

In a specific implementation of the embodiment of the present application, the robot motion analysis apparatus may further include:

the joint motion parameter acquisition module is used for acquiring the joint motion parameters of the target joint;

the inverse dynamics analysis module is used for processing joint motion parameters of the target joint by using a preset inverse dynamics analysis model to obtain a driving moment of the target joint; the inverse dynamics analysis model is a deep learning model obtained by training a preset second training sample set, and the second training sample set is a set constructed according to a positive dynamics analysis process.

In a specific implementation of the embodiment of the present application, the robot motion analysis apparatus may further include:

a range of motion determination module to determine a range of motion of the target joint in a serial configuration;

a serial joint angle selection module for selecting a first number of serial joint angles in the range of motion;

the inverse kinematics analysis module is used for calculating parallel joint angles corresponding to the serial joint angles according to the inverse kinematics analysis process;

a first training sample set constructing module, configured to construct the first training sample set; the first training sample set comprises a first number of training samples, and each training sample comprises a group of serial joint angles and corresponding parallel joint angles;

and the positive kinematics analysis model training module is used for training the deep learning model in the initial state by using the first training sample set and taking the trained deep learning model as the positive kinematics analysis model.

In a specific implementation of the embodiment of the present application, the deep learning model is a generative adversarial network model including a first generator and a first discriminator;

the positive kinematics analysis model training module may include:

the first generator processing unit is used for processing the parallel joint angle of each training sample in the first training sample set by using the first generator to obtain a first generation result;

and the first discriminator processing unit is used for carrying out the model training process by using the first discriminator according to the first generation result and the serial joint angle of the sample.

In a specific implementation of the embodiment of the present application, the robot motion analysis apparatus may further include:

the motion track record acquisition module is used for acquiring the motion track record of the target joint;

the motion track point selecting module is used for selecting a second number of motion track points in the motion track record, wherein each motion track point comprises a driving moment, a joint speed and a joint acceleration;

the positive dynamics analysis module is used for calculating joint angles corresponding to each motion track point according to the positive dynamics analysis process;

the second training sample set constructing module is used for constructing the second training sample set; the second training sample set comprises a second number of training samples, and each training sample comprises a group of driving moments and corresponding joint motion parameters; the joint motion parameters comprise joint angle, joint speed and joint acceleration;

and the inverse dynamics analysis model training module is used for training the deep learning model in the initial state by using the second training sample set, and taking the deep learning model after training as the inverse dynamics analysis model.

In a specific implementation of the embodiment of the present application, the deep learning model is a generative adversarial network model including a second generator and a second discriminator;

the inverse dynamics analysis model training module may include:

the second generator processing unit is used for processing the joint motion parameters of each training sample in the second training sample set by using the second generator to obtain a second generation result;

and the second discriminator processing unit is used for carrying out a model training process by using the second discriminator according to the second generation result and the driving moment of the sample.

In a specific implementation of the embodiment of the present application, the positive kinematics analysis module is specifically configured to input the first joint angle into the positive kinematics analysis model for processing, and use an output of the positive kinematics analysis model after processing as the second joint angle.

It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, modules and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.

Fig. 13 shows a schematic block diagram of a robot provided in an embodiment of the present application, and only a part related to the embodiment of the present application is shown for convenience of explanation.

As shown in fig. 13, the robot 13 of this embodiment includes: a processor 130, a memory 131 and a computer program 132 stored in the memory 131 and executable on the processor 130. The processor 130 implements the steps in the various robot motion analysis method embodiments described above when executing the computer program 132. Alternatively, the processor 130 implements the functions of the modules/units in the above device embodiments when executing the computer program 132.

Illustratively, the computer program 132 may be partitioned into one or more modules/units that are stored in the memory 131 and executed by the processor 130 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 132 in the robot 13.

Those skilled in the art will appreciate that fig. 13 is merely an example of a robot 13, and does not constitute a limitation of the robot 13, and may include more or fewer components than those shown, or some components in combination, or different components, for example, the robot 13 may also include input and output devices, network access devices, buses, etc.

The Processor 130 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.

The memory 131 may be an internal storage unit of the robot 13, such as a hard disk or a memory of the robot 13. The memory 131 may also be an external storage device of the robot 13, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the robot 13. Further, the memory 131 may also include both an internal storage unit and an external storage device of the robot 13. The memory 131 is used to store the computer program and other programs and data required by the robot 13. The memory 131 may also be used to temporarily store data that has been output or is to be output.

It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

In the embodiments provided in the present application, it should be understood that the disclosed apparatus/robot and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/robot are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable storage medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable storage media that does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.

The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
