Robot vision servo and man-machine interaction hybrid control method based on weight distribution

Document No.: 1790972  Publication date: 2021-11-05

Reading note: This technique, "Robot vision servo and man-machine interaction hybrid control method based on weight distribution," was designed and created by 黄攀峰, 孙驰, 刘正雄, and 马志强 on 2021-07-11. Main content: first, models of visual control and human-machine interaction control are established separately and the control output of each is computed; a fuzzy controller then computes the weight allocation, and the fused control quantity is input to the robot joint servo control system to control the robot's motion. The benefit is that, through dynamic control-weight allocation, two control modes of different autonomy, namely robot visual servo control and human-machine interaction control, are fused, so that human control input can participate in the robot control loop. This increases the diversity of the system's control inputs and the openness of its control objectives, making the method well suited to visual observation tasks in open scenes.

1. A robot vision servo and man-machine interaction hybrid control method based on weight distribution is characterized by comprising the following steps:

Step 1: establish the visual control model and compute the control quantity υ_p output by the visual controller.

Establish the visual controller model.

The model uses three feature points; for each feature point i, the image Jacobian matrix J_pi and the control input u_pi are computed.

They are represented by the stacked image Jacobian matrix J_p and control input u_p.

Compute the control input u_p = K_pp e_p,

where K_pp is the gain of the proportional controller,

e_p = p_T - p, p = [u v]^T is the current pixel coordinate of the feature point, and p_T = [u_T v_T]^T is the desired pixel coordinate of the given feature point.

The computational expression of the image Jacobian matrix follows from the given camera model parameters [f ρ_u ρ_v u_0 v_0].

The depth Z is solved by least-squares online parameter identification, using v_r and ω_r, the real position and attitude rates of change of the robot end-effector.

Step 2: establish the human-machine interaction control model and compute the control quantity υ_h output by the human-machine interaction controller and the interaction force F:

Establish the human-machine interaction controller model, with the hand-controller deviation e_h as the control input;

the interaction force is F = K_τ T_v^{-1} v_p, where K_τ is the force-feedback coefficient.

The three-dimensional position and velocity in the hand-controller base frame are:

e_h = [v_hx v_hy v_hz ω_hx ω_hy ω_hz]^T = [v_h ω_h]^T

T_v is the transformation matrix taking a given linear velocity from the hand-controller base frame to the robot base frame.

T_ω is the transformation matrix taking angular velocity from the hand-controller base frame to the robot base frame.

Step 3: obtain the weight allocation of the control inputs by a fuzzy control method:

The two control quantities are combined by a linear weighted sum; with the visual-control weight K_α, the fused control quantity is:

υ = K_α υ_p + (1 - K_α) υ_h

where K_α and the K_τ of the previous step are solved by a fuzzy controller.

The two fuzzy inputs are the normalized pixel-feature deviation Δp of the n feature points and the normalized angle Δθ between the velocity vectors of the two control quantities; the fuzzy relations are

S_1 = (Δp × Δθ) → K_α

S_2 = (Δp × Δθ) → K_τ

From the fuzzified inputs Δp* and Δθ*, compute the outputs by the fuzzy composition rule K_α* = (Δp* × Δθ*) ∘ S_1 and K_τ* = (Δp* × Δθ*) ∘ S_2, then defuzzify the outputs to obtain control weights K_α and K_τ that meet the requirements.

Step 4: take the weighted Cartesian-space control quantity synthesized in step 3 as the desired end-effector Cartesian pose variation input to the robot, perform joint servo tracking, and control the change of the robot's end-effector position and attitude, so that the image servo system can assist the human's control operation and the observation of the target object.

The joint servo control quantity is obtained through J_r, the Jacobian matrix relating the robot's Cartesian-space position and velocity changes to its joint-space angle changes.

2. The robot vision servo and man-machine interaction hybrid control method based on weight distribution as claimed in claim 1, wherein said S_1 is as shown in the following table:

VSΔp SΔp PSΔp PBΔp BΔp VBΔp
VSΔθ VBKα BKα BKα PBKα PBKα PSKα
SΔθ BKα BKα PBKα PBKα PBKα PSKα
PSΔθ BKα PBKα PBKα PSKα PSKα PSKα
PBΔθ PBKα PSKα PSKα PSKα PSKα SKα
BΔθ PBKα PSKα PSKα PSKα SKα SKα
VBΔθ PSKα PSKα PSKα SKα SKα VSKα

3. The robot vision servo and man-machine interaction hybrid control method based on weight distribution as claimed in claim 1, wherein said S_2 is as shown in the following table:

VSΔp SΔp PSΔp PBΔp BΔp VBΔp
VSΔθ VSKτ SKτ SKτ PSKτ PSKτ PSKτ
SΔθ SKτ SKτ PSKτ PSKτ PSKτ PBKτ
PSΔθ SKτ PSKτ PSKτ PSKτ PBKτ PBKτ
PBΔθ PSKτ PSKτ PBKτ PBKτ PBKτ BKτ
BΔθ PSKτ PBKτ PBKτ PBKτ BKτ BKτ
VBΔθ PBKτ PBKτ PBKτ BKτ BKτ VBKτ

Technical Field

The invention belongs to the field of robot control and relates to a robot vision servo and man-machine interaction hybrid control method based on weight distribution, more particularly to a hybrid robot control method that allocates control weights with a fuzzy controller and fuses visual servo control with human-machine interaction control.

Background

Thanks to advances in camera hardware and computing power, the rapid development of computer vision has enabled machine vision to provide control input for robots. Visual servoing is widely used in medical surgery, disaster relief, industrial workshops, space teleoperation, and other fields. Conventional visual servoing mainly uses position-based visual servo systems, image-based visual servo systems, or hybrid image/position visual servo systems.

Traditional visual servo schemes are mostly direct closed-loop servoing, in which the machine completes the servo operation autonomously, or they use switching control to alternate between visual servoing and human-machine interaction control. Such methods suit industrial, fine-manipulation, or fixed-task scenarios; when applied to visual observation activities in open scenes, they cannot meet the demand for greater flexibility.

Disclosure of Invention

Technical problem to be solved

In order to avoid the defects of the prior art, the invention provides a robot vision servo and man-machine interaction hybrid control method based on weight distribution. A hybrid control strategy based on dynamic weight allocation is adopted: the hand-eye-coordination control input and the visual-servo control input are fused by weighting to form the desired input, so that the robot end-effector tracks the input command, and deviation force feedback with a dynamic coefficient is introduced for human-machine interaction perception. The dynamic-weight method designed here enables an operator, through human-machine interaction, to accomplish tracking observation of an object and line-of-sight traversal tasks.

Technical scheme

A robot vision servo and man-machine interaction hybrid control method based on weight distribution is characterized by comprising the following steps:

Step 1: establish the visual control model and compute the control quantity υ_p output by the visual controller.

Establish the visual controller model.

The model uses three feature points; for each feature point i, the image Jacobian matrix J_pi and the control input u_pi are computed.

They are represented by the stacked image Jacobian matrix J_p and control input u_p.

Compute the control input u_p = K_pp e_p,

where K_pp is the gain of the proportional controller,

e_p = p_T - p, p = [u v]^T is the current pixel coordinate of the feature point, and p_T = [u_T v_T]^T is the desired pixel coordinate of the given feature point.

The computational expression of the image Jacobian matrix follows from the given camera model parameters [f ρ_u ρ_v u_0 v_0].

The depth Z is solved by least-squares online parameter identification, using v_r and ω_r, the real position and attitude rates of change of the robot end-effector.

Step 2: establish the human-machine interaction control model and compute the control quantity υ_h output by the human-machine interaction controller and the interaction force F:

Establish the human-machine interaction controller model, with the hand-controller deviation e_h as the control input;

the interaction force is F = K_τ T_v^{-1} v_p, where K_τ is the force-feedback coefficient.

The three-dimensional position and velocity in the hand-controller base frame are:

e_h = [v_hx v_hy v_hz ω_hx ω_hy ω_hz]^T = [v_h ω_h]^T

T_v is the transformation matrix taking a given linear velocity from the hand-controller base frame to the robot base frame.

T_ω is the transformation matrix taking angular velocity from the hand-controller base frame to the robot base frame.

Step 3: obtain the weight allocation of the control inputs by a fuzzy control method:

The two control quantities are combined by a linear weighted sum; with the visual-control weight K_α, the fused control quantity is:

υ = K_α υ_p + (1 - K_α) υ_h

where K_α and the K_τ of the previous step are solved by a fuzzy controller.

The two fuzzy inputs are the normalized pixel-feature deviation Δp of the n feature points and the normalized angle Δθ between the velocity vectors of the two control quantities; the fuzzy relations are

S_1 = (Δp × Δθ) → K_α

S_2 = (Δp × Δθ) → K_τ

From the fuzzified inputs Δp* and Δθ*, compute the outputs by the fuzzy composition rule:

K_α* = (Δp* × Δθ*) ∘ S_1

K_τ* = (Δp* × Δθ*) ∘ S_2

Defuzzify the outputs to obtain control weights K_α and K_τ that meet the requirements.

Step 4: take the weighted Cartesian-space control quantity synthesized in step 3 as the desired end-effector Cartesian pose variation input to the robot, perform joint servo tracking, and control the change of the robot's end-effector position and attitude, so that the image servo system can assist the human's control operation and the observation of the target object.

The joint servo control quantity is obtained through J_r, the Jacobian matrix relating the robot's Cartesian-space position and velocity changes to its joint-space angle changes.

Said S_1 is as shown in the following table:

VSΔp SΔp PSΔp PBΔp BΔp VBΔp
VSΔθ VBKα BKα BKα PBKα PBKα PSKα
SΔθ BKα BKα PBKα PBKα PBKα PSKα
PSΔθ BKα PBKα PBKα PSKα PSKα PSKα
PBΔθ PBKα PSKα PSKα PSKα PSKα SKα
BΔθ PBKα PSKα PSKα PSKα SKα SKα
VBΔθ PSKα PSKα PSKα SKα SKα VSKα

Said S_2 is as shown in the following table:

VSΔp SΔp PSΔp PBΔp BΔp VBΔp
VSΔθ VSKτ SKτ SKτ PSKτ PSKτ PSKτ
SΔθ SKτ SKτ PSKτ PSKτ PSKτ PBKτ
PSΔθ SKτ PSKτ PSKτ PSKτ PBKτ PBKτ
PBΔθ PSKτ PSKτ PBKτ PBKτ PBKτ BKτ
BΔθ PSKτ PBKτ PBKτ PBKτ BKτ BKτ
VBΔθ PBKτ PBKτ PBKτ BKτ BKτ VBKτ

Advantageous Effects

The invention provides a robot vision servo and man-machine interaction hybrid control method based on weight distribution. Its key points are: a robot visual-servo framework is constructed using monocular-vision optical-flow depth recovery; a human-machine interaction control framework is constructed using a force-feedback guidance mechanism; two evaluation parameters are defined; rules are established from prior experience via a fuzzy control method, and the evaluation parameters are used for decision making to obtain the control-weight allocation coefficient and the force-feedback guidance coefficient; finally, the fusion of the two control modes is realized.

The invention has the advantage that, through dynamic control-weight allocation, two control modes of different autonomy, namely robot visual servo control and human-machine interaction control, are fused, so that human control input can participate in the robot control loop. This increases the diversity of the system's control inputs and the openness of its control objectives, and the invention is well suited to visual observation tasks in open scenes.

Drawings

FIG. 1 is a block diagram of the control system used in the present invention.

Detailed Description

The invention will now be further described with reference to the following examples and drawings.

As shown in FIG. 1, the technical scheme of the invention first establishes models of visual control and human-machine interaction control and computes their control outputs; a fuzzy controller then computes the weight allocation, and the fused control quantity is input to the robot joint servo control system to control the robot's motion.

Step 1: model the visual controller and compute its output control quantity;

Step 2: model the human-machine interaction controller and compute its output control quantity;

Step 3: compute the fuzzy controller's input parameters from the parameters of the visual and human-machine interaction controllers, allocate the control weights by the fuzzy control method, and synthesize the weighted control quantity;

Step 4: take the weighted robot control quantity synthesized in step 3 as the desired end-effector Cartesian pose variation input to the robot, perform joint servo tracking, and control the change of the robot's end-effector position and attitude, so that the image servo system can assist the human's control operation and the observation of the target object.

The method comprises the following specific implementation steps:

step 1 is mainly to complete the design of a vision controller and obtain the control quantity of the terminal pose of the robot.

Consider a six-degree-of-freedom robot with an end-effector-fixed camera whose end-effector coordinate frame and camera coordinate frame coincide. The initial pose of the end-effector with respect to the base frame is ξ = [x y z θ_x θ_y θ_z]^T; in the present invention a 2-3-1 rotation sequence is used, and the camera model parameters are set as [f ρ_u ρ_v u_0 v_0].

The invention is suitable for servo tracking of targets with three or more feature points; the discussion below is limited to a rectangular target whose four vertices are selected as feature points. For each feature point, the initial pixel coordinates p = [u v]^T can be measured, and the rate of change of the single image feature is linearly related to the rate of change of the end-effector pose, i.e.

ṗ = J_p υ_p    (1)

where υ_p = [v_x v_y v_z ω_x ω_y ω_z]^T = [v_p ω_p]^T is the translational and rotational velocity of the end-effector relative to the base frame and J_p is the image Jacobian matrix. From the perspective projection equation and the camera model parameters, assuming the point's real-space coordinates relative to the camera are [X Y Z]^T, the computational expression of the image Jacobian matrix can be obtained.

for an arbitrary point, its desired pixel coordinate position p is assumedT=[uT vT]TDesigning the tracking error e of the feature point pixelp

ep=pT-p (3)

Designing a proportional controller with a proportionality coefficient of Kpp

up=Kppep (4)

Three of the four point features are selected and their Jacobian matrices are stacked; the end-effector velocity and angular velocity required, in the robot base frame, to drive the image features to their desired positions in the pixel plane are then solved for, where J_pi and u_pi denote the image Jacobian matrix and control input of the i-th feature point.

In the invention, the depth of the feature points relative to the camera frame, i.e. the Z value, is recovered using an optical-flow method. The depth estimator is designed as follows: let υ_r = [v_r ω_r]^T denote the real translational and rotational velocity of the robot end-effector relative to the base frame; substituting into (2) and rearranging yields the estimation model. The least-squares method is used for online parameter identification: for an arbitrarily set initial value, the depth estimate quickly converges to the true value in the initial stage of the system; in practical application the method is insensitive to errors in the Z value, and the initial deviation has little effect on system performance.
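A minimal numerical sketch of the step-1 control law, assuming the textbook point-feature interaction matrix for an eye-in-hand camera; the gain K_pp, focal length, depths, and pixel coordinates below are illustrative values, not parameters from the invention:

```python
import numpy as np

def image_jacobian(u, v, Z, f=800.0):
    # Textbook IBVS interaction matrix for one point feature;
    # (u, v) are pixel coordinates relative to the principal point,
    # Z is the estimated depth, f the focal length in pixels.
    return np.array([
        [-f / Z, 0.0, u / Z, u * v / f, -(f**2 + u**2) / f, v],
        [0.0, -f / Z, v / Z, (f**2 + v**2) / f, -u * v / f, -u],
    ])

def visual_control(points, targets, depths, K_pp=0.5):
    # Stack the per-point Jacobians (three points give a 6x6 system)
    # and solve for the end-effector twist v_p that drives the pixel
    # errors e_p = p_T - p toward zero: J_p v_p = K_pp e_p.
    J = np.vstack([image_jacobian(u, v, Z)
                   for (u, v), Z in zip(points, depths)])
    e_p = (np.asarray(targets) - np.asarray(points)).reshape(-1)
    u_p = K_pp * e_p                                # proportional law (4)
    v_p, *_ = np.linalg.lstsq(J, u_p, rcond=None)   # least-squares twist
    return v_p

points  = [(40.0, -20.0), (-30.0, 25.0), (10.0, 60.0)]
targets = [(0.0, 0.0), (-50.0, 0.0), (0.0, 80.0)]
print(visual_control(points, targets, depths=[1.2, 1.1, 1.3]).shape)  # (6,)
```

The least-squares solve also accommodates stacking all four feature points into an 8x6 overdetermined system.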

Step 2 completes the design of the human-machine interaction controller and computes the human-machine interaction control quantity for the robot end-effector pose. The controller acquires control input from a human-machine interaction device fitted with position and velocity sensors, such as a three-degree-of-freedom or six-degree-of-freedom hand controller. Taking a six-degree-of-freedom parallel hand controller as an example, and assuming the acquired input is the hand controller's end point, the three-dimensional position and velocity in the hand-controller base frame are:

e_h = [v_hx v_hy v_hz ω_hx ω_hy ω_hz]^T

design proportional control law

Transformation matrix T for converting given speed and angular speed from hand controller base coordinate system to robot base coordinate systemvAnd TωBy calculating

The control quantity can be converted from the hand controller base coordinate space to the robot base coordinate space.

Meanwhile, the three translational components of the visual servo control quantity are used as feedback: a force is rendered in the hand controller for interaction, so that the operator can feel the direction of the visual servo control. The interaction force is designed as

F = K_τ T_v^{-1} v_p    (9)

where the weight K_τ is computed according to the next step.
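The step-2 mapping and the force-feedback law (9) can be sketched as follows; T_v and T_ω are taken as plain 3x3 rotation matrices, and the proportional gain K_ph and all numeric values are illustrative assumptions:

```python
import numpy as np

def human_control(e_h, T_v, T_w, K_ph=0.8):
    # Map the hand-controller deviation e_h = [v_h, w_h] (6-vector,
    # hand-controller base frame) to a robot-base-frame twist.
    v = T_v @ (K_ph * np.asarray(e_h)[:3])   # linear part
    w = T_w @ (K_ph * np.asarray(e_h)[3:])   # angular part
    return np.concatenate([v, w])

def feedback_force(v_p, T_v, K_tau=0.3):
    # F = K_tau * T_v^{-1} v_p: reflect the translational part of the
    # visual control into the hand-controller frame as a guiding force.
    return K_tau * np.linalg.inv(T_v) @ np.asarray(v_p)[:3]

I3 = np.eye(3)
e_h = [0.1, 0.0, -0.05, 0.0, 0.02, 0.0]
print(human_control(e_h, I3, I3))  # with identity transforms: K_ph * e_h
```

With aligned frames (identity T_v, T_ω) the mapping reduces to pure proportional scaling, which makes the force-feedback direction easy to verify on hardware.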

Step 3 obtains the weight allocation of the control inputs by the fuzzy control method. The invention combines the two control quantities by a linear weighted sum; with the visual-control weight K_α, the fused control quantity is

υ = K_α υ_p + (1 - K_α) υ_h    (10)

where K_α and the K_τ of the previous step are solved using a fuzzy controller.

The fuzzy controller is designed as a two-input controller. The two fuzzy inputs are: the normalized pixel-feature deviation Δp of the n feature points and the normalized angle Δθ between the velocity vectors of the two control quantities, each computed by a normalization formula.

Based on prior knowledge, the two normalized inputs and the two output weights are fuzzified; the fuzzy sets are designed as very small (VS), small (S), slightly small (PS), slightly big (PB), big (B), and very big (VB), with corresponding membership functions determined. Two prior fuzzy criteria are adopted: when the pixel deviation is large, the visual-control weight should be small and the force-feedback parameter large; and when the velocity-angle deviation is large, the visual-control weight should likewise be small and the force-feedback parameter large. These establish the fuzzy relations S_1 and S_2, where

S_1 = (Δp × Δθ) → K_α    (13)

S_1 is detailed in the following table:

VSΔp SΔp PSΔp PBΔp BΔp VBΔp
VSΔθ VBKα BKα BKα PBKα PBKα PSKα
SΔθ BKα BKα PBKα PBKα PBKα PSKα
PSΔθ BKα PBKα PBKα PSKα PSKα PSKα
PBΔθ PBKα PSKα PSKα PSKα PSKα SKα
BΔθ PBKα PSKα PSKα PSKα SKα SKα
VBΔθ PSKα PSKα PSKα SKα SKα VSKα

S_2 = (Δp × Δθ) → K_τ    (14)

S_2 is detailed in the following table:

VSΔp SΔp PSΔp PBΔp BΔp VBΔp
VSΔθ VSKτ SKτ SKτ PSKτ PSKτ PSKτ
SΔθ SKτ SKτ PSKτ PSKτ PSKτ PBKτ
PSΔθ SKτ PSKτ PSKτ PSKτ PBKτ PBKτ
PBΔθ PSKτ PSKτ PBKτ PBKτ PBKτ BKτ
BΔθ PSKτ PBKτ PBKτ PBKτ BKτ BKτ
VBΔθ PBKτ PBKτ PBKτ BKτ BKτ VBKτ

From the fuzzified inputs Δp* and Δθ*, the outputs are computed by the fuzzy composition rule:

K_α* = (Δp* × Δθ*) ∘ S_1    (15)

K_τ* = (Δp* × Δθ*) ∘ S_2    (16)

Defuzzifying the outputs yields control weights K_α and K_τ that meet the requirements.
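The inference of step 3 can be sketched as a Mamdani-style table lookup with centroid defuzzification. The triangular membership functions and set centers below are assumptions for illustration (the text fixes only the six labels and the rule tables); applying the same routine to the S_2 table yields K_τ:

```python
import numpy as np

LABELS = ["VS", "S", "PS", "PB", "B", "VB"]
CENTERS = np.linspace(0.0, 1.0, 6)   # assumed centers of the six triangular sets

def membership(x):
    # Triangular membership degrees of x in the six fuzzy sets (half-width 0.2).
    return np.maximum(0.0, 1.0 - np.abs(x - CENTERS) / 0.2)

# Rule table S_1 from the text: rows index the Δθ set, columns the Δp set,
# entries are the fuzzy label of K_alpha.
S1 = [
    ["VB", "B",  "B",  "PB", "PB", "PS"],
    ["B",  "B",  "PB", "PB", "PB", "PS"],
    ["B",  "PB", "PB", "PS", "PS", "PS"],
    ["PB", "PS", "PS", "PS", "PS", "S"],
    ["PB", "PS", "PS", "PS", "S",  "S"],
    ["PS", "PS", "PS", "S",  "S",  "VS"],
]

def fuzzy_weight(dp, dtheta, table=S1):
    # Mamdani inference with centroid defuzzification;
    # dp and dtheta are the normalized inputs in [0, 1].
    mu_p, mu_t = membership(dp), membership(dtheta)
    num = den = 0.0
    for i in range(6):          # Δθ sets
        for j in range(6):      # Δp sets
            w = min(mu_t[i], mu_p[j])                 # rule firing strength
            num += w * CENTERS[LABELS.index(table[i][j])]
            den += w
    return num / den if den > 0 else 0.5              # assumed neutral fallback

print(round(fuzzy_weight(0.0, 0.0), 2))  # 1.0: small deviations, full visual weight
```

Consistent with the prior criteria, small Δp and Δθ give K_α near 1 (visual servo dominates) and large deviations drive K_α toward 0 (human input dominates) in the fused command υ = K_α υ_p + (1 - K_α) υ_h.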

Step 4 takes the weighted robot control quantity synthesized in step 3 as the desired end-effector Cartesian pose variation input to the robot, performs joint servo tracking, and controls the change of the robot's end-effector position and attitude, so that the image servo system can assist the human's control operation and the observation of the target object.

A six-degree-of-freedom revolute robot model with known DH parameters is selected, and the current joint angles are set as q = [q_1 q_2 q_3 q_4 q_5 q_6]. From the parameters and joint angles, the homogeneous transformation matrix A_i between any two adjacent joint coordinate frames can be obtained, along with the robot's total transformation matrix

A = A_1 A_2 A_3 A_4 A_5 A_6    (17)

According to the mapping between the robot's Cartesian-space position and velocity changes and its joint-space angle changes, with J_r the Jacobian matrix relating the two, the joint-angle control quantity can be solved.

Each joint angle is then servo-controlled; from the updated joint angle values, the controlled end-effector pose transformation can be computed as

T = A(q)    (20)

thereby obtaining the controlled end-effector position and attitude coordinates.
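The joint servo step implied by equations (17) through (20) can be sketched as one resolved-rate iteration; here `jacobian` stands in for the DH-derived J_r(q), and the damping factor is an assumption added for numerical robustness near singularities:

```python
import numpy as np

def joint_servo_step(q, v_des, jacobian, dt=0.01, lam=1e-3):
    # One resolved-rate step: map the fused Cartesian twist v_des (eq. 10)
    # to joint velocities through a damped pseudo-inverse of J_r(q),
    # then integrate the joint angles over dt.
    J = jacobian(q)
    dq = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(6), v_des)
    return q + dt * dq

# Toy check with an identity Jacobian: the joints follow the twist directly.
q_next = joint_servo_step(np.zeros(6), np.ones(6), lambda q: np.eye(6))
print(np.round(q_next, 3))
```

After each step, forward kinematics T = A(q) gives the new end-effector pose, closing the loop with the image and hand-controller measurements.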

