Robot system and method of operation

Document No.: 1102134  Publication date: 2020-09-25

Reading note: This invention, "Robot system and method of operation", was designed and created by N·布里凯-克雷斯特吉安, H·丁, R·基尔斯滕, B·马蒂亚斯, P·罗德里格斯 and A·瓦尔布格 on 2019-02-08. Abstract: A robot system includes: a robot arm (1) comprising a plurality of links (5) and a plurality of joints (6), one of said links (5) being connected to another of said links (5) by a joint (6); means (9, 10) for determining, for any of said joints (6), the angular velocity of the joint (6) and the torque to which the joint (6) is subjected; and a neural network (11) connected to the means (9, 10) for receiving angular velocity data and torque data therefrom, the neural network (11) being trained to distinguish, based on the angular velocity data and the torque data, between a normal operating condition of the robot arm (1), a condition in which the robot arm (1) collides with an external object, and a condition in which a person intentionally interacts with the robot arm (1).

1. A robotic system, comprising:

a robot arm (1) comprising: a plurality of links (5); and a plurality of joints (6), one of said links (5) being connected to another of said links (5) by a joint (6); means (9, 10) for determining, for any of the joints (6), the angular velocity of the joint (6) and the torque to which the joint (6) is subjected,

a neural network (11) connected to the means (9, 10) for receiving angular velocity data and torque data from the means (9, 10), the neural network (11) being trained to distinguish, based on the angular velocity data and the torque data, between a normal operating condition of the robot arm (1), a condition in which the robot arm (1) collides with an external object, and a condition in which a person intentionally interacts with the robot arm (1).

2. The robot system according to claim 1, characterized in that the neural network (11) is designed to operate on a time series of pairs of angular velocity and torque data of a given joint (6), wherein, when a new pair of data is determined, the new pair is added to the time series and the oldest pair is deleted, so that the data set is kept at a constant size.

3. Robot system according to claim 1 or 2, characterized in that at least one neuron (18) of the neural network (11) is connected so as to receive angular velocity data and torque data of each of the joints (6).

4. Robot system according to claim 1 or 2, characterized in that at least one neuron (18) of the neural network (11) is connected so as to receive angular velocity data and torque data of only an associated one of the joints (6).

5. Robot system according to claim 1, 2 or 4, characterized in that the neural network (11) comprises a first hidden layer (17) divided into a plurality of groups (21), the neurons (18) of a given group (21) being connected so as to receive only angular velocity data and torque data of an associated one of the joints (6).

6. Robot system according to claim 1, 2, 4 or 5, characterized in that the neural network (11) comprises groups (22) of neurons (18), each group (22) being associated with one of the joints (6) and being trained to distinguish between normal operation of the associated joint (6), a situation in which the joint (6) or an adjacent link (5) collides with an external object, and a situation in which a person intentionally interacts with the joint (6) or the adjacent link (5).

7. A robot system according to any of the preceding claims, characterized in that it further comprises a controller (2) for controlling the movement of the robot arm (1), the operating mode of the controller (2) being variable depending on the operating condition signal output by the neural network (11).

8. The robot system according to claim 7, wherein one of the operating modes is a guiding mode, and wherein the controller (2) enters the guiding mode if the neural network (11) finds that a person intentionally interacts with the robot arm (1).

9. The robotic system according to claim 8, wherein one of the operation modes is a normal mode in which the movement of the robotic arm is defined by a predetermined program, and wherein the controller (2) enters the normal mode if the neural network (11) finds that an intentional interaction has ended.

10. A method of operating a robotic system, the method comprising the steps of:

a) providing a neural network (11), the neural network (11) having a plurality of inputs, each of the plurality of inputs being associated with a joint (6) of a robotic arm (1) of the system for receiving torque data and angular velocity data of the joint (6),

b) training the neural network (11), on the basis of time series of torque data and angular velocity data representing normal operation, at least one collision, and a person intentionally interacting with the robot arm, to distinguish, based on the angular velocity data and the torque data, between a normal operating condition of the robot arm (1), a condition in which the robot arm (1) collides with an external object, and a condition in which a person intentionally interacts with the robot arm (1);

c) inputting real-time torque data and real-time angular velocity data of the robotic arm into the neural network (11) to determine an operating condition of the robotic arm (1) from the real-time torque data and the real-time angular velocity data, and selecting an operating mode of the robotic arm (1) based on the operating condition.

11. Robot system according to claim 1, characterized in that the neural network (11) is trained to determine contact points at which the robot arm (1) collides with external objects and/or a person intentionally interacts with the robot arm (1).

12. The method of claim 10, further comprising the steps of:

d) training the neural network (11) to determine a contact point at which the robot arm (1) collides with an external object and/or a human intentionally interacts with the robot arm (1).

Technical Field

The present invention relates to a robot system, and to an operating method for a robot system.

Background

Robots designed to cooperate directly with humans in a PFL (power and force limited) fashion achieve safe physical human-robot interaction. In such a case, contact between the human and the robot is likely to occur. These contacts may be of different kinds: they may be accidental, or they may be intended by the human. In either case, in a robot arm in which several links are interconnected by joints, contact will cause the angular velocity of at least one joint and/or the torque experienced by that joint to deviate from an expected value, which may have been measured in a previous iteration of the programmed movement of the robot arm, or which may be calculated based on the known weights and lever arms of the robot links and the current pose of the robot arm.

Conventionally, any significant deviation from the expected value will cause the robot arm to stop. In this way, if the deviation is due to a collision with a person, harm to the person can be avoided. In a collaborative work environment, a person may intentionally touch a robot, for example, with the intent of guiding the robot around new obstacles not considered in the motion program currently being performed by the robot. In this case, stopping is not an appropriate reaction to the contact.

Therefore, it would be desirable if the robotic system were able to distinguish between accidental and intentional contact.

One approach to this problem relies on low-pass and high-pass filtering of the joint load torque, based on the assumption that collisions contain more high-frequency content and intended interaction contains more low-frequency content. However, the quality of classification is often inadequate, because other quantities (e.g., joint velocity) and features (e.g., the shape of a rising edge) must be considered for successful classification. Since many of these quantities depend on the physical parameters of a given robot, rules on how to consider these parameters must be developed separately for each robot model, requiring a considerable investment of highly skilled engineering effort.

Disclosure of Invention

It is therefore an object of the present invention to provide a robot system and an operating method for a robot system which provide a simple and economical way of distinguishing between accidental and intentional contact.

According to one aspect of the invention, the object is achieved by a robot system comprising:

a robotic arm, comprising: a plurality of links; and a plurality of joints, one of the links being connected to another link by a joint; means for determining the angular velocity of a joint and the torque to which the joint is subjected for any of said joints,

a neural network connected to the means for receiving angular velocity data and torque data therefrom, and trained to distinguish, based on the angular velocity data and the torque data, between a normal operating condition of the robot arm, a condition in which the robot arm collides with an external object, and a condition in which a human intentionally interacts with the robot arm.

In a preferred embodiment, the neural network is trained to determine a contact point at which the robotic arm collides with an external object and/or a human intentionally interacts with the robotic arm.

In a preferred embodiment, the neural network is trained to: distinguishing between a normal operating condition of the robotic arm, a condition of the robotic arm colliding with an external object, and a condition of a human intentionally interacting with the robotic arm based on the angular velocity data and the torque data; and determining a contact point at which the robotic arm collides with an external object and/or a human intentionally interacts with the robotic arm. Such a contact point may be positioned, for example, below or above the elbow.

The robot system according to the invention is configured to apply contact classification and/or contact localization and/or combined contact classification and localization.

The robot system according to the invention scales easily to a larger number of classification outputs. Thus, for example, instead of "normal operation"/"interaction"/"collision", the robot system may be configured to apply classifications such as "normal operation"/"upper arm interaction"/"upper arm collision"/"lower arm interaction"/"lower arm collision".

If the neural network is trained on a sufficient amount of exemplary data representing a plurality of conditions to be distinguished, specific features of the plurality of conditions translate into weights for interconnections between neurons of the network without requiring a person to actually identify the features and develop rules. Thus, a reliable distinction between collision and intentional contact can be made in different robot systems at low cost, regardless of the physical characteristics of the robot systems.

To facilitate comparisons between data sets and identification of common or different characteristics of the data sets, the neural network must have a constant number of elements in the data received as input at each time step. However, as the movement of the robot arm continues, the amount of data continues to increase. Thus, the neural network is preferably designed to operate on a time series of angular velocity and torque data pairs for a given joint, with the oldest pair being deleted as new data pairs are determined and added to the time series in order to maintain the data set at a constant size.
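The constant-size time series described above can be sketched as a fixed-length sliding window. The following is a minimal illustrative sketch, not the patented implementation; the window width M, the joint count N, and all sample values are assumptions chosen for the example.

```python
from collections import deque

M = 50          # number of time steps kept in the window (assumed)
N = 6           # degrees of freedom / joints (assumed)

# deque(maxlen=M) drops the oldest entry automatically when a new
# one is appended, which is exactly the behaviour described above.
window = deque(maxlen=M)

def push_sample(velocities, torques):
    """Append one (angular velocity, torque) sample for all N joints."""
    assert len(velocities) == N and len(torques) == N
    window.append(tuple(velocities) + tuple(torques))

# Simulate M + 10 control cycles; the window never exceeds M entries.
for t in range(M + 10):
    push_sample([0.1 * t] * N, [0.01 * t] * N)

print(len(window))   # 50: constant size once the window is filled
```

Because the window size never changes, the network sees an input of fixed dimensionality at every time step, as the text requires.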

The data thus obtained may be regarded as similar to a video image frame in which, for example, the data obtained at the various joints at a given point in time form the rows of pixel data, and the columns are formed by the successively obtained data of a given joint. In such a data set, identifying features indicative of a collision or an intentional contact is analogous to pattern recognition in images.

Theoretically, the best quality of distinction should be achieved if all neurons of the neural network, or at least a large number of neurons, receive data from each joint. Thus, in a basic embodiment of the invention, at least one neuron of a neural network is connected so as to receive angular velocity data and torque data of each of said joints.

However, the more connections a neuron has, the longer it takes to train the network, or rather the greater the amount of training data must be. Thus, it may be preferable not to let every neuron receive data from all over the robot arm, but rather to associate neurons with specific joints of the robot. More precisely, at least one neuron of the neural network may be connected so as to receive angular velocity data and torque data of only an associated one of the joints.

In practice, each joint of the robotic arm may have associated with it one or more neurons that receive data from only that one joint.

In particular, the neural network may comprise a first hidden layer, the first hidden layer being divided into a plurality of groups, the neurons of a given group being connected so as to receive angular velocity data and torque data of only an associated one of the joints and being unconnected to neurons of other groups.

In this case, a second hidden layer may be provided whose neurons are connected to the various groups of the first hidden layer: while the groups of the first hidden layer identify features in the data of a given joint that may indicate accidental or intentional contact, the neurons of the second hidden layer provide an overview of the entire robot arm.

In a neural network where each group is associated with one of the joints, each group may be individually trained to distinguish between normal operation of the associated joint, a condition in which the joint collides with an external object, and a condition in which a person intentionally interacts with the joint. Since each group of neurons may focus on data from its associated joint, the identification of features indicative of a given type of contact tends to be easier for such groups than for a network as defined above (in which the neurons may receive data from any joint), so that good training results may be achieved using only a modest amount of training data.

The robot system may further include a controller for controlling movement of the robot arm, and an operation mode of the controller may be changed depending on the operation condition signal output by the neural network.

One of the operation modes of the controller may be, for example, a guidance (leadthrough) mode in which the controller detects a direction of a contact force applied to the robot by the person based on torque data and angular velocity data from the joints and controls the robot to move in the direction of the force. Such a switch would enable, for example, guiding the robot arm around an obstacle in the environment of the robot arm that is not or not yet perceived by the controller, and thus prevent a collision with the obstacle.
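The guiding (lead-through) behaviour described above, in which the arm moves in the direction of the force applied by the person, can be sketched as a simple admittance-style mapping from measured external joint torque to a joint velocity command. This is an illustrative sketch only; the gain, the deadband, and the function name are assumptions, not values or interfaces from the patent.

```python
GAIN = 0.5       # rad/s per Nm: admittance gain (assumed value)
DEADBAND = 0.2   # Nm: ignore torques below this as sensor noise (assumed)

def guiding_mode_velocity(external_torques):
    """Map measured external joint torques to joint velocity commands,
    so that each joint moves in the direction it is being pushed."""
    commands = []
    for tau in external_torques:
        if abs(tau) < DEADBAND:
            commands.append(0.0)          # no intentional push detected
        else:
            commands.append(GAIN * tau)   # move with the applied force
    return commands

print(guiding_mode_velocity([0.1, -1.0, 2.0]))  # [0.0, -0.5, 1.0]
```

The deadband prevents the arm from drifting under sensor noise, while the proportional term makes the arm yield in the direction of the person's push.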

Another of the operating modes of the controller may be a normal mode in which the movement of the robotic arm is defined by a predetermined program. If the neural network finds that the intentional interaction has ended, the controller may enter the normal mode, thus enabling the robotic arm to immediately resume normal operation as soon as the person stops guiding the robotic arm.

The robot system according to the invention may also be configured to classify other types of contact situations. For example, for a robot performing an assembly task, the position of the robot's tool and the contact forces and moments that occur may be measured continuously. Based on the course of the position and of the forces and moments, a neural network in the robot system according to the invention can then classify whether the assembly task was completed successfully. Training the neural network for such a use case is equivalent to training it for the differentiation and/or localization of human-robot contact; only the input data and the output of the network change: in the case described so far, the inputs are joint torques and velocities and the output is a classification of the contact; in the second case, the inputs are Cartesian positions and forces/moments and the output is a classification of the success of the assembly task.

According to a second aspect of the invention, the above object is achieved by a method of operating a robotic system, the method comprising the steps of:

a) providing a neural network having a plurality of inputs, each of the plurality of inputs being associated with a joint of a robotic arm of the system for receiving torque data and angular velocity data for the joint,

b) training the neural network, on the basis of time series of torque data and angular velocity data representing normal operation, at least one collision, and a person intentionally interacting with the robot arm, to distinguish, based on the angular velocity data and the torque data, between a normal operating condition of the robot arm, a condition in which the robot arm collides with an external object, and a condition in which a person intentionally interacts with the robot arm;

c) real-time torque data and real-time angular velocity data of the robot arm are input into the neural network to determine an operating condition of the robot arm from the real-time torque data and the real-time angular velocity data, and an operating mode of the robot arm is selected based on the operating condition.
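Step c) above amounts to mapping the operating condition classified by the network to a controller mode. The following sketch illustrates that dispatch; the class labels and mode names are assumptions chosen for the example, not terminology fixed by the patent.

```python
# Hypothetical mapping from the network's classified condition to a
# controller operating mode, mirroring steps a)-c) above.
MODE_FOR_CONDITION = {
    "normal": "normal",              # follow the predefined program
    "collision": "emergency_stop",   # accidental contact: stop at once
    "intentional": "guiding",        # person is deliberately interacting
}

def select_mode(condition: str) -> str:
    """Select the controller mode for the condition output by the network."""
    return MODE_FOR_CONDITION[condition]

print(select_mode("collision"))  # emergency_stop
```

In a real controller this dispatch would run once per classification cycle, immediately after the network has processed the latest real-time torque and velocity data.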

In a preferred embodiment, the method further comprises the steps of: the neural network is trained to determine contact points at which the robotic arm collides with an external object and/or a human intentionally interacts with the robotic arm.

In a preferred embodiment, the method further comprises the steps of: training the neural network to: distinguishing between a normal operating condition of the robotic arm, a condition of the robotic arm colliding with an external object, and a condition of a human intentionally interacting with the robotic arm based on the angular velocity data and the torque data; and determining a contact point at which the robotic arm collides with an external object and/or a human intentionally interacts with the robotic arm. Such a contact point may be positioned, for example, below or above the elbow.

The method according to the invention is configured to provide contact classification and/or contact localization and/or combined contact classification and localization.

The method according to the invention easily scales up the number of classification outputs. Thus, for example, instead of "normal operation"/"interaction"/"collision", the robotic system may be configured to apply classifications such as "normal operation"/"upper arm interaction"/"upper arm collision"/"lower arm interaction"/"lower arm collision".

The method according to the invention can also be configured to classify other types of contact situations. For example, for a robot performing an assembly task, the position of the robot's tool and the contact forces and moments that occur may be measured continuously. Based on the course of the position and of the forces and moments, the method according to the invention can then use a neural network to classify whether the assembly task was completed successfully. The method is equivalent to the differentiation and/or localization of human-robot contact; only the input data and the output of the network change: in the case described so far, the inputs are joint torques and velocities and the output is a classification of the contact; in the second case, the inputs are Cartesian positions and forces/moments and the output is a classification of the success of the assembly task.

Drawings

Further features and advantages of the invention will become apparent from the subsequent description of embodiments of the invention with reference to the accompanying drawings.

FIG. 1 is a block diagram of a robotic system according to the present invention;

FIG. 2 illustrates data that the neural network of the robotic system uses as input;

FIG. 3 is a first embodiment of a neural network;

FIG. 4 is a second embodiment of a neural network; and

fig. 5 is a third embodiment of a neural network.

Detailed Description

Fig. 1 illustrates a robot system comprising a robot arm 1 and its associated controller 2. The robot arm 1 includes a support 3, an end effector 4, a number of links 5, and joints 6 which connect the links 5 to each other, to the support 3, or to the end effector 4, each joint having one or two rotational degrees of freedom. As is common in the art, the motors driving the rotation of the links 5 and the end effector 4 about the axes 7, 8 are hidden inside the links 5, the joints 6 or the support 3. The joints 6 further comprise: a rotary encoder or other suitable sensor 9, associated with each axis 7, 8, which provides data to the controller 2 regarding the orientation and angular velocity of each link 5; and a torque sensor 10 sensitive to torque about the axes 7 and 8, respectively. When the robot arm 1 is moving freely without touching external objects, the torque detected by these sensors 10 is governed by the weight and geometry of the links 5, by the internal friction of the links 5, and, when the angular velocity is not constant, by the moment of inertia of the links 5, so that the controller 2 can calculate the expected torque at each joint based on the known angles and rotational velocities of the links 5.

If the torque detected by the sensor 10 deviates significantly from such an expected value, it can be assumed that the robot arm 1 is in contact with some external object or person. The output data from the sensors 9,10 is received by the neural network 11, the neural network 11 in turn determining the mode of operation of the controller 2.

A FIFO memory device 12 is connected between the neural network 11 and the outputs of the sensors 9,10, so that not only the current torque data and angular velocity data are made available to the neural network 11, but also the M most recent data sets from the sensors 9,10, where M is any integer, are made available to the neural network 11.

The controller 2 has at least three operating modes. The first is a normal operating mode in which the controller 2 controls the robot arm to move according to a predefined program, for example in order to grasp the screw 13 and introduce the screw 13 into the workpiece 14. Another is an emergency stop mode in which the robot arm 1 is stopped immediately, and the third is a guiding mode in which the robot arm 1 moves in the direction in which it is pushed by an external force (e.g. by a user's hand 15).

Fig. 2 shows an "image frame" formed by the angular velocity data and torque data provided by the sensors of the robot arm 1. The image frame comprises "pixels" 16, each pixel 16 corresponding to one datum from one sensor 9 or 10. These pixels 16 are organized in 2N rows and M columns, N being the number of degrees of freedom of the robot arm 1 and the number of angular velocity sensors 9 and torque sensors 10 associated with these degrees of freedom, each column corresponding to a data set obtained from said 2N sensors 9, 10 at a given point in time and stored in the FIFO memory device 12.

The task of the neural network 11 is to identify the features of the accidental contact and the intentional contact in the image frames formed by the data from the sensors 9,10, such that when the neural network 11 identifies an accidental contact, the neural network 11 will cause the controller 2 to switch to an emergency stop mode, and when the neural network 11 identifies an intentional contact, the controller 2 switches to a guide mode.

The neural network 11 is trained offline to recognize these features: training data obtained from the robot arm during normal operation, during accidental contact and during intentional contact are input, and the coupling coefficients between the neurons of the network are optimized by back-propagation, in a manner known per se, until the network 11 identifies these different conditions with satisfactory reliability.

The neural network 11 may have the structure shown in fig. 3: in the hidden layer 17 there are P neurons 18, each of which receives torque data and angular velocity data from the M most recent data sets stored in the FIFO memory device 12, i.e. each neuron 18 is able to "see" every pixel of the frame of fig. 2. The output layer 19 comprises at least three neurons 20, one for each of the operating modes of the controller 2 between which the network 11 has to select. Each neuron 20 receives input from all neurons 18 of the hidden layer 17. The neurons 18 of the hidden layer 17 use a standard sigmoid activation function, while the neurons 20 of the output layer 19 use a softmax activation function. Only one of the neurons 20 can be active at a given time; when a neuron 20 becomes active, it causes the controller 2 to switch to its associated operating mode.
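A minimal numpy sketch of the fully connected architecture of fig. 3 follows: every hidden neuron sees the whole 2N x M frame, and a softmax output layer selects one of three operating modes. All sizes, the random initialization, and the absence of training are illustrative assumptions; a real network would of course be trained as described below.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, P = 6, 50, 32           # joints, window length, hidden neurons (assumed)
n_inputs = 2 * N * M          # each neuron sees every "pixel" of the frame
n_modes = 3                   # normal / emergency stop / guiding

# Untrained weights for illustration only.
W1 = rng.standard_normal((P, n_inputs)) * 0.01
b1 = np.zeros(P)
W2 = rng.standard_normal((n_modes, P)) * 0.01
b2 = np.zeros(n_modes)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())   # shift for numerical stability
    return e / e.sum()

def classify(frame):
    """frame: (2N, M) array of angular velocity and torque samples."""
    h = sigmoid(W1 @ frame.ravel() + b1)   # hidden layer 17 (sigmoid)
    p = softmax(W2 @ h + b2)               # output layer 19 (softmax)
    return int(np.argmax(p))               # index of the selected mode

frame = rng.standard_normal((2 * N, M))
mode = classify(frame)
```

The softmax ensures the three outputs form a probability distribution, so taking the argmax corresponds to the "only one neuron 20 active at a time" behaviour described above.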

Since each neuron 18 receives 2N·M input data, the weighting coefficient vector of each neuron 18 must have 2N·M components, so that the training process involves optimizing P·2N·M weighting factors. In order to prevent the neural network 11 from merely memorizing its training data and the desired recognition results associated with them, a large amount of training data is required, and the amount of computation required for satisfactory training grows much faster than linearly with the number of degrees of freedom N of the robot arm 1.

According to the preferred embodiment shown in fig. 4, the neural network 11 has a structure that reflects the structure of the robot arm 1: the neurons 18 of the hidden layer 17 are divided into N groups 21, each associated with one degree of freedom of the robot arm 1, or with a corresponding joint 6 of the robot arm 1. The neurons of a group 21 receive data only from the angular velocity sensor 9 and the torque sensor 10 of that one joint 6. This reduces the number of weighting coefficients in each neuron 18 by a factor of N. Since each group 21 supervises only one joint 6, the number of neurons required in one group 21 will be much less than the P neurons of fig. 3; in fact, since the complexity of the task to be solved by the neural network 11 is the same in both cases, the total number of hidden-layer neurons 18 may be the same in both embodiments. The amount of computation required for training is thus reduced, not only because of the smaller number of weight-vector components, but also because the amount of training data needed to prevent the network 11 from memorizing is smaller than in the case of fig. 3.
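The grouped connectivity of fig. 4 can be sketched as a block-diagonal mask on the first weight matrix: each group of hidden neurons is wired only to the velocity row and torque row of its own joint. Sizes and the frame layout (velocity rows first, then torque rows) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 50                 # joints, window length (assumed)
per_group = 4                # hidden neurons per joint group (assumed)
P = N * per_group
n_inputs = 2 * N * M

# Connectivity mask: group j only sees rows j (velocity) and N + j (torque)
# of the flattened 2N x M frame.
mask = np.zeros((P, n_inputs))
for j in range(N):
    rows = [j, N + j]
    cols = np.concatenate([np.arange(r * M, (r + 1) * M) for r in rows])
    mask[j * per_group:(j + 1) * per_group, cols] = 1.0

# Weights outside a neuron's group are forced to zero and stay zero.
W1 = rng.standard_normal((P, n_inputs)) * mask

# Each neuron now has only 2*M trainable components instead of 2*N*M,
# i.e. the parameter count per neuron is reduced by a factor of N,
# as the text argues.
print(int(mask[0].sum()))   # 100 non-zero connections per neuron
```

During training, the same mask would be applied after every gradient step so that the zeroed connections never become active.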

The structure of the network 11 shown in fig. 5 is modelled on the fact that the effect of a contact (intentional or accidental) on a given joint 6 depends on where in the robot arm 1 the contact occurs: any contact will generally have the most significant effect on the joints immediately adjacent to the link where the contact occurs. Thus, in a further preferred embodiment, the training data are not only labelled as corresponding to normal operation, accidental contact or intentional contact; in the latter two cases they also specify the link at which the contact occurred. Clearly, it is much easier for the neurons 18 of the hidden layer 17 associated with a given joint to identify a contact at a link in close proximity to that joint than at a link far away. Thus, in the embodiment of fig. 5, the network 11 is divided into N sub-networks 22, each associated with one joint or degree of freedom of the robot arm 1. Each sub-network 22 comprises a hidden layer 23 whose neurons 18 receive data only from the angular velocity sensor 9 and the torque sensor 10 of the associated joint, just like the neurons of the groups 21 of fig. 4. In contrast to those groups 21, however, the sub-networks 22 do not share a common output layer; instead, each sub-network has its own output layer 24.

Based on training data indicating the link at which contact occurred, these output layers 24 can be trained to distinguish between normal operation and conditions in which contact (intentional or accidental) occurs at a link adjacent to the associated joint. Since a contact generally has its most significant effect at a nearby joint, it is easier for the sub-network 22 associated with that joint to learn the characteristic features of such a contact than if distant contacts also had to be considered. A good quality of discrimination can therefore be achieved with a small amount of training data.

The number Q of neurons in the output layer 24 of a sub-network 22 may be equal to the number of operating modes of the controller 2; the outputs of these output layers 24 can then be used directly to control the operating mode of the controller 2. In addition, a global output layer 25 may be provided, whose neurons indicate whether the robot arm as a whole is operating normally, is in accidental contact, or is in intentional contact. If Q equals the number of operating modes, the neurons of the global output layer 25 may be implemented as simple logic gates, such as: a NOR gate for normal operation, which outputs true when none of the sub-networks 22 indicates a contact condition; and OR gates for accidental and intentional contact, respectively, each outputting true whenever one of the sub-networks 22 detects an accidental or intentional contact at its associated link.
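The logic-gate combination just described can be sketched directly: the "normal" output is the NOR of all per-joint contact indications, while "accidental" and "intentional" are ORs over the corresponding per-joint outputs. Modelling the sub-network outputs as booleans is an illustrative simplification.

```python
def global_output(per_joint):
    """per_joint: list of (accidental, intentional) booleans, one per
    sub-network 22. Returns the global output layer 25 as described above."""
    accidental = any(a for a, _ in per_joint)        # OR gate
    intentional = any(i for _, i in per_joint)       # OR gate
    normal = not (accidental or intentional)         # NOR of all contacts
    return {"normal": normal,
            "accidental": accidental,
            "intentional": intentional}

# Example: joint 2 reports an intentional contact; everything else is quiet.
print(global_output([(False, False), (False, True), (False, False)]))
# {'normal': False, 'accidental': False, 'intentional': True}
```

With real sub-network outputs, the booleans would be obtained by thresholding the softmax activations of each output layer 24.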

In practice, the number Q of neurons in the output layer 24 of a sub-network 22 should be slightly higher than the number of operating modes, in order to enable each sub-network 22 to provide more information to the global output layer 25: for example, not only whether contact has occurred at an immediately adjacent link, but also whether there is a suspicion that contact may have occurred at a distant link, thereby enabling the global output layer 25 to draw a more reliable conclusion.

Reference numerals

1 robot arm

2 controller

3 support

4 end effector

5 link

6 joint

7 axis

8 axis

9 angular velocity sensor

10 torque sensor

11 neural network

12 FIFO memory device

13 screw

14 workpiece

15 hand

16 pixels

17 hidden layer

18 neuron

19 output layer

20 neuron (output layer)

21 group

22 sub-network

23 hidden layer

24 output layer

25 global output layer.
