Robot and control system capable of reducing misoperation caused by time difference of network

This technology, a robot and control system capable of reducing misoperation caused by the time difference of a network, was designed and created by 佐佐木史纮 on 2020-02-28. Its main content is as follows: the robot is configured to move in accordance with an operation instruction made by an operator via a network. The robot includes a moving image capturing unit, a receiving unit, a movement destination prediction unit, and an autonomous control unit. The moving image capturing unit is configured to capture an environment around the robot as a moving image. The receiving unit is configured to receive the operation instruction. The movement destination prediction unit is configured to predict a movement destination of the robot based on the operation instruction received by the receiving unit. The autonomous control unit is configured to autonomously correct a movement to the movement destination, made in accordance with the operation instruction, based on information on the environment obtained from the moving image at the time when the operation instruction is received.

1. A robot configured to move in accordance with an operation instruction made by an operator via a network, comprising:

a moving image capturing unit configured to capture an environment around the robot as a moving image;

a receiving unit configured to receive the operation instruction;

a movement destination prediction unit configured to predict a movement destination of the robot based on the operation instruction received by the receiving unit; and

an autonomous control unit configured to autonomously correct a movement to the movement destination, made in accordance with the operation instruction, based on information on an environment obtained from the moving image at a time when the operation instruction is received.

2. The robot according to claim 1, wherein

the movement destination is selected from a plurality of candidates given in advance.

3. The robot according to claim 1 or 2, characterized by comprising a delay measurement unit configured to measure a time difference between a time at which the moving image transmitted to the operator is captured and a time at which the operation instruction is received, wherein

the autonomous control unit is configured to change an amount of the correction in accordance with a magnitude of the time difference.

4. The robot according to any one of claims 1 to 3, wherein

the autonomous control unit is configured to perform machine learning based on a data set collected in advance, the data set including the moving image, the operation instruction, and the movement destination as a group.

5. The robot according to any one of claims 1 to 4, wherein

the autonomous control unit is configured to continue the movement when no operation instruction is received within a predetermined amount of time during the movement to the movement destination.

6. The robot according to claim 5, wherein

the robot is configured to stop when no operation instruction is received for the predetermined amount of time or more.

7. The robot according to any one of claims 1 to 6, wherein

the movement destination prediction unit is configured to distinguish, in the moving image, between a path area in which the robot can move and an outside area in which the robot cannot move.

8. A control system, comprising:

the robot according to any one of claims 1 to 7; and

a display unit configured to display the moving image to the operator, wherein

the moving image is displayed together with a predicted route of the movement corrected by the autonomous control unit.

Technical Field

The invention relates to a robot and a control system.

Background

Robots that are remotely operated via a network, referred to as telepresence robots, are known in the art. Because a network intervenes in such robots, delays associated with the transmission and reception of data are inevitable. When the operator performs remote control, the moving image currently being captured by the robot may, because of this delay, differ from the moving image that the operator is viewing as the cue for giving a motion instruction, which can cause an erroneous operation.

Robots that operate automatically based on autonomous decisions made by the robot are also known in the art.

However, such an autonomous robot has little freedom to accept operations from an operator, which impairs the defining characteristic of a telepresence robot, namely performing actions in response to the operator controlling the robot at his or her own will so as to reach any remote location the operator wants to go.

Disclosure of Invention

Technical problem

The present invention has been made in view of the above problems, and an object of the present invention is to provide a robot capable of reducing erroneous operations caused by the time difference of a network while maintaining operability.

Solution to the problem

According to an aspect of the present invention, a robot is configured to move in accordance with an operation instruction made by an operator via a network. The robot includes a moving image capturing unit, a receiving unit, a movement destination prediction unit, and an autonomous control unit. The moving image capturing unit is configured to capture an environment around the robot as a moving image. The receiving unit is configured to receive the operation instruction. The movement destination prediction unit is configured to predict a movement destination of the robot based on the operation instruction received by the receiving unit. The autonomous control unit is configured to autonomously correct a movement to the movement destination, made in accordance with the operation instruction, based on information on the environment obtained from the moving image at the time when the operation instruction is received.

Advantageous effects of the invention

According to an aspect of the present invention, it is possible to reduce erroneous operations caused by the time difference of a network while maintaining operability.

Drawings

Fig. 1 is a schematic diagram showing a general configuration of a control system for a telepresence robot according to an embodiment of the present invention.

Fig. 2 is a schematic block diagram showing a configuration of the movement instruction unit shown in fig. 1.

Fig. 3 is a block diagram showing an example of the configuration of the control system shown in fig. 1.

Fig. 4 is a schematic diagram showing an example of a delay relationship among a network delay, a moving image acquired by a robot, and an operation instruction.

Fig. 5 is a diagram showing an example of a case where autonomous control according to the present embodiment is added to the delay relationship shown in fig. 4.

Fig. 6 is a schematic diagram showing a first example of the operation of the autonomous control unit according to the present invention.

Fig. 7 is a schematic view showing an example of a control operation of the control system according to the present invention.

Fig. 8 is a schematic diagram showing a second example of the operation of the autonomous control unit according to the present invention.

Fig. 9 is a schematic diagram showing a third example of the operation of the autonomous control unit according to the present invention.

Fig. 10 is a schematic diagram showing a fourth example of the operation of the autonomous control unit according to the present invention.

Detailed Description

As an example of an embodiment of the present invention, fig. 1 shows a conceptual diagram of the general configuration of a control system 100 for a telepresence robot TR, a remotely controlled robot operated by an operator P using an operation unit 10 via a network 9.

The telepresence robot TR includes: a camera 20, which serves as a moving image capturing unit; a movement unit 21 equipped with wheels or endless belts so as to be movable; a receiving unit 22 for receiving an operation instruction from the operation unit 10; and a control unit 30 for controlling each component of the telepresence robot TR in accordance with the operation instruction received by the receiving unit 22.

The operation unit 10 includes: a moving image display unit 11 for displaying the image or video that the operator P views in order to perform an operation; and a movement instruction unit 12 including a plurality of buttons for indicating the movement direction, as shown in fig. 2.

The operation unit 10 transmits data to and receives data from the telepresence robot TR by communication via the wireless or wired network 9.

Specifically, a moving image captured by the telepresence robot TR with the camera 20 is transmitted to the moving image display unit 11, and an operation instruction made by the operator P using the movement instruction unit 12 while viewing the moving image display unit 11 is transmitted to the receiving unit 22 of the telepresence robot TR.

The movement instruction unit 12 functions as a movement direction instruction unit including, for example, four buttons: a forward movement instruction button 12a, a right turn instruction button 12b, a backward movement instruction button 12c, and a left turn instruction button 12d.
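For illustration only, the following is a minimal Python sketch of how such buttons might map to raw motion commands on the operation unit side. The names Button and MotionCommand and the speed values are hypothetical assumptions, not taken from this document.

```python
from dataclasses import dataclass
from enum import Enum


class Button(Enum):
    FORWARD = "12a"   # forward movement instruction button
    RIGHT = "12b"     # right turn instruction button
    BACKWARD = "12c"  # backward movement instruction button
    LEFT = "12d"      # left turn instruction button


@dataclass
class MotionCommand:
    linear: float   # m/s, positive = forward
    angular: float  # rad/s, positive = counter-clockwise (left)


# Hypothetical mapping from a pressed button to a raw motion command,
# before any autonomous correction is applied on the robot side.
BUTTON_TO_COMMAND = {
    Button.FORWARD: MotionCommand(linear=0.5, angular=0.0),
    Button.BACKWARD: MotionCommand(linear=-0.3, angular=0.0),
    Button.LEFT: MotionCommand(linear=0.0, angular=0.6),
    Button.RIGHT: MotionCommand(linear=0.0, angular=-0.6),
}

if __name__ == "__main__":
    print(BUTTON_TO_COMMAND[Button.FORWARD])  # MotionCommand(linear=0.5, angular=0.0)
```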

Although the receiving unit 22 is shown separately from the control unit 30 in the present embodiment, the receiving unit 22 may instead be provided as a function within the control unit 30; the present invention is not limited to either configuration.

As shown in fig. 3, the control unit 30 includes: a movement destination prediction unit 31 for predicting a movement destination of the telepresence robot TR based on the operation instruction received by the receiving unit 22; and an autonomous control unit 32 for autonomously correcting the movement to the movement destination, made in accordance with the operation instruction, based on information on the surrounding environment.

The control unit 30 further includes a delay measurement unit 33 for measuring a delay time td, which is the amount of time between the moment the camera 20 acquires a moving image and the moment an operation instruction made by the operator P based on that moving image is received by the receiving unit 22, as will be described later.

Based on the operation instruction received from the receiving unit 22, the movement destination prediction unit 31 determines in which direction to move (forward, backward, leftward, or rightward) and predicts an assumed movement destination Q, as will be described later.

The autonomous control unit 32 controls the movement of the movement unit 21 toward the movement destination Q based on the movement destination Q predicted by the movement destination prediction unit 31 and the information on the surrounding environment obtained by the camera 20.

The delay measurement unit 33 measures the delay time, for example, as follows: when a moving image frame F1, described later, is transmitted to the operation unit 10, a time-stamped moving image is transmitted, and the time difference between the moment of transmission and the moment the time-stamped moving image is received by the operation unit 10 is measured. Note that this way of measuring the delay time is given merely as an example. For instance, data may be sent to and received from the network 9, and the delay time may be derived from the amount of time the exchange requires. Alternatively, any known method capable of measuring delay on a network may be employed.
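As a sketch of one possible realization, assume that the operation unit echoes the frame's timestamp back together with the operation instruction; the robot can then compute td on its own clock, with no clock synchronization needed. The class and method names below are hypothetical, not the patent's specification.

```python
import time


class DelayMeasurementUnit:
    """Sketch of the delay measurement unit 33: the robot stamps each
    outgoing frame with its send time; the operation unit echoes that
    stamp back with the operation instruction, so td can be computed
    entirely on the robot's own clock."""

    def stamp_frame(self, frame_id: int) -> dict:
        # Attach the transmission time to the outgoing frame.
        return {"frame_id": frame_id, "sent_at": time.monotonic()}

    def on_instruction_received(self, echoed_stamp: dict) -> float:
        # td = t1 (frame downlink) + t2 (operator decision) + t3 (uplink)
        return time.monotonic() - echoed_stamp["sent_at"]


if __name__ == "__main__":
    unit = DelayMeasurementUnit()
    stamp = unit.stamp_frame(frame_id=1)      # frame F1 goes out
    time.sleep(0.35)                          # simulated t1 + t2 + t3
    td = unit.on_instruction_received(stamp)  # instruction A arrives
    print(f"measured delay td = {td:.2f} s")  # ~0.35 s, within the 0.3-0.4 s range cited below
```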

When such a control system 100 is used, the amount of time required for transmission and reception over the network 9 causes a problem: as shown in fig. 4, a delay corresponding to the moving image transmission and reception time t1 occurs between the moving image frame acquired by the camera 20 and the moving image frame displayed on the moving image display unit 11. Further, the time at which the receiving unit 22 of the telepresence robot TR receives the operation instruction issued by the operator P lags by a delay time td corresponding to the sum of the moving image transmission and reception time t1, the operation determination time t2, and the operation instruction transmission and reception time t3, where the operation determination time t2 is the time between the moment the operator P views the moving image and the moment the operator P actually issues the operation instruction using the movement instruction unit 12, and the operation instruction transmission and reception time t3 is the amount of time it takes the operation instruction to travel from the operation unit 10 to the receiving unit 22.

That is, when the operator P remotely operates the telepresence robot TR, a delay greater than or equal to the delay time td inevitably occurs between the moment a moving image frame is captured by the camera 20 and the moment the corresponding operation instruction is actually received and the motion is started, as shown in fig. 4. It is well known that, on a typical current network 9, a delay time td of approximately 0.3 to 0.4 seconds occurs.

In other words, when the operator P transmits an operation instruction while viewing the frame F1 displayed on the moving image display unit 11 and the telepresence robot TR then actually operates, the situation may well differ from the frame F13 representing the actual surroundings acquired by the camera 20 at that moment. Therefore, an operation instruction unsuitable for the environment around the telepresence robot TR in the frame F13 may be issued.

When the telepresence robot TR is operated remotely, it is impossible to reduce such a delay time td to zero. Further, the delay time td can cause unexpected accidents or failures in operating the telepresence robot TR, such as overshooting the path to be taken, or colliding with a person or object that suddenly appears on the route as an obstacle.

To address such a problem, a method in which the robot itself determines the route, without being operated, can be conceived. However, with autonomously controlled movement by simple programming, the operator P cannot operate the robot as needed, so it becomes difficult to correctly perform tasks such as remotely patrolling a desired site.

To solve this problem, the present invention includes the autonomous control unit 32 for autonomously correcting the movement to the movement destination Q, made in accordance with an operation instruction, based on the moving image acquired by the camera 20.

The operation of the autonomous control unit 32 will be described in detail below with reference to figs. 5 to 7.

First, it is assumed that a delay equal to the delay time td shown in fig. 4 elapses between the capture of a moving image and the receipt of the operation instruction that the operator P issues after viewing that moving image on the moving image display unit 11.

Based on the moving image frame from the camera 20 at the time the receiving unit 22 receives the operation instruction A (specifically, the frame F13 in fig. 5), the autonomous control unit 32 instructs the movement unit 21 to perform the autonomous operation 13A, obtained by correcting the operation instruction A on the basis of the frame F13. Further, when the operation instructions B to M are subsequently received, the autonomous operations 14B to 25M are sequentially performed based on the moving image frames (F14 to F25) from the camera 20 at the times the respective operation instructions are received, as shown in fig. 5.

Compared with directly following the movement control A, which is based only on the moving image frame F1, performing the autonomous operation 13A, in which the operation instruction A issued on the basis of the frame F1 is corrected using the frame F13, achieves high-precision control that takes into account the surrounding environment captured in the latest moving image frame F13. Furthermore, compared with simply performing the movement control A, control that accounts for the delay becomes possible, because the correction uses a moving image frame obtained after the delay time td has elapsed. Note that the correction applied to the movement to the movement destination Q may vary according to, for example, the delay time td. Alternatively, if the delay time td can be regarded as sufficiently small, the correction value may be set to 0, allowing the instruction made by the operator P to be performed directly by the telepresence robot TR.
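As an illustration of a correction whose magnitude depends on td (in the spirit of claim 3), the following minimal sketch blends the operator's commanded heading toward the path heading visible in the latest frame. The linear blending rule and the 0.4 s normalization constant are illustrative assumptions, not the patent's method.

```python
from dataclasses import dataclass


@dataclass
class Observation:
    path_heading: float  # rad, heading of the path 41 seen in the latest frame
    delay_td: float      # s, measured by the delay measurement unit 33


def correct_command(commanded_heading: float, obs: Observation,
                    full_correction_at: float = 0.4) -> float:
    """Blend the operator's commanded heading toward the path heading
    visible in the latest frame. The amount of correction grows with the
    measured delay td; with td near 0 the operator's instruction passes
    through unchanged, as described above."""
    gain = min(obs.delay_td / full_correction_at, 1.0)
    return (1.0 - gain) * commanded_heading + gain * obs.path_heading


if __name__ == "__main__":
    # Operator pressed "forward" (heading 0) from an old frame F1, but in the
    # current frame F13 the path curves diagonally forward-left (~0.3 rad).
    print(correct_command(0.0, Observation(path_heading=0.3, delay_td=0.35)))  # ~0.26
    print(correct_command(0.0, Observation(path_heading=0.3, delay_td=0.0)))   # 0.0
```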

Specifically, the operation in a case where the operator P presses the forward movement instruction button 12a of the movement instruction unit 12 (instructing forward movement) when the moving image frame F1 shown in fig. 6(a) is acquired will be described.

When the camera 20 acquires the moving image frame F1, the telepresence robot TR transmits the moving image frame F1 to the moving image display unit 11 (step S101).

Based on the received moving image frame F1, the operator P presses the forward movement instruction button 12a of the movement instruction unit 12 in order to move the robot straight forward (step S102).

The receiving unit 22 receives such pressing of the forward movement instruction button 12a as an operation instruction (step S103).

When such an operation instruction is received by the receiving unit 22, the movement destination prediction unit 31 predicts, from the button pressed on the movement instruction unit 12, the direction in which the operator P wants to move the telepresence robot TR (step S104). In the moving image frame F13 acquired by the camera 20 when the operation instruction is received, as shown in fig. 6(b), the movement destination prediction unit 31 predicts the movement destination Q ahead along the path 41, based on the travelable path 41 and the out-of-path area 42, an untravelable outside area, both of which are set or distinguished in advance by, for example, image recognition (step S105).

At this time, although the forward movement instruction button 12a has been pressed, the path 41 actually curves gradually to the front left, as shown in fig. 6(b). Therefore, based on the moving image frame F13, the autonomous control unit 32 predicts that the movement in the "forward" direction instructed by the forward movement instruction button 12a is actually a movement to the movement destination Q in the diagonally forward-left direction.

Once the movement destination Q is predicted, the autonomous control unit 32 controls the movement unit 21 through the autonomous operation 13A in accordance with the curvature of the path 41 and moves the robot to the movement destination Q (step S106). In this way, the autonomous control unit 32 performs the movement to the movement destination Q with a correction based on the moving image frame F13 acquired when the receiving unit 22 receives the operation instruction.
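The following is a minimal sketch of steps S104 to S105 under the assumption that image recognition yields a binary mask separating the path 41 from the out-of-path area 42. The synthetic mask and the row/column convention (smaller rows are farther away) are illustrative choices.

```python
import numpy as np


def predict_destination(path_mask: np.ndarray):
    """Given a binary mask marking the travelable path 41 (True) versus the
    out-of-path area 42 (False), pick as movement destination Q the center
    of the path in the most distant image row that still contains path
    pixels. In practice the mask would come from image recognition on the
    frame F13; here it is synthetic."""
    rows_with_path = np.flatnonzero(path_mask.any(axis=1))
    if rows_with_path.size == 0:
        return None  # no travelable path visible
    top = rows_with_path[0]              # farthest visible row of the path
    cols = np.flatnonzero(path_mask[top])
    return (int(top), int(cols.mean()))  # Q as (row, column) in the image


if __name__ == "__main__":
    mask = np.zeros((8, 8), dtype=bool)
    # A path that curves toward the upper left, as in fig. 6(b).
    mask[7, 3:6] = mask[6, 3:6] = mask[5, 2:5] = mask[4, 1:4] = True
    q = predict_destination(mask)
    print("movement destination Q:", q)  # lies diagonally forward-left
    center = mask.shape[1] // 2
    print("steer left" if q[1] < center else "steer right")
```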

In the autonomous operation 13A of the autonomous control unit 32 at this time, if the telepresence robot TR receives no movement instruction (no button is pressed, and no new operation instruction is issued), the telepresence robot TR continues the movement to the predicted movement destination Q (step S107).

If no new operation instruction is issued, it is further determined whether the state in which no operation instruction is issued has continued for a predetermined amount of time or more (step S108).

In the present embodiment, if no operation instruction is received within a predetermined amount of time in the process of moving to the movement destination Q, the autonomous control unit 32 continues moving to the movement destination Q.

In this way, even when no movement instruction is received, the telepresence robot TR continues to move toward the movement destination Q obtained before the predetermined amount of time. By continuing the movement even through an unintentional interruption of the movement instruction (e.g., a momentary interruption of the network 9), the operation can be continued without failure when the network 9 is restored.

If no operation instruction is received for the predetermined amount of time or more (yes in step S108), the movement of the telepresence robot TR may be stopped (step S109).

By controlling the telepresence robot TR to stop when no operation instruction has been received for the predetermined amount of time or more, as just described, it is possible to prevent accidents such as collisions when the network 9 suffers a prolonged disconnection rather than a momentary interruption.
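A minimal sketch of the continue/stop decision in steps S107 to S109 follows. The 2-second default threshold is an illustrative assumption; the patent leaves the predetermined amount of time unspecified.

```python
import time


class InstructionWatchdog:
    """Keep moving toward Q through a brief silence (e.g. a momentary
    network drop), but stop once no operation instruction has arrived
    for a predetermined amount of time."""

    def __init__(self, timeout_s: float = 2.0):
        self.timeout_s = timeout_s
        self.last_instruction_at = time.monotonic()

    def on_instruction(self) -> None:
        # Called whenever the receiving unit 22 gets an operation instruction.
        self.last_instruction_at = time.monotonic()

    def decide(self) -> str:
        silence = time.monotonic() - self.last_instruction_at
        # S108: has the silence lasted the predetermined time or more?
        return "stop" if silence >= self.timeout_s else "continue_to_Q"


if __name__ == "__main__":
    dog = InstructionWatchdog(timeout_s=0.2)
    print(dog.decide())  # "continue_to_Q": movement carries on through silence
    time.sleep(0.25)     # simulated prolonged disconnection
    print(dog.decide())  # "stop": prevents collisions while blind
```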

Further, in order to allow the operator P to accurately recognize the movement direction while the movement according to the autonomous operation 13A in step S106 is performed, the movement destination Q and the predicted route to be followed by the autonomous operation 13A may be displayed, as indicated by the dotted line in fig. 6(b), when the moving image frame F13 is transmitted to the moving image display unit 11.

Displaying the predicted movement direction in this manner allows the operator P to operate while checking whether the movement operation he or she has made is being understood correctly by the telepresence robot TR, which further improves the accuracy of operation instructions.

In the present embodiment, the movement destination Q is predicted simply based on the path 41. However, as the method of predicting the movement destination Q, it is also possible, for example, to select one of a plurality of destination candidates that are described in map information and defined in advance, by linking indoor/outdoor position measurement information at the remote site to the map information. Alternatively, learning data consisting of sets of a "captured moving image" as shown in fig. 6(a) or 6(b), the "movement instruction" that was pressed, and the "position (destination)" to which the operator P actually wanted to move may be collected in advance in the telepresence robot TR, and the autonomous control unit 32 may be made to learn such data by machine learning.

With this machine learning method, in response to the input of a "movement instruction" and the environmental information obtained from the "moving image", the autonomous control unit 32 can output the operation of the movement unit 21 considered optimal among the various patterns contained in the learning data. In addition to preparing learning data in advance, such learning can also be performed in the actual environment by repeatedly operating the telepresence robot TR, using the moving image data acquired by the camera 20 and the operation instructions as the environmental information.
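As a self-contained illustration of learning from such (moving image, movement instruction, destination) groups, the following sketch uses a 1-nearest-neighbor lookup. A real implementation would more plausibly use a neural network; the class, the toy images, and the destinations are all hypothetical.

```python
import numpy as np


class DestinationLearner:
    """Each training sample groups a (downsampled) moving image, the pressed
    movement instruction, and the destination the operator actually wanted.
    1-NN is used here only to keep the example self-contained."""

    INSTRUCTIONS = ["forward", "backward", "left", "right"]

    def __init__(self):
        self.features, self.destinations = [], []

    def _featurize(self, image: np.ndarray, instruction: str) -> np.ndarray:
        onehot = np.eye(len(self.INSTRUCTIONS))[self.INSTRUCTIONS.index(instruction)]
        return np.concatenate([image.ravel(), onehot])

    def add_sample(self, image, instruction, destination) -> None:
        self.features.append(self._featurize(image, instruction))
        self.destinations.append(destination)

    def predict(self, image, instruction):
        query = self._featurize(image, instruction)
        dists = [np.linalg.norm(query - f) for f in self.features]
        return self.destinations[int(np.argmin(dists))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    learner = DestinationLearner()
    straight, curved = np.zeros((4, 4)), np.ones((4, 4))
    learner.add_sample(straight, "forward", (0, 2))  # straight path -> Q dead ahead
    learner.add_sample(curved, "forward", (0, 0))    # curved path -> Q forward-left
    # A noisy view of the curved path still maps to the forward-left destination.
    print(learner.predict(curved + 0.1 * rng.standard_normal((4, 4)), "forward"))
```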

If the receiving unit 22 receives a different operation instruction from the operation unit 10 in step S107, the movement destination prediction unit 31 and the autonomous control unit 32 repeat the operations from step S101 to step S108 in accordance with that operation instruction. The telepresence robot TR thus continues to be operated.

If no new operation instruction is received within the predetermined time or more, the telepresence robot TR stops, as shown in step S109.

Examples of other autonomous operations of the autonomous control unit 32 are described with reference to figs. 8 to 10, which show schematic diagrams of moving images and operation instructions. Note that the autonomous operation 13A in the present embodiment is not limited to these operations; they are given merely as examples.

Fig. 8 shows a schematic diagram of a case where a person 43 is on the path 41 as an obstacle to be avoided.

In the moving image frame F13 shown in fig. 8, if the operator P presses the forward movement instruction button 12a, the movement destination prediction unit 31 predicts that the area short of the person 43 in the moving image frame F13 is the movement destination Q. Therefore, even if the forward movement instruction button 12a is continuously pressed, the telepresence robot TR stops in front of the person 43.

As described above, by recognizing the person 43 in the moving image frame F13, which represents the moving image at the time the operation instruction is received, the autonomous control unit 32 stops the movement and performs an autonomous operation to avoid a collision whenever the person 43 is present in the movement direction of the telepresence robot TR. Such an autonomous operation may also be applied to making a stop determination when an obstacle other than the person 43 is present on the path 41.
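A minimal sketch of this behavior follows: if an obstacle is detected on the path in the frame F13, the movement destination Q is pulled back so the robot stops short of it, even while the forward button is held. Image rows are used as a stand-in for distance (smaller rows are farther away), and the one-row safety margin is an illustrative assumption.

```python
def clamp_destination(q_row: int, obstacle_row, margin_rows: int = 1) -> int:
    """If a person or other obstacle is detected on the path in frame F13,
    pull the movement destination Q back so that the robot stops in front
    of it. With no obstacle, the predicted Q is kept unchanged."""
    if obstacle_row is None:
        return q_row  # path clear: keep the predicted Q
    # Larger rows are nearer the robot, so stopping short of the obstacle
    # means choosing a row at least margin_rows below (nearer than) it.
    return max(q_row, obstacle_row + margin_rows)


if __name__ == "__main__":
    print(clamp_destination(q_row=2, obstacle_row=None))  # 2: unchanged
    print(clamp_destination(q_row=2, obstacle_row=4))     # 5: short of the person 43
```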

Fig. 9 shows a schematic diagram of a case where the movement direction is unclear when the forward movement instruction button 12a is pressed. In the moving image frame F13 shown in fig. 9, the path 41 ends ahead, and whether to turn left or right at its end is unclear. For such a moving image frame F13, the movement destination prediction unit 31 predicts the area near the far end of the path 41 as the movement destination Q, and the autonomous control unit 32 instructs the movement unit 21 to move to the movement destination Q and then stop.

As just described, when the preferable way to move is unclear, for example whether to turn left or right at a T-intersection, the autonomous control unit 32 moves the telepresence robot TR to the movement destination Q and then stops. In this case, the autonomous control unit 32 may be configured to display, when the moving image frame F13 is transmitted to the operator P, a message indicating that an instruction for a left or right turn is required, wait for a movement instruction from the operator P, and start the left or right turn control when the receiving unit 22 receives the movement instruction B.

Further, when the operation instruction B is issued, the autonomous control unit 32 passes to the movement unit 21 the autonomous operation 26B, which is obtained by referring to the moving image frame F26 acquired by the camera 20 at the time the operation instruction B is received by the receiving unit 22.

Fig. 10 shows a schematic diagram of an example of autonomous movement control in a case where the telepresence robot TR overshoots the position 44 where a right turn can be made.

Fig. 10(a) shows the moving image frame F1. Consider a case in which the operator P notices the position 44 where a right turn can be made while viewing the moving image frame F1 displayed on the moving image display unit 11, and presses the right turn instruction button 12b of the movement instruction unit 12 after the telepresence robot TR has already passed the position 44.

As described above, in the moving image frame F13 at the time the receiving unit 22 receives the right turn instruction, the telepresence robot TR has already passed the position 44 where a right turn can be made, as shown in fig. 10(b), because of the delay time td of the network 9.

From the moving image frame F1 shown in fig. 10(a), the autonomous control unit 32 recognizes and stores in advance the position on the path 41 where a right turn can be made.

When the right turn instruction is received by the receiving unit 22, the movement destination prediction unit 31 sets an assumed movement destination Q in the rightward direction.

If the right turn instruction is received within the delay time td after the autonomous control unit 32 has stored the existence of the position 44 where a right turn can be made, the autonomous control unit 32 determines that the right turn instruction means a right turn at the position 44, and corrects the movement destination Q so that it is placed at the position 44.

More specifically, based on two pieces of environmental information known from the moving image frame F13 shown in fig. 10(b), namely the existence of a position where a right turn can be made and the fact that the robot has passed it, the autonomous control unit 32 corrects the movement destination Q to the position 44 on the path 41 and then selects the autonomous operation 13A, which moves the telepresence robot TR backward by the distance corresponding to the overshoot, as shown in fig. 10(c). After the backward movement is completed, the autonomous control unit 32 executes the autonomous control 26N to make the right turn as instructed.
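A minimal sketch of this overshoot recovery follows. The robot remembers where a right turn was possible; if a right-turn instruction arrives just after that spot was passed (plausibly because of the delay td), it backs up to the spot before turning. The 1 m window standing in for "distance traveled during td" is an illustrative assumption.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TurnSpot:
    position_m: float  # distance along the path where a right turn is possible


def plan_right_turn(robot_position_m: float, remembered_spots: List[TurnSpot],
                    within_m: float = 1.0) -> List[str]:
    """If a right-turn instruction arrives shortly after the robot passed a
    remembered turn spot (such as position 44), back up to the spot and
    then turn; otherwise, turn where the robot is."""
    overshot = [s for s in remembered_spots
                if 0.0 < robot_position_m - s.position_m <= within_m]
    if not overshot:
        return ["turn_right_here"]  # no recently passed spot: turn as instructed
    spot = max(overshot, key=lambda s: s.position_m)  # most recently passed spot
    backup = robot_position_m - spot.position_m
    return [f"move_backward {backup:.2f} m", "turn_right"]  # cf. 13A, then 26N


if __name__ == "__main__":
    spots = [TurnSpot(position_m=5.0)]  # position 44, stored from frame F1
    print(plan_right_turn(robot_position_m=5.4, remembered_spots=spots))
    # ['move_backward 0.40 m', 'turn_right']
```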

In this way, the autonomous control unit 32 autonomously corrects the movement to the movement destination Q, made in accordance with the operation instruction, based on the environmental information obtained from the moving image frame F13.

With this configuration, it is possible to reduce erroneous operations due to the time difference of the network while maintaining operability for the operator P.

Although preferred embodiments of the present invention have been described above, the present invention is not limited to such specific embodiments. Unless otherwise indicated in the foregoing description, various changes and modifications may be made without departing from the spirit of the invention as set forth in the claims.

In the present embodiment, for example, a robot including the movement unit 21 driven by wheels or endless belts has been described as the telepresence robot TR. However, the present invention is applicable to robots having other drive mechanisms.

The present embodiment has described a case in which control by machine learning is used as the method for correcting the movement destination Q by the autonomous control unit 32. However, any method may be employed as long as it makes the correction in consideration of the surrounding environment at the time the telepresence robot TR receives the movement instruction at the receiving unit 22; for example, a predetermined correction value for the delay time td may simply be added. Although the movement destination prediction unit 31 and the autonomous control unit 32 in the present embodiment have been described as components of the control unit each having a separate function, the present invention is not limited to such a configuration. Alternatively, the autonomous control unit may have the function of predicting the movement destination.

The advantageous effects described in the embodiments of the present invention are merely examples of the most preferable advantageous effects obtained from the present invention; the advantageous effects of the present invention are not limited to those described in the embodiments.

REFERENCE SIGNS LIST

10 operation unit

11 moving image display unit

12 movement instruction unit

20 moving image capturing unit (camera)

21 movement unit

22 receiving unit

30 control unit

31 movement destination prediction unit

32 autonomous control unit

100 control system

TR telepresence robot

P operator

Q movement destination

Reference list

Patent document

PTL 1: Japanese Laid-Open Patent Publication No. 2017-102705 A

PTL 2: Japanese Patent No. 5503052
