Robot grabbing method and system based on three-dimensional data analysis

Document No.: 1869451 | Publication date: 2021-11-23

Reading note: This technology, "A robot grabbing method and system based on three-dimensional data analysis," was designed and created by Li Ruifeng, Zhang Kao and Xu Tianping on 2021-08-30. Its main content is as follows: The invention provides a robot grabbing method and system based on three-dimensional data analysis, wherein the method comprises: constructing a recognition network model and a coordinate conversion model, placing them into the core processor of the intelligent grabbing robot, and performing performance training on the recognition network model; reaching the area where the target object is located according to path planning; triggering the information acquisition equipment to acquire image data of the target object; performing data preprocessing on the acquired image data and acquiring three-dimensional data information of the target object; inputting the three-dimensional data information acquired in step four into the recognition network model to extract the relevant information of the target object; converting the position information extracted in step five by using the coordinate conversion model; and generating a grabbing instruction according to the converted coordinates, and completing grabbing according to the grabbing instruction. By acquiring the three-dimensional information of the object and having the recognition network model read and analyze that information, the intelligent grabbing robot can efficiently control and complete the grabbing task.

1. A robot grabbing method based on three-dimensional data analysis is characterized by comprising the following steps:

step one, constructing a recognition network model and a coordinate conversion model, placing the recognition network model and the coordinate conversion model into a core processor of the intelligent grabbing robot, and performing performance training on the recognition network model;

step two, reaching the area where the target object is located according to path planning;

step three, triggering the information acquisition equipment to acquire image data of the target object;

step four, carrying out data preprocessing on the acquired image data and acquiring three-dimensional data information of the target object;

step five, inputting the three-dimensional data information acquired in step four into the recognition network model to extract the relevant information of the target object; the relevant information comprises position information and category information;

step six, converting the position information extracted in the step five by using a coordinate conversion model;

and step seven, generating a grabbing instruction according to the converted coordinates, and completing grabbing according to the grabbing instruction.

2. The robot grabbing method based on three-dimensional data analysis of claim 1,

when data preprocessing is carried out on the collected image data, a weighted average method is adopted to gray the color image, and after graying, the grayscale image data is converted into black-and-white binary image data using a value of 0 or 255;

wherein the graying expression is:

Gray(i, j) = w_R · R(i, j) + w_G · G(i, j) + w_B · B(i, j)

in the formula, Gray(i, j) represents the image data after graying; w_R represents the weight of the R component in the color image; w_G represents the weight of the G component in the color image; w_B represents the weight of the B component in the color image.

3. The robot grabbing method based on three-dimensional data analysis of claim 1,

step six, when the position information is converted by using the coordinate conversion model, the information acquisition equipment is either mounted at the end of the intelligent robot mechanical arm or fixed relative to the ground;

establishing a mechanical arm base coordinate system {B}, a mechanical arm end coordinate system {E} and a coordinate system {C} in which the information acquisition equipment is located; for any point a(x, y, z) in the coordinate system {C} of the information acquisition equipment, the corresponding point b(X, Y, Z) in the mechanical arm end coordinate system satisfies the expression:

[X, Y, Z]^T = R · [x, y, z]^T + t

or, in homogeneous form:

[X, Y, Z, 1]^T = T_ce · [x, y, z, 1]^T

the matrix of the coordinate transformation therefore satisfies:

T_cb = T_eb · T_ce

in the formulas, T_cb denotes the transformation matrix from the coordinate system {C} of the information acquisition equipment to the mechanical arm base coordinate system {B}; T_ce denotes the transformation matrix from {C} to the mechanical arm end coordinate system {E}; T_eb denotes the transformation matrix from the mechanical arm end coordinate system {E} to the base coordinate system {B}; R and t denote the rotation matrix and translation vector contained in T_ce.

4. The robot grasping method based on three-dimensional data analysis according to claim 3,

when the information acquisition equipment is located at the end of the intelligent robot mechanical arm, its position is fixed relative to the end of the industrial robot, and the transformation matrix T_ce is calibrated according to the coordinate transformation relation;

the T_ce denotes the transformation matrix from the coordinate system {C} of the information acquisition equipment to the mechanical arm end coordinate system {E}.

5. The robot grasping method based on three-dimensional data analysis according to claim 3,

when the information acquisition equipment is installed independently of the industrial robot and fixed relative to the ground, the transformation matrix T_cb is calibrated through the coordinate transformation relation;

the T_cb denotes the transformation matrix from the coordinate system {C} of the information acquisition equipment to the mechanical arm base coordinate system {B}.

6. The robot grabbing method based on three-dimensional data analysis of claim 1,

a flexible attaching mechanical claw is further used in the grabbing process;

the flexible attaching mechanical claw is a bionic gripper constructed according to the characteristic that a fin ray bends towards the direction of the applied force;

during grabbing, the flexible attaching mechanical claw passively adapts to the shape of the target object according to the reaction force exerted by the target object.

7. The robot grabbing method based on three-dimensional data analysis of claim 6,

the flexible attaching mechanical claw adopts a stepping motor driven by a differential driver, and the driver adopts a single-ended common-cathode wiring method.

8. The robot grabbing method based on three-dimensional data analysis of claim 1,

the recognition network model comprises an input layer, a sampling layer, a convolution layer, a pooling layer and a fully connected layer; the pooling layer immediately follows the convolution layer and adopts down-sampling to further compress the feature map and remove redundancy;

the data is processed in the pooling layer by taking the maximum value within each region of the feature map;

the activation function of the fully connected layer adopts the Leaky ReLU function, a nonlinear unsaturated function used to alleviate the vanishing-gradient problem while allowing errors to be back-propagated and multiple neurons to be activated; if the input value is negative, all negative values are assigned a small non-zero slope.

9. A robot grabbing system based on three-dimensional data analysis, used for realizing the method of any one of claims 1 to 6, characterized by comprising: a first module for constructing the recognition network model and the coordinate conversion model, the module being configured to construct the recognition network model and the coordinate conversion model and place them into the core processor of the intelligent grabbing robot;

a second module for implementing path planning, the module being configured to formulate an actual working route of the intelligent robot according to working requirements;

a third module for acquiring image data, the third module being configured to complete image data acquisition in an actual operating condition according to the received trigger signal;

a fourth module for extracting image data information, the module being configured to receive the image data acquired by the third module and perform data preprocessing to acquire the three-dimensional data information of the target object;

a fifth module for acquiring the related information of the object, the fifth module being configured to input the three-dimensional data information acquired in the fourth module into the recognition network model for extracting the related information of the object;

a sixth module for generating coordinates in the coordinate system referenced by the intelligent robot, the module being configured to acquire coordinates recognizable by the intelligent robot by using the coordinate conversion model and to generate a grabbing instruction;

a seventh module for performing a grabbing action, the module being arranged to perform the grabbing action in accordance with the generated grabbing instruction.

10. The three-dimensional data analysis-based robot gripping system according to claim 9,

after the first module constructs the recognition network model and the coordinate conversion model, the recognition network model and the coordinate conversion model are placed into a core processor of the intelligent grabbing robot, and performance training is further carried out on the recognition network model; in the actual operation process, the second module firstly makes a walking route of the intelligent robot according to the requirement, and generates a trigger signal of the third module after reaching the range of the target object; the third module receives the trigger signal generated by the second module, invokes the information acquisition equipment to acquire image data of actual working conditions, sends the acquired image data to the fourth module for preprocessing, and then enters the fifth module to extract three-dimensional data information; the sixth module converts the position coordinates by adopting a coordinate conversion model according to the acquired information and generates a corresponding grabbing instruction; and the seventh module finishes grabbing according to the generated grabbing instruction.

Technical Field

The invention relates to a robot grabbing method and system based on three-dimensional data analysis, and belongs to the technical field of robot image data processing.

Background

With the advance of intelligent technology, the intelligentization of modern industry has been driven forward: the intelligent industry gradually takes a leading position in actual industrial production, and intelligent robots are gradually applied in numerous fields of social production. In the intelligent industrial grabbing operation, when the intelligent robot reaches the position of a target object, the information acquisition equipment is triggered to acquire image data of the target object; the position of the target object is then obtained through image data processing, and the relevant information of the target object is returned, thereby assisting the grabbing robot in grabbing the target object.

In the prior art, because the scattered posture and position of the target object are uncertain, objects are often dropped during the grabbing operation, so that the robot must repeat its work, which reduces the working efficiency of the grabbing robot.

Disclosure of Invention

The purpose of the invention is as follows: a robot grabbing method and system based on three-dimensional data analysis are provided to solve the problems in the prior art.

The technical scheme is as follows: in a first aspect, a robot grabbing method based on three-dimensional data analysis is provided, which is characterized by specifically comprising the following steps:

step one, constructing a recognition network model and a coordinate conversion model, placing the recognition network model and the coordinate conversion model into a core processor of the intelligent grabbing robot, and performing performance training on the recognition network model;

step two, reaching the area where the target object is located according to path planning;

step three, triggering the information acquisition equipment to acquire image data of the target object;

step four, carrying out data preprocessing on the acquired image data and acquiring three-dimensional data information of the target object;

step five, inputting the three-dimensional data information acquired in step four into the recognition network model to extract the relevant information of the target object; the relevant information comprises position information and category information;

step six, converting the position information extracted in the step five by using a coordinate conversion model;

and step seven, generating a grabbing instruction according to the converted coordinates, and completing grabbing according to the grabbing instruction.

In some implementations of the first aspect, when data preprocessing is performed on the collected image data, a weighted average method is adopted to gray the color image, and after graying, the grayscale image data is converted into black-and-white binary image data using a value of 0 or 255.

Wherein the graying expression is:

Gray(i, j) = w_R · R(i, j) + w_G · G(i, j) + w_B · B(i, j)

in the formula, Gray(i, j) represents the image data after graying; w_R represents the weight of the R component in the color image; w_G represents the weight of the G component in the color image; w_B represents the weight of the B component in the color image.

In some realizations of the first aspect, in the sixth step, when the position information is converted by using the coordinate conversion model, the information acquisition equipment is either mounted at the end of the intelligent robot mechanical arm or fixed relative to the ground.

Establishing a mechanical arm base coordinate system {B}, a mechanical arm end coordinate system {E} and a coordinate system {C} in which the information acquisition equipment is located; for any point a(x, y, z) in the coordinate system {C} of the information acquisition equipment, the corresponding point b(X, Y, Z) in the mechanical arm end coordinate system satisfies the expression:

[X, Y, Z]^T = R · [x, y, z]^T + t

or, in homogeneous form:

[X, Y, Z, 1]^T = T_ce · [x, y, z, 1]^T

the matrix of the coordinate transformation therefore satisfies:

T_cb = T_eb · T_ce

in the formulas, T_cb denotes the transformation matrix from the coordinate system {C} of the information acquisition equipment to the mechanical arm base coordinate system {B}; T_ce denotes the transformation matrix from {C} to the mechanical arm end coordinate system {E}; T_eb denotes the transformation matrix from the mechanical arm end coordinate system {E} to the base coordinate system {B}; R and t denote the rotation matrix and translation vector contained in T_ce.

When the information acquisition equipment is located at the end of the intelligent robot mechanical arm, its position is fixed relative to the end of the industrial robot, and the transformation matrix T_ce is calibrated according to the coordinate transformation relation. The T_ce denotes the transformation matrix from the coordinate system {C} of the information acquisition equipment to the mechanical arm end coordinate system {E}.

When the information acquisition equipment is installed independently of the industrial robot and fixed relative to the ground, the transformation matrix T_cb is calibrated through the coordinate transformation relation. The T_cb denotes the transformation matrix from the coordinate system {C} of the information acquisition equipment to the mechanical arm base coordinate system {B}.

In some realizations of the first aspect, a flexible attaching mechanical claw is further used in the grabbing process; the flexible attaching mechanical claw is a bionic gripper constructed according to the characteristic that a fin ray bends towards the direction of the applied force; during grabbing, the flexible attaching mechanical claw passively adapts to the shape of the target object according to the reaction force exerted by the target object.

The flexible attaching mechanical claw adopts a stepping motor driven by a differential driver, and the driver adopts a single-ended common-cathode wiring method.

The recognition network model comprises an input layer, a sampling layer, a convolution layer, a pooling layer and a fully connected layer; the pooling layer immediately follows the convolution layer and adopts down-sampling to further compress the feature map and remove redundancy. The data is processed in the pooling layer by taking the maximum value within each region of the feature map. The activation function of the fully connected layer adopts the Leaky ReLU function, a nonlinear unsaturated function used to alleviate the vanishing-gradient problem while allowing errors to be back-propagated and multiple neurons to be activated; if the input value is negative, all negative values are assigned a small non-zero slope.

In a second aspect, a robot grabbing system based on three-dimensional data analysis is provided, and the system specifically includes: a first module for constructing the recognition network model and the coordinate conversion model, configured to construct the recognition network model and the coordinate conversion model and place them into the core processor of the intelligent grabbing robot; a second module for implementing path planning, configured to formulate the actual working route of the intelligent robot according to working requirements; a third module for acquiring image data, configured to complete image data acquisition under actual operating conditions according to the received trigger signal; a fourth module for extracting image data information, configured to receive the image data acquired by the third module and perform data preprocessing to acquire the three-dimensional data information of the target object; a fifth module for acquiring the relevant information of the target object, configured to input the three-dimensional data information acquired by the fourth module into the recognition network model to extract the relevant information of the target object; a sixth module for generating coordinates in the coordinate system referenced by the intelligent robot, configured to acquire coordinates recognizable by the intelligent robot by using the coordinate conversion model and generate a grabbing instruction; a seventh module for performing the grabbing action, configured to perform the grabbing action according to the generated grabbing instruction.

In some implementation manners of the second aspect, after the first module constructs the recognition network model and the coordinate transformation model, the first module is placed into a core processor of the intelligent grabbing robot, and further performs performance training on the recognition network model; in the actual operation process, the second module firstly makes a walking route of the intelligent robot according to the requirement, and generates a trigger signal of the third module after reaching the range of the target object; the third module receives the trigger signal generated by the second module, invokes the information acquisition equipment to acquire image data of actual working conditions, sends the acquired image data to the fourth module for preprocessing, and then enters the fifth module to extract three-dimensional data information; the sixth module converts the position coordinates by adopting a coordinate conversion model according to the acquired information and generates a corresponding grabbing instruction; and the seventh module finishes grabbing according to the generated grabbing instruction.

Has the advantages that: in the grabbing operation of the intelligent grabbing robot, the two-dimensional image recognition technology in the prior art suffers from high viewing-angle sensitivity and poor portability, which further cause uncertainty in the posture and position of the target object and low grabbing efficiency. Against these phenomena, the robot grabbing method based on three-dimensional data analysis is provided: by acquiring the three-dimensional information of the object and having the recognition network model read and analyze it, more accurate posture information of the target object is obtained, so that the positioning accuracy of the target object is improved and the intelligent grabbing robot is guided to efficiently control and complete the grabbing task.

Drawings

FIG. 1 is a flow chart of data processing according to an embodiment of the present invention.

Fig. 2 is a schematic diagram of the circuit connection between the motor and the driver according to the embodiment of the present invention.

Detailed Description

In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.

Example one

In the grabbing operation of the intelligent grabbing robot, the two-dimensional image recognition technology in the prior art has the problems of high viewing-angle sensitivity and poor portability, which further cause uncertainty in the posture and position of the target object and low grabbing efficiency. To solve these problems, a robot grabbing method based on three-dimensional data analysis is provided, which specifically comprises the following steps:

step one, constructing a recognition network model and a coordinate conversion model, placing the recognition network model and the coordinate conversion model into a core processor of the intelligent grabbing robot, and performing performance training on the recognition network model;

step two, reaching the area where the target object is located according to path planning;

step three, triggering the information acquisition equipment to acquire image data of the target object;

step four, carrying out data preprocessing on the acquired image data and acquiring three-dimensional data information of the target object;

step five, inputting the three-dimensional data information acquired in step four into the recognition network model to extract the relevant information of the target object;

step six, converting the position information extracted in the step five by using a coordinate conversion model;

and step seven, generating a grabbing instruction according to the converted coordinates, and completing grabbing according to the grabbing instruction.

By acquiring the three-dimensional information of the object and having the recognition network model read and analyze that information, more accurate posture information of the target object is obtained, so that the positioning accuracy of the target object is improved and the intelligent grabbing robot is guided to efficiently control and complete the grabbing task.
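As an illustration only, the seven steps above can be sketched as a single control flow; every function below is a hypothetical placeholder standing in for the corresponding subsystem (path planner, camera, preprocessor, recognition network, coordinate converter), not an API defined in this specification:

```python
def plan_path(target_area):
    # step 2: path planning toward the target area (placeholder)
    return f"route-to-{target_area}"

def acquire_image(route):
    # step 3: trigger the information acquisition equipment (placeholder)
    return {"route": route, "image": "raw-frame"}

def preprocess(frame):
    # step 4: preprocessing and 3-D data extraction (placeholder point cloud)
    return {"points": [(0.25, 0.5, 0.75)]}

def recognize(cloud):
    # step 5: recognition network returns position and category information
    return {"position": cloud["points"][0], "category": "part"}

def to_robot_coords(position):
    # step 6: coordinate conversion (dummy fixed offset for the sketch)
    x, y, z = position
    return (x + 0.5, y, z + 0.25)

def grab(target_area):
    # step 7: generate the grabbing instruction from the converted coordinates
    info = recognize(preprocess(acquire_image(plan_path(target_area))))
    return {"cmd": "grab", "at": to_robot_coords(info["position"])}

instruction = grab("bin-A")
```

In a real system each placeholder would be replaced by the corresponding module of the second aspect; only the data flow between the steps is meant to be illustrative.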

Example two

In a further embodiment based on the first embodiment, when performing data preprocessing on the acquired image data, in order to extract the image features well, this embodiment adopts image graying to reduce the excessive data volume and long processing time caused by color.

Specifically, in order to obtain more complete image information and less image noise, the present embodiment performs gray scale image processing on a color image by using a weighted average method, where the processing expression is as follows:

Gray(i, j) = w_R · R(i, j) + w_G · G(i, j) + w_B · B(i, j)

in the formula, Gray(i, j) represents the image data after graying; w_R represents the weight of the R component in the color image; w_G represents the weight of the G component in the color image; w_B represents the weight of the B component in the color image.

In a further embodiment, in order to further simplify the flow of image data processing, a black-and-white binarized image is generated using a value of 0 or 255 for the grayed image data so that the feature information of the image is related only to the positions of the image pixels, thereby highlighting the contour of the target object.
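A minimal sketch of this preprocessing step, assuming the commonly used luminance weights 0.299/0.587/0.114 and a mid-scale threshold of 128 (neither value is specified in this embodiment):

```python
import numpy as np

def to_gray(rgb, w_r=0.299, w_g=0.587, w_b=0.114):
    """Weighted-average graying: Gray = w_R*R + w_G*G + w_B*B (weights assumed)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return w_r * r + w_g * g + w_b * b

def binarize(gray, threshold=128):
    """Map every pixel to black (0) or white (255) so features depend only on pixel position."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

# tiny 1x2 test image: one dark pixel, one bright pixel
img = np.array([[[10, 10, 10], [200, 220, 240]]], dtype=np.float64)
bw = binarize(to_gray(img))
```

Running it on the tiny image maps the dark pixel to 0 and the bright pixel to 255, which is exactly the contour-highlighting effect the embodiment describes.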

EXAMPLE III

In a further embodiment based on the first embodiment, in order to support the hand-eye coordination of the intelligent grabbing robot, after the actual position of the target object is obtained, the actual position coordinates obtained by the information acquisition equipment need to be converted into coordinates in the coordinate system adopted by the intelligent grabbing robot. The grabbing behavior of the intelligent grabbing robot depends on the converted coordinates, so the accuracy of the coordinate conversion has a large influence on the grabbing behavior. This embodiment therefore uses a coordinate transformation method to improve the accuracy of the conversion result.

Specifically, in industrial implementation, when positioning the target object, the information acquisition equipment is either mounted at the end of the mechanical arm of the intelligent robot or fixed relative to the ground; therefore, a mechanical arm base coordinate system, a mechanical arm end coordinate system and the coordinate system in which the information acquisition equipment is located are first established, and different coordinate conversions are applied according to the position of the equipment coordinate system relative to the industrial robot.

When the information acquisition equipment is located at the end of the intelligent robot mechanical arm, its position is fixed relative to the end of the industrial robot, and the transformation matrix T_ce is calibrated according to the coordinate transformation relation. When the information acquisition equipment is installed independently of the industrial robot and fixed relative to the ground, T_ce varies with the movement of the arm end, is not a fixed value and is difficult to calibrate, so the coordinate transformation relation is used to calibrate T_cb instead. Here T_cb denotes the transformation matrix from the coordinate system {C} of the information acquisition equipment to the mechanical arm base coordinate system {B}; T_ce denotes the transformation matrix from {C} to the mechanical arm end coordinate system {E}; T_eb denotes the transformation matrix from the mechanical arm end coordinate system {E} to the base coordinate system {B}.

In a further embodiment, for any point a(x, y, z) in the coordinate system {C} of the information acquisition equipment, the corresponding point b(X, Y, Z) in the mechanical arm end coordinate system satisfies the expression:

[X, Y, Z]^T = R · [x, y, z]^T + t

or, in homogeneous form:

[X, Y, Z, 1]^T = T_ce · [x, y, z, 1]^T

the matrix of the coordinate transformation therefore satisfies:

T_cb = T_eb · T_ce

in the formulas, T_cb denotes the transformation matrix from the coordinate system {C} of the information acquisition equipment to the mechanical arm base coordinate system {B}; T_ce denotes the transformation matrix from {C} to the mechanical arm end coordinate system {E}; T_eb denotes the transformation matrix from the mechanical arm end coordinate system {E} to the base coordinate system {B}; R and t denote the rotation matrix and translation vector contained in T_ce.
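The composition T_cb = T_eb · T_ce can be checked numerically with homogeneous 4 x 4 matrices; the rotations and translations below are arbitrary values assumed for the sketch, not calibration results:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# assumed example transforms (identity rotations, arbitrary offsets)
T_eb = make_T(np.eye(3), [0.5, 0.0, 0.3])   # end frame {E} -> base frame {B}
T_ce = make_T(np.eye(3), [0.0, 0.0, 0.1])   # camera frame {C} -> end frame {E}

T_cb = T_eb @ T_ce                          # composed camera {C} -> base {B} transform

a = np.array([0.1, 0.2, 0.3, 1.0])          # homogeneous point in the camera frame
b = T_cb @ a                                # same point expressed in the base frame
```

Applying T_cb directly and applying T_ce then T_eb in sequence yield the same base-frame point, which is what the matrix composition expresses.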

Example four

In a further embodiment based on the first embodiment, in the process of realizing the grabbing operation, when the characteristics of the object to be grabbed are uncertain, the surface of the object is often damaged or the object falls off during grabbing due to problems such as insufficient hardness and uneven quality. To address the low grabbing success rate caused by such problems, this embodiment provides a flexible attaching mechanical claw for the grabbing operation; the flexible attaching mechanical claw can passively adapt to the shape characteristics of the target, thereby achieving a firm grab.

Specifically, the bionic flexible attaching mechanical claw is constructed according to the characteristic that the fin bends towards the force applying direction, so that the mechanical claw can passively adapt to the shape of the target object according to the reverse force applied by the target object, and the mechanical claw has stronger universality on the premise of not damaging the target object.

In a further embodiment, the flexible attaching mechanical claw proposed in this embodiment uses a stepping motor driven by a differential driver, and the driver adopts a single-ended common-cathode wiring method. Referring to fig. 2, which shows the circuit connection between the motor and the driver, the driver in this embodiment has a subdivision number of 16, and provides ENA+, ENA-, DIR+, DIR-, CLK+ and CLK- signal interfaces, power and ground signal interfaces, and 4 winding connection lines. The motor interface comprises an A-phase winding and a B-phase winding; ENA+ is connected to the enable signal, CLK+ to the control pulse signal and DIR+ to the direction signal, while ENA-, CLK- and DIR- are connected together to the ground signal, and the winding connection lines are connected respectively to the positive and negative ends of the A-phase and B-phase windings of the motor interface.
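With the subdivision number of 16 given above, the number of CLK pulses needed for a given rotation follows directly; the 1.8-degree full-step angle below is a typical two-phase stepper value assumed for illustration, not a figure stated in this embodiment:

```python
FULL_STEP_DEG = 1.8   # assumed full-step angle of a typical two-phase stepping motor
SUBDIVISION = 16      # driver subdivision number from this embodiment

def pulses_for_angle(angle_deg):
    """CLK pulses the driver needs to rotate the motor shaft by angle_deg."""
    microstep_deg = FULL_STEP_DEG / SUBDIVISION   # 0.1125 degrees per pulse
    return round(angle_deg / microstep_deg)

pulses_per_rev = pulses_for_angle(360)   # 3200 pulses per full revolution
```

Under these assumptions a quarter turn of the claw motor takes 800 pulses, which illustrates how the subdivision number trades pulse rate for positioning resolution.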

EXAMPLE five

In a further embodiment based on the first embodiment, a three-dimensional recognition network model is provided to solve the positioning errors and low grabbing-positioning precision caused by conditions such as inclined shooting angles in two-dimensional recognition. The model classifies image data through an optimized convolutional neural network, and after classification performs recognition analysis in combination with a classifier. By improving the recognition result, the intelligent grabbing robot's cognition of the target object can be improved and its grabbing efficiency raised; the grabbing operation can be abandoned when a non-target object is detected or the target object is defective, reducing invalid operations.

Specifically, the three-dimensional recognition network model comprises an input layer, a sampling layer, convolution layers, pooling layers and a fully connected layer. Each pooling layer directly follows a convolution layer and further compresses the feature map and removes redundancy by down-sampling. Data are processed in the pooling layer by taking the maximum value within each window of the feature map, which reduces computation time and improves robustness to features at different spatial positions. The activation function of the fully connected layer is Leaky ReLU, an improved variant of the rectified linear unit (ReLU). It is a non-saturating function that effectively alleviates the vanishing-gradient problem; being non-linear, it can back-propagate errors and activate multiple neurons. Negative inputs are assigned a small non-zero slope, which avoids the problems caused by the ReLU function setting negative values to 0 during training.
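The two operations described above can be sketched in a few lines; the negative slope of 0.01 and the 2x2 pooling window are illustrative defaults, not values given in the source.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Negative inputs get a small non-zero slope instead of 0, avoiding
    # the "dead neuron" problem of plain ReLU mentioned above.
    return np.where(x > 0, x, alpha * x)

def max_pool2d(fmap, size=2):
    # Down-sample by taking the maximum in each size x size window,
    # compressing the feature map and removing redundancy.
    h, w = fmap.shape
    h2, w2 = h // size, w // size
    windows = fmap[:h2 * size, :w2 * size].reshape(h2, size, w2, size)
    return windows.max(axis=(1, 3))
```

For example, `max_pool2d` applied to a 4x4 feature map returns a 2x2 map holding the maximum of each quadrant window.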

After classification is finished, a classifier combined with a threshold is used for screening to obtain the final result. This combined classification scheme can effectively improve the identification precision by 5-8%.
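The threshold screening step can be sketched as follows; the threshold value of 0.8 is an assumed placeholder, as the source does not specify one.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over class scores.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def classify_with_threshold(logits, threshold=0.8):
    # Keep a prediction only when the classifier is confident enough;
    # otherwise return None so the robot can abandon the grasp rather
    # than perform an invalid operation. threshold=0.8 is an assumption.
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(probs.argmax())
    return best if probs[best] >= threshold else None
```

A confident score vector yields a class index; an ambiguous one yields `None`, which is the "give up the grab" branch described above.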

EXAMPLE six

This embodiment provides a robot grasping system based on three-dimensional data analysis, which is used for implementing the method provided in the foregoing embodiments. The system specifically comprises:

a first module for constructing the recognition network model and the coordinate conversion model, the module being configured to construct the recognition network model and the coordinate conversion model and place them into the core processor of the intelligent grabbing robot;

a second module for implementing path planning, the module being configured to formulate an actual working route of the intelligent robot according to working requirements;

a third module for acquiring image data, the third module being configured to complete image data acquisition in an actual operating condition according to the received trigger signal;

a fourth module for extracting image data information, the module being configured to receive the image data acquired by the third module and perform data preprocessing to obtain the three-dimensional data information of the target object;

a fifth module for acquiring the related information of the object, the fifth module being configured to input the three-dimensional data information acquired in the fourth module into the recognition network model for extracting the related information of the object;

a sixth module for generating coordinates in a reference frame the intelligent robot can use, the module being configured to obtain coordinates recognizable by the intelligent robot by using the coordinate conversion model and to generate a grabbing instruction;

a seventh module for performing a grabbing action, the module being arranged to perform the grabbing action in accordance with the generated grabbing instruction.
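The coordinate conversion performed by the sixth module can be sketched as a rigid transform from the camera frame to the robot frame. The rotation `R` and translation `t` here are placeholders that would come from a prior hand-eye calibration, which the source does not detail.

```python
import numpy as np

def camera_to_robot(p_cam, R, t):
    """Map a 3-D point from the camera frame into the robot frame.

    R (3x3 rotation) and t (3-vector translation) are assumed to come
    from hand-eye calibration; the source does not specify how the
    coordinate conversion model is parameterized.
    """
    return R @ np.asarray(p_cam, dtype=float) + np.asarray(t, dtype=float)
```

With an identity rotation, the conversion reduces to a pure offset between the camera origin and the robot base, which is the simplest possible calibration.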

In a further embodiment, after the first module constructs the recognition network model and the coordinate conversion model and places them into the core processor of the intelligent grabbing robot, performance training is further performed on the recognition network model. In actual operation, the second module first formulates the walking route of the intelligent robot according to the requirement and, once the robot reaches the range of the target object, generates a trigger signal for the third module. On receiving this trigger signal, the third module calls the information acquisition equipment to acquire image data of the actual working condition and sends the acquired image data to the fourth module for preprocessing; the data then enter the fifth module, where the three-dimensional data information is analyzed and the relevant information of the target object is extracted. The sixth module converts the position coordinates with the coordinate conversion model according to the acquired information and generates a corresponding grabbing instruction. The seventh module finishes grabbing according to the generated grabbing instruction.
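The control flow through the seven modules described above can be sketched as follows. All function names are illustrative placeholders for the modules, not APIs from the source; the early return models the "abandon the grasp" branch when recognition fails.

```python
# Minimal control-flow sketch of the module pipeline described above.
# Each parameter stands in for one module of the system (hypothetical names).

def grasp_pipeline(plan_path, acquire_image, preprocess, recognize,
                   convert_coords, execute_grasp):
    plan_path()                      # second module: reach the target area
    image = acquire_image()          # third module: triggered acquisition
    data3d = preprocess(image)       # fourth module: 3-D data extraction
    info = recognize(data3d)         # fifth module: recognition network
    if info is None:                 # non-target or defective target object
        return False                 # abandon the grasp; no invalid operation
    coords = convert_coords(info)    # sixth module: coordinate conversion
    execute_grasp(coords)            # seventh module: perform the grasp
    return True
```

The return value makes the "invalid operation reduced" behavior explicit: a failed recognition short-circuits the pipeline before any grasp instruction is generated.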

As noted above, while the present invention has been shown and described with reference to certain preferred embodiments, it is not to be construed as limited thereto. Various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
