3D vision-based hybrid unstacking and stacking method


Reading note: this technology, 一种基于3D视觉的混合拆码垛方法 (3D vision-based hybrid unstacking and stacking method), was designed and created by 龚隆有, 张敏, 明鹏, 邹泓兵, 谢先武 and 黄永安 on 2021-01-08. Its main content is as follows: the invention provides a 3D vision-based hybrid unstacking and stacking method, which comprises arranging a gravity sensing device on a tray for loading box bodies and obtaining the horizontal center of gravity of the boxes on the tray in real time; arranging a 3D vision device to identify the size and edge position of each box body and guide a manipulator to grab it; arranging a grabbing and weighing device on the manipulator to detect the weight of a box body while it is grabbed; and determining the unstacking and stacking sequence from the horizontal center of gravity of the boxes on the tray, the box sizes and edge positions, and the box weights, then completing hybrid unstacking and stacking with the manipulator. In this way, the gravity sensing device acquires the overall center of gravity of the boxes on the tray in real time so that the unstacking and stacking sequence and the stacking positions can be adjusted, ensuring the safety of the unstacking and stacking process; the 3D vision device identifies the size and edges of each box body, improving the space utilization on the tray and achieving a stable and efficient hybrid unstacking and stacking process.

1. A 3D vision-based hybrid unstacking and stacking method, characterized by comprising the following steps:

a gravity sensing device is arranged on a tray for loading the box body, and the horizontal gravity center position of the box body on the tray is obtained in real time;

a 3D vision device is arranged to identify the size and the edge position of the box body and to guide a manipulator to grab the box body;

the manipulator is provided with a grabbing and weighing device, and the weight of the box body is detected while the box body is grabbed;

and determining the unstacking and stacking sequence according to the horizontal gravity center position of the box body on the tray, the size and the edge position of the box body and the weight of the box body, and completing hybrid unstacking and stacking by utilizing the manipulator.

2. The 3D vision-based hybrid unstacking and stacking method according to claim 1, characterized in that: the method comprises a stacking mode and an unstacking mode; the stacking mode specifically comprises the following steps:

a1, arranging the 3D vision device at the front end of a conveying device for conveying the boxes to be stacked, and detecting the size and the edge position of each box to be stacked;

a2, determining a stacking sequence and stacking positions of the boxes according to the sizes of the boxes measured in the step A1;

a3, according to the stacking sequence determined in step A2 and the edge position of the box body measured in step A1, guiding the manipulator to grab the box body in the stacking sequence;

a4, detecting the weight of the box body in the grabbing process by using the grabbing and weighing device arranged on the manipulator, and detecting the horizontal gravity center position of the box body on the tray by using the gravity sensing device arranged on the tray;

a5, calculating the horizontal gravity center position and the vertical gravity center position of the box bodies on the tray after stacking according to the size, the weight and the stacking position of the box body grabbed by the manipulator, and judging whether the horizontal gravity center position and the vertical gravity center position are within a preset gravity center safety threshold; and if they are within the gravity center safety threshold, guiding the manipulator to place the grabbed box body on the tray at the stacking position.

3. The 3D vision-based hybrid unstacking and stacking method according to claim 1, characterized in that: the unstacking mode specifically comprises the following steps:

b1, detecting the horizontal gravity center position of the box body on the tray by using the gravity sensing device arranged on the tray;

b2, identifying the size and the edge position of each box body on the tray by using the 3D vision device, and guiding a manipulator to perform trial grabbing on the box bodies according to a preset unstacking sequence;

b3, when the manipulator tries to grab the box body, the grabbing and weighing device is used for detecting the weight of the box body, and a revised unstacking sequence is generated according to the change condition of the horizontal gravity center position of the box body on the tray;

and b4, the manipulator grabs the box bodies according to the revised unstacking sequence and moves them off the tray to complete unstacking.

4. The 3D vision-based hybrid unstacking and stacking method according to claim 2, characterized in that: in step a2, the stacking sequence and the stacking position of each box body are determined according to the size of the box body; when a box body with a size smaller than a preset value is detected, a stacking space is reserved for it when the stacking positions are set.

5. The 3D vision-based hybrid unstacking and stacking method according to claim 2, characterized in that: in step a5, if the horizontal and vertical center of gravity positions of the box bodies on the tray are not within the preset center of gravity safety threshold, the stacking position of the box body is regenerated.

6. The 3D vision-based hybrid unstacking and stacking method according to claim 3, characterized in that: in step b3, when the change of the horizontal gravity center position of the box bodies on the tray is within a safety threshold, the revised unstacking sequence is consistent with the preset unstacking sequence; and when the change of the horizontal gravity center position of the box bodies on the tray exceeds the safety threshold, the revised unstacking sequence is generated according to a preset rule.

7. The 3D vision-based hybrid unstacking and stacking method according to claim 1, characterized in that: the 3D vision device comprises a 3D camera, a ranging sensor and an information processing system for processing the information collected by the 3D camera and the ranging sensor.

8. The 3D vision-based hybrid unstacking and stacking method according to claim 7, wherein: the information processing system comprises an image segmentation module and a box body identification module; the image segmentation module is electrically connected with the 3D camera and used for segmenting image data acquired by the 3D camera and identifying the edge contour of the box body; the box body identification module is electrically connected with the image segmentation module and the ranging sensor respectively and used for calculating the size and edge coordinates of the box body.

9. The 3D vision-based hybrid unstacking and stacking method according to any one of claims 1-8, wherein: the gravity sensing device comprises a plurality of gravity sensors which are uniformly distributed on the surface of the tray.

10. The 3D vision-based hybrid unstacking and stacking method according to any one of claims 1-9, wherein: the grabbing and weighing device comprises a weighing platform and a weighing sensor arranged on the surface of the weighing platform.

Technical Field

The invention relates to the technical field of intelligent unstacking and stacking, in particular to a 3D vision-based hybrid unstacking and stacking method.

Background

In recent years, with the development of science and technology and the great progress of automation, manipulators and robots of all kinds have gradually replaced manual labor in production, greatly improving production efficiency while freeing up the workforce, and they are now widely applied in many fields. In the field of unstacking and stacking, traditional manual unstacking and stacking not only involves high labor intensity and low working efficiency, but also places a heavy physical burden on the workers, which in turn affects the quality of the unstacking and stacking. Because of these problems, replacing manual labor with manipulators or industrial robots for automatic unstacking and stacking has gradually become the development trend in this field.

At present, automatic unstacking and stacking technology mainly identifies the position of a box body through vision technology and then controls a manipulator to grab it. In practical application, however, this approach is only suitable for unstacking and stacking boxes of the same shape, size and weight. When boxes of different sizes and weights are mixed together, problems such as uneven stacking, low space utilization and easy collapse of the stack arise, which affect the normal operation of the unstacking and stacking process.

The patent with publication number CN108820901A provides a robot 3D vision intelligent recognition unstacking/stacking and intelligent sorting system. The system uses 3D vision technology to acquire the shape, size and coordinates of the articles to be unstacked, plans the unstacking operation, converts the plan into a robot trajectory, and finally has the robot carry out the unstacking operation along that trajectory, thereby raising the degree of intelligence and adapting to mixed unstacking of articles of different specifications. However, this method can only plan according to the shape and size of the articles to be unstacked; for boxes of different weights, the problems of an unstable center of gravity and easy collapse of the boxes still occur during unstacking and stacking, which seriously affects the smooth operation of the process.

In view of the above, there is still a need to design an improved 3D vision-based hybrid unstacking and stacking method to solve the above problems.

Disclosure of Invention

In view of the above drawbacks of the prior art, an object of the present invention is to provide a 3D vision-based hybrid unstacking and stacking method. A gravity sensing device is arranged on the tray that carries the stacked box bodies, so that the overall center of gravity of the boxes on the tray is obtained in real time; this makes it convenient to adjust the unstacking and stacking sequence and the stacking positions, and ensures the stability of the boxes during the unstacking and stacking process. A 3D vision device identifies the size and the edge position of each box body, and the stacking sequence is confirmed before the manipulator grabs, which improves the space utilization on the tray, so that a stable and efficient hybrid unstacking and stacking process is achieved and the requirements of practical application are met.

In order to achieve the above object, the present invention provides a 3D vision-based hybrid unstacking and stacking method, comprising the steps of:

a gravity sensing device is arranged on a tray for loading the box body, and the horizontal gravity center position of the box body on the tray is obtained in real time;

a 3D vision device is arranged to identify the size and the edge position of the box body and to guide a manipulator to grab the box body;

the manipulator is provided with a grabbing and weighing device, and the weight of the box body is detected while the box body is grabbed;

and determining the unstacking and stacking sequence according to the horizontal gravity center position of the box body on the tray, the size and the edge position of the box body and the weight of the box body, and completing hybrid unstacking and stacking by utilizing the manipulator.

As a further improvement of the invention, the 3D vision-based hybrid unstacking and stacking method comprises a stacking mode and an unstacking mode; the stacking mode specifically comprises the following steps:

a1, arranging the 3D vision device at the front end of a conveying device for conveying the boxes to be stacked, and detecting the size and the edge position of each box to be stacked;

a2, determining a stacking sequence and stacking positions of the boxes according to the sizes of the boxes measured in the step A1;

a3, according to the stacking sequence determined in step A2 and the edge position of the box body measured in step A1, guiding the manipulator to grab the box body in the stacking sequence;

a4, detecting the weight of the box body in the grabbing process by using the grabbing and weighing device arranged on the manipulator, and detecting the horizontal gravity center position of the box body on the tray by using the gravity sensing device arranged on the tray;

a5, calculating the horizontal gravity center position and the vertical gravity center position of the box bodies on the tray after stacking according to the size, the weight and the stacking position of the box body grabbed by the manipulator, and judging whether the horizontal gravity center position and the vertical gravity center position are within a preset gravity center safety threshold; and if they are within the gravity center safety threshold, guiding the manipulator to place the grabbed box body on the tray at the stacking position.

As a further improvement of the present invention, the unstacking mode specifically comprises the following steps:

b1, detecting the horizontal gravity center position of the box body on the tray by using the gravity sensing device arranged on the tray;

b2, identifying the size and the edge position of each box body on the tray by using the 3D vision device, and guiding a manipulator to perform trial grabbing on the box bodies according to a preset unstacking sequence;

b3, when the manipulator tries to grab the box body, the grabbing and weighing device is used for detecting the weight of the box body, and a revised unstacking sequence is generated according to the change condition of the horizontal gravity center position of the box body on the tray;

and B4, the manipulator grabs the box bodies according to the revised unstacking sequence and moves them off the tray to complete unstacking.

As a further improvement of the present invention, in step a2, the stacking sequence and the stacking position of each box body are determined according to the size of the box body; when a box body with a size smaller than a preset value is detected, a stacking space is reserved for it when the stacking positions are set.

As a further improvement of the present invention, in step a5, if the horizontal center of gravity position and the vertical center of gravity position of the box on the tray are not within the preset center of gravity safety threshold, the stacking position of the box is regenerated.

As a further improvement of the present invention, in step B3, when the change of the horizontal gravity center position of the box bodies on the tray is within a safety threshold, the revised unstacking sequence is consistent with the preset unstacking sequence; and when the change of the horizontal gravity center position of the box bodies on the tray exceeds the safety threshold, the revised unstacking sequence is generated according to a preset rule.

As a further improvement of the invention, the 3D vision device comprises a 3D camera, a ranging sensor and an information processing system for processing information acquired by the 3D camera and the ranging sensor.

As a further improvement of the invention, the information processing system comprises an image segmentation module and a box body identification module; the image segmentation module is electrically connected with the 3D camera and used for segmenting image data acquired by the 3D camera and identifying the edge contour of the box body; the box body identification module is electrically connected with the image segmentation module and the distance measurement sensor respectively and used for calculating the size and edge coordinates of the box body.

As a further improvement of the invention, the gravity sensing device comprises a plurality of gravity sensors uniformly distributed on the surface of the tray.

As a further improvement of the invention, the grabbing and weighing device comprises a weighing platform and a weighing sensor arranged on the surface of the weighing platform.

The invention has the beneficial effects that:

(1) According to the invention, a gravity sensing device is arranged on the tray that carries the stacked box bodies, so that the overall center of gravity of the boxes on the tray can be obtained in real time, the unstacking and stacking sequence and the stacking positions can be conveniently adjusted, and the stability of the boxes during unstacking and stacking is ensured; the collapse that an unstable center of gravity would otherwise cause is avoided, and the safety of the process is improved. Meanwhile, a 3D vision device identifies the size and the edge position of each box body and the stacking sequence is confirmed before the manipulator grabs, which effectively improves the space utilization on the tray, so that a stable and efficient hybrid unstacking and stacking process is realized and the requirements of practical application are met.

(2) In the stacking process, the gravity sensing device, the 3D vision device and the grabbing and weighing device accurately provide the horizontal center of gravity of the boxes on the tray and the size and weight of each box body, from which the vertical center of gravity of the boxes on the tray is calculated, so that the safety of the stacking process is comprehensively and effectively guaranteed and collapse accidents are avoided. Meanwhile, the stacking sequence and the stacking position of each box body are determined according to the measured box sizes, and stacking space is reserved for small boxes, which effectively improves the space utilization on the tray.

(3) In the unstacking process, the manipulator performs a trial grab on each box body, and whether the grabbing sequence is reasonable is checked from the weight measured during the trial grab and the change in the horizontal center of gravity of the boxes on the tray, which effectively guarantees the safety and stability of the unstacking process, improves the unstacking efficiency, and has high practical application value.

Drawings

Fig. 1 is a schematic flow chart of a 3D vision-based hybrid unstacking and stacking method according to an embodiment of the present invention.

Fig. 2 is a schematic diagram of a palletizing process provided in an embodiment of the present invention.

Fig. 3 is a schematic diagram of post-palletizing box warehousing statistics provided in an embodiment of the present invention.

Fig. 4 is a schematic illustration of the unstacking process provided in an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.

It should be noted that, in order to avoid obscuring the present invention with unnecessary details, only the structures and/or processing steps closely related to the aspects of the present invention are shown in the drawings, and other details not closely related to the present invention are omitted.

In addition, it is also to be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.

The invention provides a 3D vision-based hybrid unstacking and stacking method, which comprises the following steps:

a gravity sensing device is arranged on a tray for loading the box body, and the horizontal gravity center position of the box body on the tray is obtained in real time;

a 3D vision device is arranged to identify the size and the edge position of the box body and to guide a manipulator to grab the box body;

the manipulator is provided with a grabbing and weighing device, and the weight of the box body is detected while the box body is grabbed;

and determining the unstacking and stacking sequence according to the horizontal gravity center position of the box body on the tray, the size and the edge position of the box body and the weight of the box body, and completing hybrid unstacking and stacking by utilizing the manipulator.

Specifically, in an embodiment of the present invention, a schematic flow chart of the 3D vision-based hybrid unstacking and stacking method is shown in Fig. 1, and the method includes the following steps:

After the conveying device for conveying the boxes to be palletized starts to operate, the trolley carrying the tray for loading the boxes moves to the palletizing position. The 3D vision device arranged at the front end of the conveying device photographs each box body and obtains its size and edge coordinates from the collected image and its position information; the manipulator then carries out stacking (as shown in Fig. 2), and the stacking process ends once the stack is full.

The trolley carrying the tray and the stacked boxes is then transported to the warehouse entrance, where the 3D vision device photographs the boxes again; the size and edge coordinates of each box are obtained from the collected images and their position information and uploaded as warehouse entry information, completing the warehousing of the trolley (as shown in Fig. 3).

After warehousing, the trolley is moved to the unstacking position and the manipulator carries out unstacking (as shown in Fig. 4); the unstacking process is completed once the stack is empty, which concludes the hybrid unstacking and stacking process.

The stacking process specifically comprises the following steps:

a1, arranging a 3D vision device at the front end of a conveying device for conveying the boxes to be stacked, and detecting the size and the edge position of each box to be stacked.

The 3D vision device comprises a 3D camera, a ranging sensor and an information processing system for processing information collected by the 3D camera and the ranging sensor.

The information processing system comprises an image segmentation module and a box body identification module; the image segmentation module is electrically connected with the 3D camera and used for segmenting image data acquired by the 3D camera and identifying the edge contour of the box body; the box body identification module is electrically connected with the image segmentation module and the distance measurement sensor respectively and used for calculating the size and edge coordinates of the box body.

Specifically, when the 3D camera acquires image data of the box body, the image segmentation module performs image segmentation after preprocessing the acquired image data, and the specific steps are as follows:

a11, firstly, carrying out binarization processing on a box body image acquired by a 3D camera by adopting a binarization processing algorithm, and increasing the contrast of the image so as to accurately segment the box body and the background;

a12, removing noise in the box body image by adopting a median filtering algorithm, and improving the edge detection precision;

and A13, extracting edge pixels of the box body by adopting an edge detection algorithm and segmenting the edge contour of the box body, as sketched below.
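The patent does not name the specific algorithms or libraries used in steps A11-A13. As a hedged illustration only, the following sketch implements the same three-stage pipeline with OpenCV (version 4 API assumed), substituting Otsu thresholding for the unspecified binarization algorithm, a 5×5 median filter for the noise removal, and the Canny detector for the edge detection:

```python
import cv2
import numpy as np

def extract_box_contour(gray_image: np.ndarray) -> np.ndarray:
    """Return the outer edge contour of the box from a grayscale image (steps A11-A13)."""
    # A11: binarization to increase contrast between the box and the background
    #      (Otsu's method is an assumption; the patent only says "binarization algorithm")
    _, binary = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # A12: median filtering to remove noise and improve edge detection accuracy
    denoised = cv2.medianBlur(binary, 5)
    # A13: edge detection and extraction of the box edge contour
    edges = cv2.Canny(denoised, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # the largest closed outline is assumed to be the box edge contour
    return max(contours, key=cv2.contourArea)
```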

After the identification of the edge profile of the box body is finished, the box body identification module further calculates the size and edge coordinates of the box body based on the obtained edge profile of the box body and information collected by the distance measurement sensor, and the method comprises the following specific steps:

a14, establishing a first coordinate system in the collected image according to the edge contour of the box body, and obtaining the size (a) of the box body in the first coordinate system1,b1,c1) And coordinates P of the edge of the boxi(xi1,yi1,zi1) (ii) a Wherein, a1,b1,c1Respectively representing the length, width and height of the box body in a first coordinate system, Pi(xi1,yi1,zi1) Representing the three-dimensional coordinates of a certain point i on the edge of the box body in a first coordinate system;

a15, establishing a second coordinate system based on the position of the manipulator, and converting the size of the box body and the coordinates of the edge of the box body in the first coordinate system into the second coordinate system according to the actual distance acquired by the distance measuring sensor, namely obtaining the size of the box body (a)2,b2,c2) And coordinates P of the edge of the boxi(xi2,yi2,zi2)。

In the above manner, the 3D vision device can accurately identify the size and the edge position of the box body and convert them into the coordinate system of the manipulator, thereby guiding the manipulator to grab the box body accurately. After the box size and edge position information is obtained, the following steps are carried out:

and A2, determining the stacking sequence and the stacking position of each box body according to the size of each box body measured in the step A1.

The stacking sequence is determined according to the sizes of the box bodies, on the basis of the order in which the conveying device outputs them. In one embodiment of the invention, after the box sizes have been identified, boxes larger than a preset range are marked as large boxes and boxes smaller than the preset range are marked as small boxes; the large boxes are moved forward in the conveying order, while the small boxes are pushed to the back and stacked in order from large to small.

Meanwhile, while the large boxes are being stacked, whether surplus space remains after the first layer of large boxes has been laid is calculated from the area of the tray and the sizes of the boxes. If the surplus space is large enough for a small box, that space is reserved for a small box and stacking continues on the second layer; when the turn of the small boxes comes, they are placed in the reserved space, which improves the utilization of the space on the tray.
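A simplified sketch of the ordering rule described above: large boxes keep their conveying order and are stacked first, small boxes are pushed to the back and sorted from large to small. The `Box` structure and the footprint-area threshold separating "large" from "small" are illustrative assumptions, not values from the patent:

```python
from dataclasses import dataclass

@dataclass
class Box:
    box_id: int
    length: float        # m
    width: float         # m
    height: float        # m
    arrival_index: int   # position in the sequence output by the conveying device

AREA_THRESHOLD = 0.12    # m^2, assumed boundary between "large" and "small" boxes

def stacking_sequence(boxes: list[Box]) -> list[Box]:
    """Step A2: large boxes first in conveying order, then small boxes from large to small."""
    large = [b for b in boxes if b.length * b.width >= AREA_THRESHOLD]
    small = [b for b in boxes if b.length * b.width < AREA_THRESHOLD]
    large.sort(key=lambda b: b.arrival_index)                   # keep the conveying order
    small.sort(key=lambda b: b.length * b.width, reverse=True)  # largest small boxes first
    return large + small
```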

A3, according to the stacking sequence determined in step A2 and the edge positions of the box bodies measured in step A1, the manipulator is guided to grab the box bodies in the stacking sequence.

A4, detecting the weight of the box body in the grabbing process by using the grabbing and weighing device arranged on the manipulator, and detecting the horizontal gravity center position of the box body on the tray by using the gravity sensing device arranged on the tray.

The gravity sensing device comprises a plurality of gravity sensors which are uniformly distributed on the surface of the tray; the grabbing and weighing device comprises a weighing platform and a weighing sensor arranged on the surface of the weighing platform.

In one embodiment of the present invention, the gravity sensing device comprises three gravity sensors arranged at the bottom of the tray. The coordinates of the three gravity sensors on the tray are (a1, b1), (a2, b2) and (a3, b3), and the forces they measure are denoted G1, G2 and G3 respectively; the weight of the tray itself is denoted g, and the coordinates of the horizontal center of gravity of the tray itself are (a, b).

Based on the force and moment balance of the tray in the equilibrium state, the following equations hold:

∑Fz = 0:  G + g - G1 - G2 - G3 = 0

∑My = 0:  G·X + g·a - G1·a1 - G2·a2 - G3·a3 = 0

∑Mx = 0:  G·Y + g·b - G1·b1 - G2·b2 - G3·b3 = 0

Solving the above equations gives:

G = G1 + G2 + G3 - g

X = (G1·a1 + G2·a2 + G3·a3 - g·a) / G

Y = (G1·b1 + G2·b2 + G3·b3 - g·b) / G

wherein G is the overall weight of the box bodies on the tray, and X and Y are respectively the abscissa and the ordinate of the overall horizontal center of gravity of the box bodies on the tray.
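The solution above translates directly into code. In the following sketch the sensor positions, readings and tray parameters are illustrative values only, not calibration data from the patent:

```python
def horizontal_center_of_gravity(sensors, tray_weight, tray_cog):
    """Solve the force/moment balance for the overall box weight G and the
    horizontal center of gravity (X, Y) of the box bodies on the tray.

    sensors:     list of ((a_i, b_i), G_i) - sensor coordinates on the tray and its reading
    tray_weight: g, weight of the empty tray
    tray_cog:    (a, b), horizontal center of gravity of the empty tray
    """
    g = tray_weight
    a, b = tray_cog
    G = sum(Gi for _, Gi in sensors) - g                        # G = G1 + G2 + G3 - g
    X = (sum(Gi * ai for (ai, _), Gi in sensors) - g * a) / G
    Y = (sum(Gi * bi for (_, bi), Gi in sensors) - g * b) / G
    return G, (X, Y)

# illustrative usage with assumed sensor positions (m) and readings (N)
sensors = [((0.1, 0.1), 400.0), ((0.9, 0.1), 380.0), ((0.5, 0.9), 420.0)]
print(horizontal_center_of_gravity(sensors, tray_weight=150.0, tray_cog=(0.5, 0.37)))
```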

A5, after the current horizontal center of gravity of the box bodies on the tray has been detected, a three-dimensional model is built before the actual stacking from the size and weight of the box body currently held by the manipulator and the stacking position determined in step A2; the model is then used to calculate the horizontal and vertical center of gravity positions of the box bodies on the tray after the grabbed box body is actually stacked, and to judge whether both positions are within the preset center of gravity safety threshold.

If the calculated horizontal and vertical center of gravity positions are both within the center of gravity safety threshold, the current operation is feasible, and the manipulator is therefore guided to place the grabbed box body on the tray at the planned stacking position.

If either the calculated horizontal or the calculated vertical center of gravity position exceeds the center of gravity safety threshold, the current operation is risky, so the stacking position of the box body is regenerated; whether the horizontal and vertical center of gravity positions corresponding to the new stacking position are within the preset safety threshold is then calculated again, and this step is repeated until a newly generated stacking position satisfies the condition.

In one embodiment of the invention, the regenerated stacking position is adjacent to the original position, and its direction is determined by how the horizontal and vertical centers of gravity deviate. For example, when the calculated horizontal center of gravity position falls to the left of the center of gravity safety threshold, the regenerated stacking position is placed adjacent to and to the right of the original position.
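A hedged sketch of the check-and-retry logic in step A5 and of the adjacent-position rule described above. The shape of the safety threshold (a horizontal radius around the tray center plus a height limit) and the way candidate positions are generated are assumptions; the patent only states that the thresholds are preset:

```python
def combined_cog(stack_weight, stack_cog, box_weight, box_cog):
    """Weighted average of the current stack CoG and the candidate box CoG, each (x, y, z)."""
    total = stack_weight + box_weight
    return tuple((stack_weight * s + box_weight * c) / total
                 for s, c in zip(stack_cog, box_cog))

def within_safety_threshold(cog, tray_center, horizontal_radius, max_height):
    """Assumed threshold: horizontal CoG within a radius of the tray center, vertical CoG below a limit."""
    dx, dy = cog[0] - tray_center[0], cog[1] - tray_center[1]
    return (dx * dx + dy * dy) ** 0.5 <= horizontal_radius and cog[2] <= max_height

def choose_stacking_position(candidates, stack_weight, stack_cog, box_weight,
                             tray_center, horizontal_radius, max_height):
    """Step A5: try the planned position first, then adjacent candidates, until the
    predicted CoG after placement falls inside the safety threshold."""
    for pos in candidates:          # each candidate is taken as the CoG of the placed box
        cog = combined_cog(stack_weight, stack_cog, box_weight, pos)
        if within_safety_threshold(cog, tray_center, horizontal_radius, max_height):
            return pos, cog
    return None, None               # no feasible position; the box cannot be placed safely
```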

In this way, the invention can accurately grab and stack the box bodies while ensuring the safety of the stacking process and improving the space utilization on the tray.

After the stacking is finished, the unstacking process specifically comprises the following steps:

and B1, detecting the horizontal gravity center position of the box body on the tray by using the gravity sensing device arranged on the tray.

The method for detecting the horizontal center of gravity is the same as that in step A4, and will not be described herein again.

And B2, identifying the size and the edge position of each box body on the tray by using the 3D vision device, and guiding a manipulator to try to grab the box bodies according to a preset unstacking sequence.

The way in which the 3D vision device identifies the size and the edge position of the box bodies is the same as that in step A1, and is not described herein again.

And B3, when the manipulator tries to grab the box body, detecting the weight of the box body by using the grabbing weighing device, and generating a revised unstacking sequence according to the change condition of the horizontal gravity center position of the box body on the tray.

When the change in the horizontal center of gravity of the boxes on the tray is within the safety threshold, the current unstacking sequence is feasible, and the revised unstacking sequence remains consistent with the preset unstacking sequence.

When the change in the horizontal center of gravity of the boxes on the tray exceeds the safety threshold, the current unstacking sequence is risky, and a revised unstacking sequence different from the preset one is regenerated.

In one embodiment of the invention, the preset unstacking sequence is formulated from top to bottom, from left to right and from front to back, namely, the first row of boxes at the top layer are unstacked from left to right, then the second row of boxes are unstacked from left to right, and so on, and after the top layer is unstacked, the next layer is unstacked in the same way.

When the unstacking sequence needs to be revised, it is adjusted according to how the horizontal center of gravity of the boxes on the tray changes. For example, if a trial grab of box body a shows that the horizontal center of gravity shifts to the right, box body a is put down and box body b, which is bilaterally symmetrical to box body a and located on its right side, is grabbed instead; unstacking then continues to the right from box body b, and once the rightmost box has been removed, unstacking resumes from box body a toward the right until the current row is finished, after which the next row is unstacked.
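A simplified sketch of the order revision in steps B2-B3 and the example above, with the boxes indexed on a regular (layer, row, column) grid. The grid indexing and the rule of moving the mirror-symmetric box ahead of the risky one are assumptions that approximate the behaviour described in the text:

```python
def preset_unstacking_order(layers, rows, cols):
    """Preset order: top layer first, each row from left to right, rows from front to back."""
    return [(layer, row, col)
            for layer in range(layers - 1, -1, -1)
            for row in range(rows)
            for col in range(cols)]

def revise_unstacking_order(order, risky_index, cols):
    """B3: if trial-grabbing order[risky_index] would shift the tray CoG past the safety
    threshold, grab its bilaterally symmetric box (same layer and row, mirrored column)
    first and keep the rest of the sequence unchanged (assumed revision rule)."""
    layer, row, col = order[risky_index]
    mirror = (layer, row, cols - 1 - col)
    if mirror == (layer, row, col) or mirror not in order:
        return list(order)                                    # nothing symmetric left to swap in
    revised = [p for p in order if p != mirror]               # remove the mirror box from its slot
    revised.insert(revised.index((layer, row, col)), mirror)  # grab it before the risky box
    return revised
```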

B4, the manipulator continues to grab the box bodies according to the revised unstacking sequence and move them off the tray until the stack is empty, completing unstacking.

In this way, the invention checks whether the grabbing sequence is reasonable from the weight of the box body measured during the trial grab and the change in the horizontal center of gravity of the boxes on the tray, which effectively ensures the safety and stability of the unstacking process and improves the unstacking efficiency.

In conclusion, the invention provides a 3D vision-based hybrid unstacking and stacking method, which comprises arranging a gravity sensing device on a tray for loading box bodies and obtaining the horizontal center of gravity of the boxes on the tray in real time; arranging a 3D vision device to identify the size and edge position of each box body and guide a manipulator to grab it; arranging a grabbing and weighing device on the manipulator to detect the weight of a box body while it is grabbed; and determining the unstacking and stacking sequence from the horizontal center of gravity of the boxes on the tray, the box sizes and edge positions, and the box weights, then completing hybrid unstacking and stacking with the manipulator. In this way, the gravity sensing device acquires the overall center of gravity of the boxes on the tray in real time and the unstacking and stacking sequence and stacking positions are adjusted accordingly, ensuring the safety of the process; the 3D vision device identifies the size and edges of each box body, improving the space utilization on the tray and achieving a stable and efficient hybrid unstacking and stacking process.

Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the spirit and scope of the present invention.
