Automatic classification garbage can based on visual identification and classification method

Document No.: 1401632 | Publication date: 2020-03-06

Note: This invention, "Automatic classification garbage can based on visual identification and classification method", was designed and created by Wang Tuo, Wu Pengbo, Wang Guixuan and Liu Zhengbo on 2019-11-28. Its main content is as follows: the invention discloses an automatic classification garbage can based on visual identification, and a classification method. The garbage can comprises a garbage throwing port, a first photoelectric switch sensor, an identification and classification tray, a second photoelectric switch sensor, sub-garbage cans, an image identification assembly, an STM32 controller, a two-way stepping motor driver and a garbage can shell. The side wall of the garbage can shell is provided with a garbage throwing port; the first photoelectric switch sensor is placed at the garbage throwing port; several sub-garbage cans are placed inside the garbage can shell, and a second photoelectric switch sensor is installed at the opening of each sub-garbage can; the identification and classification tray is placed inside the garbage can shell, above the sub-garbage cans. The identification and classification tray comprises a garbage tray, a V-shaped baffle stepping motor, a camera, a V-shaped baffle, a bracket, a rotary baffle stepping motor and a rotary baffle. The garbage can collects garbage images through the camera and uses the TensorFlow deep learning framework; transfer training of the model improves the accuracy of garbage recognition.

1. An automatic classification garbage can based on visual identification, characterized by comprising a garbage throwing port, a first photoelectric switch sensor, an identification and classification tray, a second photoelectric switch sensor, sub-garbage cans, an image identification assembly, an STM32 controller, a two-way stepping motor driver and a garbage can shell;

the side wall of the garbage can shell is provided with a garbage throwing opening; the first photoelectric switch sensor is arranged at the garbage throwing port and used for detecting whether garbage is thrown into the garbage can or not; a plurality of sub-garbage cans are placed in the garbage can shell, a second photoelectric switch sensor is mounted at the opening of each sub-garbage can and used for detecting the capacity conditions of the sub-garbage cans; the recognition and classification tray is placed in the garbage can shell and is positioned above the sub garbage cans;

the identification and classification tray comprises a garbage tray, a V-shaped baffle stepping motor, a camera, a V-shaped baffle, a bracket, a rotary baffle stepping motor and a rotary baffle; the garbage tray is fixed on the inner wall of the garbage can shell; the camera is arranged on the garbage tray through a bracket, and the shooting visual angle range of the camera covers a garbage recognition area in the garbage tray and is used for collecting garbage images; the V-shaped baffle is placed in the garbage tray, the rotating shaft at the center of the V-shaped baffle is rotatably arranged at the center of the garbage tray, and the edges of two ends of the V-shaped baffle are in contact with the inner wall of the garbage tray; the output end of the V-shaped baffle stepping motor is connected with a rotating shaft of the V-shaped baffle to drive the V-shaped baffle to rotate by taking the center of the garbage tray as a shaft, so that garbage is pushed to the upper part of the corresponding garbage throwing port; the rotating baffle is provided with an opening, and a rotating shaft at the center of the rotating baffle is rotatably arranged at the center of the garbage tray; the output end of the stepping motor of the rotary baffle is connected with a rotating shaft of the rotary baffle, the rotary baffle is driven to rotate by taking the center of the garbage tray as a shaft, and the corresponding garbage throwing port is opened to enable the garbage to fall into the corresponding sub-garbage can;

the first photoelectric switch sensor, the second photoelectric switch sensor and the two-way stepping motor driver are all connected with the STM32 controller; the STM32 controller is connected with the image recognition component; the V-shaped baffle plate stepping motor and the rotary baffle plate stepping motor are both connected with a double-path stepping motor driver; the camera is connected with the image recognition component.

2. The automatic classification garbage can based on visual identification as claimed in claim 1, wherein the identification and classification tray further comprises an annular light supplement lamp and a light sensor; the annular light supplement lamp is mounted on the garbage tray through a bracket and provides suitable illumination for the camera to collect garbage images; the light sensor is located within the illumination range of the annular light supplement lamp, detects the illumination intensity and sends the illumination intensity information to the STM32 controller, and the STM32 controller controls the switching and intensity of the annular light supplement lamp according to that information, providing suitable illumination for the camera to collect garbage images; the annular light supplement lamp and the light sensor are both connected with the STM32 controller.

3. The automatic sorting garbage can based on visual recognition of claim 1, wherein the garbage recognition area of the recognition sorting tray is divided into a wet garbage throwing area, a recyclable garbage throwing area, a dry garbage throwing area and a harmful garbage throwing area; the sub-garbage cans are divided into wet garbage cans, recyclable garbage cans, dry garbage cans and harmful garbage cans.

4. A method of sorting refuse according to any of claims 1-3, characterised in that it comprises the following steps:

step one, training an image recognition component;

(1) making an image recognition training set:

① collecting training pictures: collect pictures of various garbage objects to obtain the training pictures;

② screening training pictures: manually screen the training pictures to obtain a training picture set in which each picture contains a clear image of an object, shows the object's typical characteristics, and has a diversified background;

③ labeling training pictures: label the pictures in the training picture set, marking the garbage in each picture, and output the labels as XML label files;

④ generating a data set in TFRecord format: use a program of the TensorFlow object detection library to convert the XML label files into a unified TFRecord-format file, obtaining the data set in TFRecord format;

(2) training is started, and the training process is as follows:

① downloading the pre-trained model into the object_detection folder of the TensorFlow object detection library;

② modifying the object category file: in the corresponding label map file in the object_detection/data folder, add or delete object categories according to the actual garbage object categories;

③ modifying the model configuration file in object_detection/samples/configs: change the number of object categories according to the garbage categories actually being trained, specify the file paths of the TFRecord-format data sets used for training and validation, and specify the storage path of the label map file;

④ adjusting the batch size used for each training pass of the pre-trained model according to the CPU and memory configuration of the computer, then starting training; the pre-trained model is trained into a trained model for the Vision Bonnet neural network operation card of the image recognition component;

⑤ converting the trained model into a PB model that can run independently, using the model export program of the TensorFlow object detection library;

⑥ testing the training result: test the PB model with a test program of the TensorFlow object detection library; increase the number of labeled pictures for each object and adjust the parameter values in the model configuration file until the recognition rate of the PB model on the pictures rises to a preset value;

(3) deploying the trained model, specifically comprising the following steps:

① configuring the PB model: copy the PB model, the label map file and the model configuration file to the data directory of TensorRT;

② adding support for the newly added PB model: under the utils directory of TensorRT, change the model path and its model-configuration-file path to those of the newly added PB model and its configuration file, so that the newly added PB model is supported;

③ opening the camera_tf_trt.py file in TensorRT, changing the default model name to the name of the newly added PB model, and changing the label map file path to the path of the newly added label map file;

④ configuring a serial port for recognition-result output: modify the result-output part of the visualization.py file under the utils directory of TensorRT to add serial data output, so that the recognition result is output over the serial port;

step two, after the training is finished, garbage classification is started:

① throwing in garbage: a user throws garbage in through the garbage throwing port; the garbage triggers the first photoelectric switch sensor and falls onto the identification and classification tray; the trigger signal wakes the STM32 controller from its dormant state, and the STM32 controller wakes the image recognition component;

② garbage image acquisition: the light sensor detects the illumination intensity and sends the information to the STM32 controller; the STM32 controller switches the annular light supplement lamp on or off and adjusts its intensity, providing suitable illumination for the camera to acquire garbage images;

③ garbage recognition: the camera sends the collected garbage image to the Vision Bonnet neural network operation card of the image recognition component for feature recognition, and the image recognition component transmits the recognition result to the STM32 controller;

④ classifying and dropping garbage: the STM32 controller classifies the garbage according to the recognition result and drives the V-shaped baffle to push the garbage to the corresponding garbage throwing area; meanwhile, the STM32 controller detects the capacity of the sub-garbage cans through the second photoelectric switch sensors; if the corresponding sub-garbage can is full, it is cleaned manually; if it is not full, the opening of the rotary baffle rotates to the corresponding garbage throwing port, the port opens, and the garbage falls into the sub-garbage can; the V-shaped baffle and the rotary baffle then return to their initial positions, completing the garbage classification process.

5. The method of claim 4, wherein in step one the pictures of the various garbage objects are shot at different times, obtained from the Internet, and screened from the MS COCO data set.

6. The garbage classification method according to claim 5, wherein in step one the specific method for screening the pictures of the various garbage objects from the MS COCO data set is as follows: screen out a corresponding object classification list from the MS COCO data set, then extract training pictures from the train and val folders of the MS COCO data set according to the classification list.
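The screening step of claim 6 can be sketched in Python. This is a minimal illustration, not code from the patent: the annotation layout mirrors the MS COCO JSON structure (`categories`, `annotations`, `images`), and the category names used in the example are assumptions.

```python
def filter_coco(coco, wanted_names):
    """Return file names of images containing at least one wanted category.

    `coco` follows the MS COCO annotation layout: a `categories` list
    (id/name), an `annotations` list (image_id/category_id), and an
    `images` list (id/file_name).
    """
    # Map the chosen classification list to COCO category ids.
    wanted_ids = {c["id"] for c in coco["categories"] if c["name"] in wanted_names}
    # Keep only images annotated with one of those categories.
    image_ids = {a["image_id"] for a in coco["annotations"]
                 if a["category_id"] in wanted_ids}
    return sorted(img["file_name"] for img in coco["images"]
                  if img["id"] in image_ids)

if __name__ == "__main__":
    sample = {
        "categories": [{"id": 1, "name": "bottle"}, {"id": 2, "name": "person"}],
        "annotations": [{"image_id": 10, "category_id": 1},
                        {"image_id": 11, "category_id": 2}],
        "images": [{"id": 10, "file_name": "000010.jpg"},
                   {"id": 11, "file_name": "000011.jpg"}],
    }
    print(filter_coco(sample, {"bottle"}))
```

In practice the same filter would be run over the annotation files of the train and val folders to produce the two training-picture subsets.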

7. The garbage classification method according to claim 4, wherein in step two the STM32 controller detects the capacity of the sub-garbage cans through the second photoelectric switch sensors; if the corresponding sub-garbage can is full, a loudspeaker of the STM32 controller sounds a prompt for manual cleaning; if the corresponding sub-garbage can is not full, the loudspeaker sounds a garbage-dumping prompt, the opening of the rotary baffle rotates to the corresponding garbage throwing port, the port opens, and the garbage falls into the sub-garbage can; the STM32 controller and the image recognition component then enter a dormant state and the light supplement lamp is turned off, completing the garbage classification process.

Technical Field

The invention relates to the field of garbage classification, in particular to an automatic classification garbage can based on visual identification and a classification method.

Background

China generates a large amount of garbage, much of which is recyclable garbage with recovery value, so recycling domestic garbage brings great economic, social and environmental benefits. A garbage classification processing system is complex, comprising four links: classified collection, classified transportation, classified processing and classified recycling. Classification must start at the source, and automatic classified collection of garbage is one of the effective measures for solving the garbage classification problem. At present, the sorting of recyclable garbage mainly depends on manual labor; it is a labor-intensive industry with low efficiency.

A prior design ("Design of an intelligent garbage classification can with Internet-of-Things monitoring and control" [J], 2017(01):136-138) identifies garbage with a capacitive sensor, classifying it by detecting the dielectric constants of different kinds of garbage. That approach suffers from difficult sampling, limited recognizable types and a high error rate; the recognizable garbage types are difficult to expand to match the garbage actually deposited; sorted conveyance and deposit of the garbage are not realized; and the installation position of the capacitive sensor requires manual debugging and cannot adapt to large-scale deployment, so its practicability is limited.

Disclosure of Invention

Aiming at the defects of the prior art, the invention aims to provide an automatic classification garbage can based on visual identification and a classification method.

The technical scheme for solving the technical problem of the garbage can is that the invention provides an automatic classification garbage can based on visual identification, which is characterized in that the garbage can comprises a garbage putting port, a first photoelectric switch sensor, an identification classification tray, a second photoelectric switch sensor, a sub-garbage can, an image identification component, an STM32 controller, a two-way stepping motor driver and a garbage can shell;

the side wall of the garbage can shell is provided with a garbage throwing opening; the first photoelectric switch sensor is arranged at the garbage throwing port and used for detecting whether garbage is thrown into the garbage can or not; a plurality of sub-garbage cans are placed in the garbage can shell, a second photoelectric switch sensor is mounted at the opening of each sub-garbage can and used for detecting the capacity conditions of the sub-garbage cans; the recognition and classification tray is placed in the garbage can shell and is positioned above the sub garbage cans;

the identification and classification tray comprises a garbage tray, a V-shaped baffle stepping motor, a camera, a V-shaped baffle, a bracket, a rotary baffle stepping motor and a rotary baffle; the garbage tray is fixed on the inner wall of the garbage can shell; the camera is arranged on the garbage tray through a bracket, and the shooting visual angle range of the camera covers a garbage recognition area in the garbage tray and is used for collecting garbage images; the V-shaped baffle is placed in the garbage tray, the rotating shaft at the center of the V-shaped baffle is rotatably arranged at the center of the garbage tray, and the edges of two ends of the V-shaped baffle are in contact with the inner wall of the garbage tray; the output end of the V-shaped baffle stepping motor is connected with a rotating shaft of the V-shaped baffle to drive the V-shaped baffle to rotate by taking the center of the garbage tray as a shaft, so that garbage is pushed to the upper part of the corresponding garbage throwing port; the rotating baffle is provided with an opening, and a rotating shaft at the center of the rotating baffle is rotatably arranged at the center of the garbage tray; the output end of the stepping motor of the rotary baffle is connected with a rotating shaft of the rotary baffle, the rotary baffle is driven to rotate by taking the center of the garbage tray as a shaft, and the corresponding garbage throwing port is opened to enable the garbage to fall into the corresponding sub-garbage can;

the first photoelectric switch sensor, the second photoelectric switch sensor and the two-way stepping motor driver are all connected with the STM32 controller; the STM32 controller is connected with the image recognition component; the V-shaped baffle plate stepping motor and the rotary baffle plate stepping motor are both connected with a double-path stepping motor driver; the camera is connected with the image recognition component.

The technical scheme for solving the technical problem of the method is to provide a garbage classification method for automatically classifying garbage cans based on visual identification, which is characterized by comprising the following steps of:

step one, training an image recognition component;

(1) making an image recognition training set:

① collecting training pictures: collect pictures of various garbage objects to obtain the training pictures;

② screening training pictures: manually screen the training pictures to obtain a training picture set in which each picture contains a clear image of an object, shows the object's typical characteristics, and has a diversified background;

③ labeling training pictures: label the pictures in the training picture set, marking the garbage in each picture, and output the labels as XML label files;

④ generating a data set in TFRecord format: use a program of the TensorFlow object detection library to convert the XML label files into a unified TFRecord-format file, obtaining the data set in TFRecord format;
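The XML label files produced in step ③ commonly follow the Pascal VOC annotation layout, which is what the TFRecord conversion programs of the TensorFlow object detection library consume. The exact layout below is an assumption for illustration; this sketch only extracts the fields that conversion needs, without depending on TensorFlow itself.

```python
# Parse one Pascal VOC-style XML label file (layout assumed for illustration)
# and return the image size plus the labeled bounding boxes.
import xml.etree.ElementTree as ET

def parse_voc_xml(xml_text):
    root = ET.fromstring(xml_text)
    size = root.find("size")
    boxes = []
    for obj in root.findall("object"):
        b = obj.find("bndbox")
        boxes.append({
            "label": obj.findtext("name"),
            "xmin": int(b.findtext("xmin")), "ymin": int(b.findtext("ymin")),
            "xmax": int(b.findtext("xmax")), "ymax": int(b.findtext("ymax")),
        })
    return {"width": int(size.findtext("width")),
            "height": int(size.findtext("height")),
            "objects": boxes}

if __name__ == "__main__":
    xml_text = """<annotation>
      <size><width>640</width><height>480</height><depth>3</depth></size>
      <object><name>bottle</name>
        <bndbox><xmin>12</xmin><ymin>34</ymin><xmax>200</xmax><ymax>300</ymax></bndbox>
      </object>
    </annotation>"""
    print(parse_voc_xml(xml_text))
```

The returned dictionary is what the conversion program would serialize, one record per picture, into the unified TFRecord file.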

(2) training is started, and the training process is as follows:

① downloading the pre-trained model into the object_detection folder of the TensorFlow object detection library;

② modifying the object category file: in the corresponding label map file in the object_detection/data folder, add or delete object categories according to the actual garbage object categories;

③ modifying the model configuration file in object_detection/samples/configs: change the number of object categories according to the garbage categories actually being trained, specify the file paths of the TFRecord-format data sets used for training and validation, and specify the storage path of the label map file;

④ adjusting the batch size used for each training pass of the pre-trained model according to the CPU and memory configuration of the computer, then starting training; the pre-trained model is trained into a trained model for the Vision Bonnet neural network operation card of the image recognition component;

⑤ converting the trained model into a PB model that can run independently, using the model export program of the TensorFlow object detection library;

⑥ testing the training result: test the PB model with a test program of the TensorFlow object detection library; increase the number of labeled pictures for each object and adjust the parameter values in the model configuration file until the recognition rate of the PB model on the pictures rises to a preset value;
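Step ⑥ repeats labeling and tuning until the recognition rate reaches a preset value. A minimal sketch of that acceptance check is below; the threshold value and the label names are illustrative assumptions, not values from the patent.

```python
# Compute the recognition rate of a test run and compare it to a preset value.
def recognition_rate(results):
    """results: list of (predicted_label, true_label) pairs from the test run."""
    if not results:
        return 0.0
    hits = sum(1 for pred, true in results if pred == true)
    return hits / len(results)

def meets_preset(results, preset=0.9):
    """True when training can stop; otherwise add labeled pictures and retune."""
    return recognition_rate(results) >= preset

if __name__ == "__main__":
    run = [("bottle", "bottle"), ("battery", "battery"),
           ("paper", "bottle"), ("bottle", "bottle")]
    print(recognition_rate(run))  # 0.75: below 0.9, so keep adding labeled pictures
```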

(3) deploying the trained model, specifically comprising the following steps:

① configuring the PB model: copy the PB model, the label map file and the model configuration file to the data directory of TensorRT;

② adding support for the newly added PB model: under the utils directory of TensorRT, change the model path and its model-configuration-file path to those of the newly added PB model and its configuration file, so that the newly added PB model is supported;

③ opening the camera_tf_trt.py file in TensorRT, changing the default model name to the name of the newly added PB model, and changing the label map file path to the path of the newly added label map file;

④ configuring a serial port for recognition-result output: modify the result-output part of the visualization.py file under the utils directory of TensorRT to add serial data output, so that the recognition result is output over the serial port;
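Step ④ adds serial output of the recognition result. The patent does not specify a wire format, so the simple ASCII frame below ("label,confidence\n"), the port name and the baud rate are all assumptions for illustration; the frame builder itself is testable without hardware.

```python
# Build the serial frame carrying one recognition result.
# Frame layout ("label,confidence\n") is an assumption, not from the patent.
def make_serial_frame(label, confidence):
    return f"{label},{confidence:.2f}\n".encode("ascii")

# On the device this frame would be written with pyserial, for example:
#   import serial
#   port = serial.Serial("/dev/ttyS0", 115200)  # port/baud are assumptions
#   port.write(make_serial_frame("bottle", 0.93))

if __name__ == "__main__":
    print(make_serial_frame("bottle", 0.93))
```

The STM32 controller on the other end of the serial line parses the label field to choose the target sub-garbage can.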

step two, after the training is finished, garbage classification is started:

① throwing in garbage: a user throws garbage in through the garbage throwing port; the garbage triggers the first photoelectric switch sensor and falls onto the identification and classification tray; the trigger signal wakes the STM32 controller from its dormant state, and the STM32 controller wakes the image recognition component;

② garbage image acquisition: the light sensor detects the illumination intensity and sends the information to the STM32 controller; the STM32 controller switches the annular light supplement lamp on or off and adjusts its intensity, providing suitable illumination for the camera to acquire garbage images;

③ garbage recognition: the camera sends the collected garbage image to the Vision Bonnet neural network operation card of the image recognition component for feature recognition, and the image recognition component transmits the recognition result to the STM32 controller;

④ classifying and dropping garbage: the STM32 controller classifies the garbage according to the recognition result and drives the V-shaped baffle to push the garbage to the corresponding garbage throwing area; meanwhile, the STM32 controller detects the capacity of the sub-garbage cans through the second photoelectric switch sensors; if the corresponding sub-garbage can is full, it is cleaned manually; if it is not full, the opening of the rotary baffle rotates to the corresponding garbage throwing port, the port opens, and the garbage falls into the sub-garbage can; the V-shaped baffle and the rotary baffle then return to their initial positions, completing the garbage classification process.
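The decision logic of step ④ (pick a sub-garbage can from the recognized category, and refuse the drop when that can's second photoelectric switch sensor reports it full) can be sketched as follows. The baffle angles and category names are illustrative assumptions; the real logic runs in firmware on the STM32 controller.

```python
# Sketch of the step-④ dispatch decision. Angles and categories are
# assumptions for illustration, not values from the patent.
BIN_ANGLE = {"wet": 0, "recyclable": 90, "dry": 180, "harmful": 270}

def dispatch(category, bin_full):
    """Return the action the controller would take for one recognized item.

    bin_full maps each category to the full/not-full state read from that
    sub-garbage can's second photoelectric switch sensor.
    """
    if category not in BIN_ANGLE:
        return {"action": "reject", "reason": "unknown category"}
    if bin_full.get(category, False):
        # Full can: sound the prompt for manual cleaning instead of dropping.
        return {"action": "alert", "reason": "sub-garbage can full"}
    # Rotate the rotary baffle opening to this category's throwing port.
    return {"action": "drop", "angle": BIN_ANGLE[category]}

if __name__ == "__main__":
    full = {"wet": False, "recyclable": True, "dry": False, "harmful": False}
    print(dispatch("recyclable", full))  # full can, so alert for manual cleaning
    print(dispatch("dry", full))         # rotate baffle to 180 degrees and drop
```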

Compared with the prior art, the invention has the beneficial effects that:

(1) This garbage can applies AI technology to garbage recognition and deep-learning visual classification to garbage sorting. It collects garbage images through the camera and uses the TensorFlow deep learning framework; transfer training of the MobileNet SSD model greatly improves the accuracy of garbage recognition. The STM32 microcontroller processes the recognition result and controls the stepping motors to drop the garbage accurately into the corresponding sub-garbage can, realizing conveyance and deposit of garbage without manual participation, improving work efficiency and accuracy, and greatly reducing labor cost.

(2) Baffles cooperating with stepping motors classify the garbage in place of manual sorting, improving work efficiency, reducing running cost, simplifying the mechanical structure of the system and improving system stability.

(3) Based on the TensorFlow deep learning framework, the garbage classification and recognition method learns during operation: the recognized garbage types gradually increase, and the accuracy gradually improves.

(4) The MobileNet SSD model, which runs more easily on embedded equipment and recognizes faster, is adopted; its smaller parameter count, lower computation load and higher performance let garbage be recognized quickly and accurately. Google's Vision Bonnet neural network operation card performs the neural network computation for image recognition, reducing the computational burden on the system CPU and improving the efficiency of the system's neural network computation.

(5) The visual recognition component uses open-source code, which facilitates later upgrades and secondary development by users. The training picture set and training volume can be expanded by the user, so the recognized garbage types and recognition capacity can be increased later purely through further training, reducing running cost.

(6) The camera position requires no special debugging, which facilitates large-scale production and deployment.

(7) This garbage can is intended for public places: after residents drop garbage into it, classification proceeds automatically, improving classification efficiency and reducing running cost. Garbage is thus classified at the source of deposit, giving the design strong practicability and generalizability.

Drawings

FIG. 1 is an exploded view of the overall structure of one embodiment of the present invention;

FIG. 2 is an axonometric schematic view of the identification and classification tray of an embodiment of the present invention;

FIG. 3 is an axonometric view of the identification and classification tray of an embodiment of the present invention from another perspective;

FIG. 4 is a schematic top view of an identification sorting tray according to an embodiment of the present invention;

FIG. 5 is a recognition rate graph of a model trained using the image data set according to embodiment 1 of the present invention;

FIG. 6 is a graph of the total loss rate of a model trained using the image data set according to embodiment 1 of the present invention;

FIG. 7 is a graph showing the recognition effect of the model after 11026 training iterations on the image data set in embodiment 1 of the present invention.

In the figures: 1. garbage throwing port; 2. first photoelectric switch sensor; 3. identification and classification tray; 4. second photoelectric switch sensor; 5. sub-garbage can; 6. image recognition component; 7. STM32 controller; 8. two-way stepping motor driver; 9. garbage can shell; 31. garbage tray; 32. V-shaped baffle stepping motor; 33. camera; 34. annular light supplement lamp; 35. V-shaped baffle; 36. bracket; 37. rotary baffle stepping motor; 38. rotary baffle; 310. wet garbage throwing area; 311. wet garbage throwing port; 312. recyclable garbage throwing area; 313. recyclable garbage throwing port; 314. dry garbage throwing port; 315. dry garbage throwing area; 316. harmful garbage throwing port; 317. harmful garbage throwing area.

Detailed Description

The present invention will be further described with reference to the following examples and accompanying drawings. The specific examples are only intended to illustrate the invention in further detail and do not limit the scope of protection of the claims of the present application.

The invention provides an automatic classification garbage can based on visual identification (referred to simply as the garbage can), comprising a garbage throwing port 1, a first photoelectric switch sensor 2, an identification and classification tray 3, a second photoelectric switch sensor 4, sub-garbage cans 5, an image recognition assembly 6, an STM32 controller 7, a two-way stepping motor driver 8 and a garbage can shell 9;

the side wall of the garbage can shell 9 is provided with a garbage throwing opening 1; the first photoelectric switch sensor 2 is arranged at the garbage throwing-in opening 1 and used for detecting whether garbage is thrown into the garbage can or not; a plurality of sub-garbage cans 5 are placed in the garbage can shell 9, a second photoelectric switch sensor 4 is mounted at the opening of each sub-garbage can 5, and the second photoelectric switch sensor 4 is used for detecting the capacity of the sub-garbage cans 5; the image recognition assembly 6, the STM32 controller 7 and the two-way stepping motor driver 8 are arranged in the garbage can shell 9; the recognition and classification tray 3 is placed in the garbage can shell 9, is positioned above the sub garbage can 5 and below the garbage throwing-in opening 1, and is communicated with the garbage throwing-in opening 1;

the recognition and classification tray 3 comprises a garbage tray 31, a V-shaped baffle stepping motor 32, a camera 33, an annular light supplement lamp 34, a V-shaped baffle 35, a bracket 36, a rotary baffle stepping motor 37, a rotary baffle 38 and a light sensor (not shown in the figure); the garbage tray 31 is fixed on the inner wall of the garbage can shell 9; the camera 33 is mounted on the garbage tray 31 through the bracket 36, and the mounting height is determined by the shooting visual angle range of the camera, so that the shooting visual angle range of the camera 33 covers the whole garbage recognition area in the garbage tray 31 and is used for collecting garbage images; the annular light supplement lamp 34 is mounted on the garbage tray 31 through a support 36 and is positioned above the camera 33, so that sufficient illumination is provided for the camera 33 to collect garbage images; the light sensor is arranged on the support 36 and located in the illumination range of the annular light supplement lamp 34, the light sensor detects illumination intensity and sends illumination intensity information to the STM32 controller 7, the STM32 controller 7 controls the opening and closing of the annular light supplement lamp 34 and the illumination intensity according to the illumination intensity information, appropriate illumination intensity is provided for the camera 33 to collect garbage images, and all-weather operation of the garbage can is guaranteed; the V-shaped baffle 35 is placed in the garbage tray 31, the rotating shaft at the center of the V-shaped baffle is rotatably arranged at the center of the garbage tray 31, and the edges of two ends of the V-shaped baffle are in contact with the inner wall of the garbage tray 31; the output end of the V-shaped baffle stepping motor 32 is connected with the rotating shaft of the V-shaped baffle 35 to drive the V-shaped baffle 35 to rotate by taking the center of the garbage tray 31 as the shaft, so that 
the garbage is pushed to the upper part of the corresponding garbage throwing port; the rotary baffle 38 is positioned outside the garbage tray 31 and on the other side of the V-shaped baffle 35, and is provided with an opening, and a rotating shaft at the center of the rotary baffle is rotatably arranged at the center of the garbage tray 31; the output end of the rotating baffle stepping motor 37 is connected with the rotating shaft of the rotating baffle 38, so as to drive the rotating baffle 38 to rotate by taking the center of the garbage tray 31 as a shaft, and open the corresponding garbage throwing port to enable the garbage to fall into the corresponding sub-garbage can 5;

the first photoelectric switch sensor 2, the annular light supplement lamp 34, the second photoelectric switch sensor 4, the two-way stepping motor driver 8 and the light sensor are all connected with the STM32 controller 7 through signal lines; the STM32 controller 7 is connected with the image recognition component 6 through a serial port line; the V-shaped baffle stepping motor 32 and the rotary baffle stepping motor 37 are both connected with the two-way stepping motor driver 8 through leads; the camera 33 is connected to the image recognition component 6 via a USB cable.
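The light-sensor-driven fill-light control described above (light sensor reads illuminance, the STM32 controller 7 switches and dims the annular lamp 34) can be sketched as a simple mapping from measured lux to a PWM duty value. The thresholds and the linear ramp below are illustrative assumptions, not values from the patent:

```python
def fill_light_level(lux, full_on=50, off_above=200, max_pwm=255):
    """Map measured illuminance (lux) to a fill-light PWM duty.

    Full brightness below `full_on` lux, lamp off above `off_above` lux,
    linear ramp in between. All thresholds are illustrative assumptions.
    """
    if lux <= full_on:
        return max_pwm          # dark: lamp at full power
    if lux >= off_above:
        return 0                # bright ambient light: lamp off
    # linear interpolation between the two thresholds
    return int(max_pwm * (off_above - lux) / (off_above - full_on))
```

On the real device this duty value would be written to a PWM channel of the STM32 controller 7; here it is only computed, so the logic can be checked on its own.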

There may be four sub-garbage cans 5, respectively used for containing wet garbage, recyclable garbage, dry garbage and harmful garbage, so that the four types of garbage are accurately classified according to the national standard. The garbage recognition area of the recognition and classification tray 3 may be divided into a wet garbage throwing area 310, a wet garbage throwing port 311, a recoverable garbage throwing area 312, a recoverable garbage throwing port 313, a dry garbage throwing port 314, a dry garbage throwing area 315, a harmful garbage throwing port 316, and a harmful garbage throwing area 317;
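Since the four throwing areas are arranged around the circular garbage tray 31, moving the V-shaped baffle 35 to an area amounts to converting a target angle into stepper-motor steps. The 90-degree spacing and the 200-step/revolution motor below are assumptions for illustration; the patent does not specify either value:

```python
# Assumed zone angles (degrees) for the four throwing areas; the
# 90-degree spacing is an illustrative guess, not from the patent.
ZONES = {
    "wet": 0,          # wet garbage throwing area 310 (home position)
    "recyclable": 90,  # recoverable garbage throwing area 312
    "dry": 180,        # dry garbage throwing area 315
    "harmful": 270,    # harmful garbage throwing area 317
}

def steps_to_zone(garbage_class, steps_per_rev=200):
    """Convert a zone's angle into stepper steps from the home position,
    assuming a typical 200-step-per-revolution stepping motor."""
    angle = ZONES[garbage_class]
    return angle * steps_per_rev // 360
```

The STM32 controller 7 would send this step count to the two-way stepping motor driver 8; the same conversion applies to positioning the opening of the rotary baffle 38.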

the image recognition component 6 consists of a Vision Bonnet neural network operation card and a Raspberry Pi Zero WH microcomputer; the Vision Bonnet neural network operation card performs the neural network computation for image recognition and loads the MobileNet SSD model for garbage recognition; the Raspberry Pi Zero WH microcomputer runs an Ubuntu system to support the TensorFlow deep learning framework and to process the neural network computation results.

The Raspberry Pi Zero WH microcomputer performs the following tasks: 1. Collect training images: photograph garbage in different scenes, under different lighting conditions and from various angles. 2. Manually label each image. 3. Prepare the training data set: convert the labels to the appropriate format, make a label map, and split the images into a large training set and a smaller evaluation set. 4. Set the correct parameters in the object detection configuration file. 5. Prepare the TensorFlow environment. 6. Train and generate the model. 7. Freeze the model. 8. Compile the model with the Bonnet Compiler under Ubuntu. 9. Load the compiled model onto the Vision Kit. 10. Adapt the example code to work with the compiled model.

The STM32 controller 7 may be an STM32F103ZET6;

the invention also provides an automatic classification method based on visual identification (the method for short), which comprises the following steps:

step one, training an image recognition component 6;

(1) making an image recognition training set:

① collecting training pictures: pictures of various garbage objects shot under different backgrounds, different lighting, different angles and with different quantities of garbage are combined with pictures acquired by tools such as a web crawler and pictures screened from the MS COCO data set, so as to obtain training pictures that improve the accuracy;

preferably, the specific method for screening the pictures of various garbage objects from the MS COCO data set is as follows: screen out a corresponding object classification list from the MS COCO data set, and extract training pictures from the train and val folders of the MS COCO data set according to the classification list;

② screening training pictures: the training pictures are screened manually and unqualified pictures are deleted to obtain the training picture set; each picture in the training picture set must contain a clear image of the object, show the object's typical characteristics, and have a diverse background;

③ labeling the training pictures: LabelImg is used to label the pictures in the training picture set, i.e. to mark the garbage in each picture; the software outputs an XML annotation file in Pascal VOC format;

④ generating a data set in TFRecord format: the XML annotation files are converted into a unified TFRecord format file by the create_pascal_tf_record.py program of TensorFlow's object detection library (Object Detection API), yielding the data set in TFRecord format and completing the image recognition training set;
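The Pascal VOC XML that LabelImg emits in step ③, and that create_pascal_tf_record.py consumes in step ④, can be sketched with the standard library alone. The filename, label and box values below are made-up examples:

```python
import xml.etree.ElementTree as ET

def make_voc_annotation(filename, label, box, size=(640, 480)):
    """Build a minimal Pascal VOC annotation of the kind LabelImg writes.

    `box` is (xmin, ymin, xmax, ymax) in pixels; only the fields needed
    by create_pascal_tf_record.py-style converters are included.
    """
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = filename
    sz = ET.SubElement(ann, "size")
    ET.SubElement(sz, "width").text = str(size[0])
    ET.SubElement(sz, "height").text = str(size[1])
    obj = ET.SubElement(ann, "object")
    ET.SubElement(obj, "name").text = label       # the garbage class
    bb = ET.SubElement(obj, "bndbox")
    for tag, v in zip(("xmin", "ymin", "xmax", "ymax"), box):
        ET.SubElement(bb, tag).text = str(v)
    return ET.tostring(ann, encoding="unicode")
```

A real annotation would carry one `object` element per labeled garbage item in the picture; the sketch emits a single object for brevity.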

(2) training is started, and the training process is as follows:

① downloading the MobileNet SSD pre-trained model: download the pre-trained model ssd_mobilenet to the object_detection folder of TensorFlow's object detection library;

② modifying the object type file: add or delete the object types contained in the corresponding label map file (the mapping file of object label numbers to object names) in the object_detection/data folder according to the actual garbage object types;

③ modifying the model configuration file ssd_mobilenet_v1_coco.config under object_detection/samples/configs: modify the number of object classes num_classes according to the garbage types actually trained, specify the file path input_path of the training and validation TFRecord data sets (i.e. the training and validation TFRecord file paths), and specify the storage path of the label map file;

④ adjusting the batch size used for each training pass according to the CPU and memory configuration of the computer hardware, then starting the training; the pre-trained model is trained into a model to be run on the Vision Bonnet neural network operation card of the image recognition component 6;

⑤ converting the trained model into a PB model (PB: protocol buffer): the model export program export_inference_graph.py of TensorFlow's object detection library converts the trained model into an independently runnable PB model (the garbage recognition model);

⑥ testing the training result: the trained PB model is tested through object_detection_configuration.py of TensorFlow's object detection library; the recognition rate of the PB model is raised to a preset value by increasing the number of labeled pictures for each object (generally not less than 1000 per object) and adjusting the parameter values in the model configuration file ssd_mobilenet_v1_coco.config;
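Steps ② and ③ above both revolve around the label map file, which pairs each numeric class ID with a class name. A minimal sketch of generating such a .pbtxt-style label map for the four garbage classes follows; the class names are illustrative (the actual names must match those used in the annotations):

```python
# Illustrative class names; IDs start at 1 because 0 is reserved for
# the background class in the TensorFlow Object Detection API.
GARBAGE_CLASSES = ["wet_garbage", "recyclable_garbage",
                   "dry_garbage", "harmful_garbage"]

def make_label_map(classes):
    """Render a label map in the .pbtxt format used by the
    TensorFlow Object Detection API."""
    entries = []
    for i, name in enumerate(classes, start=1):
        entries.append("item {\n  id: %d\n  name: '%s'\n}" % (i, name))
    return "\n".join(entries) + "\n"
```

The resulting text would be saved alongside the data set and its path written into num_classes/label_map_path fields of the model configuration file.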

(3) the operating environment of the image recognition component 6 is configured as follows:

① configuring the PB model: copy the PB model, the label map file and the model configuration file ssd_mobilenet_v1_coco.config to the data directory of TensorRT;

② adding support for the newly added PB model: in the egowings_models file of TensorRT, modify the model and its model configuration file path to those of the newly added PB model;

③ configuring the default model: open the camera_tf_trt.py file in TensorRT, modify the default model name to the newly added PB model name, and modify the label map file path to the newly added label map file path;

④ configuring the serial port to output the recognition result: modify the result output section of the visualization.py file under the utils directory of TensorRT and add serial data output, so that the recognition result is output over the serial port;
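The serial output added in step ④ only needs to encode a class label (and, plausibly, a confidence score) into bytes the STM32 controller 7 can parse. The frame layout below (`$label,score\n`) is an assumption for illustration; the patent does not specify the wire format:

```python
def format_result_frame(label, score):
    """Encode one recognition result as a line-oriented serial frame.

    The '$label,score\\n' layout is an illustrative assumption,
    not the patent's actual protocol.
    """
    return ("$%s,%.2f\n" % (label, score)).encode("ascii")

# Sending the frame over the Raspberry Pi's UART would use pyserial,
# roughly (not executed here, port name is an assumption):
#   import serial
#   port = serial.Serial("/dev/ttyS0", 115200)
#   port.write(format_result_frame("wet_garbage", 0.93))
```

Keeping the frame construction as a pure function makes it easy to unit-test without hardware attached.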

step two, after the training is finished, starting garbage classification based on vision:

① putting in garbage: when no garbage is being put in, the image recognition component 6 and the STM32 controller 7 are in a dormant state to reduce power consumption; a user puts in garbage through the garbage putting port 1, triggering the first photoelectric switch sensor 2, and the garbage falls onto the V-shaped baffle 35 of the recognition and classification tray 3; the trigger signal wakes the dormant STM32 controller 7, and the STM32 controller 7 wakes the image recognition component 6;

② collecting the garbage image: the light sensor detects the illumination intensity and sends this information to the STM32 controller 7; the STM32 controller 7 switches the annular light supplement lamp 34 on or off and adjusts its brightness to provide appropriate illumination, and the camera 33 collects the garbage image;

③ identifying the garbage: the camera 33 sends the collected garbage image to the Vision Bonnet neural network operation card of the image recognition component 6, which identifies the characteristics of the garbage image through neural network computation; the image recognition component 6 transmits the recognition result to the STM32 controller 7 through the serial port;

④ classifying and dumping the garbage: the STM32 controller 7 classifies the garbage according to the recognition result ("wet garbage", "recoverable garbage", "dry garbage", "harmful garbage") and controls the V-shaped baffle 35 to convey the garbage to the corresponding garbage throwing area. Meanwhile, the STM32 controller 7 detects the capacity of the sub-garbage cans 5 through the second photoelectric switch sensors 4. If the corresponding sub-garbage can 5 is full, the speaker of the STM32 controller 7 plays the prompt "the garbage can is full, please empty it in time" so that it can be emptied manually; if it is not full, the speaker plays the corresponding dumping prompt, the opening of the rotary baffle 38 rotates to the corresponding garbage throwing area, the corresponding garbage throwing port is opened, and the garbage falls into the sub-garbage can 5. The V-shaped baffle 35 and the rotary baffle 38 then return to their initial positions (in this embodiment, the V-shaped baffle 35 returns to the wet garbage throwing area 310 and the opening of the rotary baffle 38 returns to the harmful garbage throwing area 317); finally, the STM32 controller 7 and the image recognition component 6 return to the dormant state, completing the classification process.
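The decision logic of step ④ (check the recognition result against the fill state reported by the second photoelectric switch sensors 4, then either prompt or dump) can be sketched as a pure function. The action strings returned below stand in for firmware commands and are illustrative only:

```python
def handle_recognition(label, bin_full):
    """Decide the controller's action for one recognition result.

    `bin_full` maps each garbage class to the full/not-full state read
    from the second photoelectric switch sensor of its sub-garbage can.
    The returned strings are illustrative stand-ins for firmware actions.
    """
    if label not in bin_full:
        return "reject: unknown class"
    if bin_full[label]:
        # speaker prompt: "the garbage can is full, please empty it in time"
        return "announce: bin full, please empty"
    # rotate the V-shaped baffle to the class's throwing area, open the
    # rotary baffle so the garbage drops, then return both baffles home
    return "dump into %s bin" % label
```

On the real device the "dump" branch would drive the two stepping motors through the two-way driver 8 and then put the controller back to sleep.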
