Personnel behavior monitoring emergency disposal method based on machine vision

Document No. 1963987 · Published 2021-12-14 · Views: 24 · Original language: Chinese

Reading note: This technology, "Personnel behavior monitoring emergency disposal method based on machine vision", was designed and created by 刘江涛, 张小栋, 边佳帅 and 张文昊 on 2021-11-17. The invention relates to a personnel behavior monitoring emergency disposal method based on machine vision, in the technical field of building construction safety. The method comprises the following steps: a video processing module acquires several pieces of video data of a construction site shot by a visual detection device, analyzes and processes the video data, and generates training data and verification data; the video processing module inputs the processed training data into a training module for model training and, when training is finished, inputs the verification data into the training module for model verification; the verified model is embedded in the visual detection device on the construction site to monitor the site and the behavior of construction workers; an event processing module sends corresponding early warning information to security personnel according to the output results of the visual detection device and the model. This improves the efficiency of data processing and thereby the safety of the construction site.

1. A personnel behavior monitoring emergency disposal method based on machine vision is characterized by comprising the following steps:

step S1, the video processing module acquires a plurality of video data of the construction site shot by the visual detection device, analyzes and processes the video data of the construction site, and generates training data and verification data;

step S2, the video processing module inputs the training data after the analysis and processing into the training module for model training, and inputs the verification data into the training module for model verification when the training is finished;

step S3, embedding the verified model into the visual detection device of the construction site to monitor the behaviors of the construction site and the constructors;

step S4, the event processing module sends corresponding early warning information to security personnel according to the output results of the visual detection device and the model;

in step S2, when the model is trained, the input unit of the training module takes a construction equipment feature image from the training data as the input of the model and the risk coefficient of that construction equipment feature image as the corresponding output, takes a constructor feature image from the training data as the input of the model and the protective gear wearing pass rate of that constructor feature image as the corresponding output, and ends the training when the preset number of iterations is reached;

in step S3, each time verification of the model is completed, the comparison unit of the training module obtains the actual protective gear wearing pass rate E and the actual risk coefficient F output when the verification data is input, and determines whether the single verification is up to standard according to the comparison between the actual pass rate E and the pass rate E0 of the verification data and the comparison between the actual risk coefficient F and the risk coefficient F0 corresponding to the verification data; when the model has been verified a preset number of times C0, the comparison unit obtains the model verification pass rate P, determines whether the model reaches the standard according to P, and, when the model is determined not to reach the standard, trains the model again after adjusting its preset number of iterations.
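The verification procedure of claim 1 can be illustrated with a short sketch. This is a minimal illustration under our own assumptions: the function name, the tolerance-based notion of "consistent", and all numeric values are ours, not the patent's.

```python
def verification_pass_rate(results, tolerance=0.0):
    """results: list of (E, E0, F, F0) tuples from C0 verification runs.

    A single run is up to standard when the actual protective gear
    wearing pass rate E and actual risk coefficient F match the
    reference values E0 and F0 within the given tolerance.
    Returns P = C / C0, the fraction of runs that reached the standard.
    """
    passed = sum(
        1 for e, e0, f, f0 in results
        if abs(e - e0) <= tolerance and abs(f - f0) <= tolerance
    )
    return passed / len(results)

# Two illustrative runs: the first matches exactly, the second does not.
runs = [(0.95, 0.95, 0.80, 0.80), (0.90, 0.95, 0.80, 0.80)]
P = verification_pass_rate(runs)  # 1 of 2 runs matched -> P = 0.5
```

The model "reaches the standard" when the returned P is at least the preset achievement rate P0; otherwise the iteration count is adjusted and training restarts, as the later claims describe.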

2. The machine-vision-based personnel behavior monitoring emergency disposal method according to claim 1, wherein in step S1, the analysis and processing of the construction site video data by the video processing module comprises:

step S11, the obtaining unit obtains a plurality of video data of the construction site;

step S12, the processing unit splits the several construction site videos into frames of construction site images, divides these images into non-construction images and construction images, and divides both into training data and verification data according to a preset proportion B;

step S13, the analysis unit analyzes the non-construction images and the construction images in the divided training data, and extracts the feature images of the construction equipment from the non-construction images and the feature images of the constructors from the construction images;

step S14, the analysis unit analyzes the equipment risk coefficient of each construction equipment feature image and the protective gear wearing pass rate of each constructor feature image, and the output unit outputs the construction equipment feature images with their corresponding equipment risk coefficients F and the constructor feature images with their corresponding protective gear wearing pass rates E to the training module.
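Steps S11 to S14 above amount to extracting frames from the site videos and splitting them into training and verification sets by the preset proportion B. A minimal sketch; the function name, the shuffling strategy, and the 80/20 value of B are our assumptions for illustration:

```python
import random

def split_frames(frames, proportion_b, seed=0):
    """Shuffle the extracted site-image frames and split them so that a
    fraction `proportion_b` becomes training data and the remainder
    becomes verification data (the preset proportion B of step S12)."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = frames[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * proportion_b)
    return shuffled[:cut], shuffled[cut:]

frames = [f"frame_{i:04d}.jpg" for i in range(100)]
train, verify = split_frames(frames, proportion_b=0.8)  # 80 / 20 split
```

Feature extraction and the per-image labels (risk coefficient F, pass rate E) would then be computed on each subset, as step S14 describes.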

3. The machine-vision-based personnel behavior monitoring emergency disposal method according to claim 2, wherein in step S2, when the model is trained, the training module sets the initial size of the convolution kernels of the model to a, their initial number to R, the initial number of channels to D and the initial step size to λ; the input unit of the training module inputs the construction equipment feature images with their risk coefficients and the constructor feature images with their protective gear wearing pass rates from the training data into the model for iterative training, stops the training when the preset number of iterations G is reached, and then inputs the construction equipment feature images and constructor feature images from the verification data into the model for verification and obtains the verification result.

4. The machine-vision-based personnel behavior monitoring emergency disposal method according to claim 3, wherein when the model is verified, the comparison module compares the actual risk coefficient F output by the model with the risk coefficient F0 in the verification data, and compares the actual protective gear wearing pass rate E output by the model with the pass rate E0 in the verification data; if E is consistent with E0 and F is consistent with F0, the single verification is determined to be up to standard; if E is inconsistent with E0 and/or F is inconsistent with F0, the single verification is determined not to reach the standard; when the model has been verified the preset number of times C0, the analysis module obtains the model verification achievement rate P and compares it with a preset achievement rate P0, setting P = C/C0, where C is the number of verifications that reached the standard,

if P is larger than or equal to P0, the comparison unit judges that the model training reaches the standard;

if P is less than P0, the comparison unit judges that the model training does not reach the standard, and the adjustment unit of the training module adjusts the iteration number.

5. The machine-vision-based personnel behavior monitoring emergency disposal method according to claim 4, wherein when the adjustment unit adjusts the iteration number G, the comparison unit calculates the difference ΔP between the achievement rate P and the preset achievement rate P0, the adjustment unit selects the corresponding iteration number adjustment coefficient according to the comparison between ΔP and the preset achievement rate differences, adjusts the preset iteration number accordingly, and trains the model again when the adjustment is completed,

wherein the adjustment unit is further provided with a first preset achievement rate difference ΔP1, a second preset achievement rate difference ΔP2, a third preset achievement rate difference ΔP3, a first iteration number adjustment coefficient K1, a second iteration number adjustment coefficient K2 and a third iteration number adjustment coefficient K3, where ΔP1 < ΔP2 < ΔP3 and 1 < K1 < K2 < K3 < 2,

when ΔP ≤ ΔP1, the adjustment unit selects the first iteration number adjustment coefficient K1 to adjust the preset iteration number;

when ΔP1 < ΔP ≤ ΔP2, the adjustment unit selects the second iteration number adjustment coefficient K2 to adjust the preset iteration number;

when ΔP2 < ΔP ≤ ΔP3, the adjustment unit selects the third iteration number adjustment coefficient K3 to adjust the preset iteration number;

when the adjustment unit selects the i-th iteration number adjustment coefficient Ki (i = 1, 2, 3) to adjust the preset iteration number, it sets the adjusted preset iteration number to G1 = G × Ki.
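The banded adjustment of claim 5 can be sketched as follows. Only the selection logic follows the claim; the function name, threshold values and coefficient values are illustrative assumptions:

```python
def adjust_iterations(G, delta_p, thresholds, coefficients):
    """Pick the iteration-number adjustment coefficient Ki from the band
    that the achievement-rate shortfall ΔP falls into, then return
    G1 = G * Ki, rounded to a whole iteration count.

    thresholds = (ΔP1, ΔP2, ΔP3), coefficients = (K1, K2, K3),
    with ΔP1 < ΔP2 < ΔP3 and 1 < K1 < K2 < K3 < 2."""
    for threshold, k in zip(thresholds, coefficients):
        if delta_p <= threshold:
            return round(G * k)
    return round(G * coefficients[-1])  # ΔP beyond ΔP3: use the largest K

# ΔP = 0.08 falls in the second band (0.05 < 0.08 <= 0.10), so K2 = 1.5.
G1 = adjust_iterations(100, 0.08, (0.05, 0.10, 0.15), (1.2, 1.5, 1.8))  # 150
```

The larger the shortfall ΔP, the larger the selected coefficient, so a model that misses the standard badly is retrained with proportionally more iterations.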

6. The machine-vision-based personnel behavior monitoring emergency disposal method according to claim 5, wherein when the model is verified again, if P < P0, the comparison unit obtains the verified actual protective gear wearing pass rate E and actual risk coefficient F, compares E with the pass rate E0 corresponding to the verification data and F with the risk coefficient F0 corresponding to the verification data, and determines whether the training is up to standard according to the comparison results,

if E is larger than or equal to E0 and F is larger than or equal to F0, the analysis unit judges that the model training reaches the standard;

if E < E0 and/or F < F0, the analysis unit determines that the model training did not meet the standard.

7. The machine-vision-based personnel behavior monitoring emergency disposal method according to claim 6, wherein when the comparison unit determines that the model training does not reach the standard and E < E0, the comparison unit calculates the pass rate difference ΔE between the actual protective gear wearing pass rate E and the pass rate E0 in the verification data, setting ΔE = E0 − E, and the adjustment unit selects the corresponding convolution kernel size adjustment coefficient according to ΔE to adjust the size of the convolution kernel, and trains the model again when the adjustment is completed,

wherein the adjustment unit is further provided with a first preset pass rate difference ΔE1, a second preset pass rate difference ΔE2, a third preset pass rate difference ΔE3, a first convolution kernel size adjustment coefficient Ka1, a second convolution kernel size adjustment coefficient Ka2 and a third convolution kernel size adjustment coefficient Ka3, where ΔE1 < ΔE2 < ΔE3 and 1 < Ka1 < Ka2 < Ka3 < 2,

when ΔE ≤ ΔE1, the adjustment unit selects the first convolution kernel size adjustment coefficient Ka1 to adjust the size of the convolution kernel;

when ΔE1 < ΔE ≤ ΔE2, the adjustment unit selects the second convolution kernel size adjustment coefficient Ka2 to adjust the size of the convolution kernel;

when ΔE2 < ΔE ≤ ΔE3, the adjustment unit selects the third convolution kernel size adjustment coefficient Ka3 to adjust the size of the convolution kernel;

when the adjustment unit selects the j-th convolution kernel size adjustment coefficient Kaj (j = 1, 2, 3) to adjust the size of the convolution kernel, it sets the adjusted convolution kernel size to a1 = a × Kaj.
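The kernel-size adjustment of claim 7 follows the same banded pattern. A sketch under our own assumptions: the names and numeric values are illustrative, and the rounding to the nearest odd size (so the kernel stays centred) is our addition, not the patent's:

```python
def adjust_kernel_size(a, delta_e, thresholds, coefficients):
    """Pick the kernel-size adjustment coefficient Kaj from the band the
    pass-rate shortfall ΔE = E0 - E falls into, then return a1 = a * Kaj,
    rounded up to the nearest odd integer (our assumption)."""
    for threshold, ka in zip(thresholds, coefficients):
        if delta_e <= threshold:
            scaled = a * ka
            break
    else:
        scaled = a * coefficients[-1]  # ΔE beyond ΔE3: use the largest Ka
    size = round(scaled)
    return size if size % 2 == 1 else size + 1  # keep kernels odd-sized

# ΔE = 0.12 falls in the third band, so Ka3 = 1.8: 3 * 1.8 = 5.4 -> 5.
a1 = adjust_kernel_size(3, 0.12, (0.05, 0.10, 0.15), (1.2, 1.5, 1.8))
```

A larger kernel enlarges the receptive field, which is presumably why the patent grows the kernel when the gear-wearing pass rate falls short.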

8. The machine-vision-based personnel behavior monitoring emergency disposal method according to claim 7, wherein when the comparison unit determines that the model training does not reach the standard and F < F0, the analysis unit calculates the risk coefficient difference ΔF between the actual risk coefficient F and the risk coefficient F0 in the verification data, setting ΔF = F0 − F; the adjustment unit selects the corresponding convolution kernel size adjustment coefficient according to the comparison between ΔF and the preset risk coefficient differences to adjust the size of the convolution kernel, and trains the model again when the adjustment is completed,

wherein the adjustment unit is further provided with a first preset risk coefficient difference ΔF1, a second preset risk coefficient difference ΔF2, a third preset risk coefficient difference ΔF3, a fourth convolution kernel size adjustment coefficient Ka4, a fifth convolution kernel size adjustment coefficient Ka5 and a sixth convolution kernel size adjustment coefficient Ka6, where ΔF1 < ΔF2 < ΔF3 and 1 < Ka4 < Ka5 < Ka6 < 1.5,

when ΔF ≤ ΔF1, the adjustment unit selects the fourth convolution kernel size adjustment coefficient Ka4 to adjust the size of the convolution kernel;

when ΔF1 < ΔF ≤ ΔF2, the adjustment unit selects the fifth convolution kernel size adjustment coefficient Ka5 to adjust the size of the convolution kernel;

when ΔF2 < ΔF ≤ ΔF3, the adjustment unit selects the sixth convolution kernel size adjustment coefficient Ka6 to adjust the size of the convolution kernel;

when the adjustment unit selects the u-th convolution kernel size adjustment coefficient Kau (u = 4, 5, 6) to adjust the size of the convolution kernel, it sets the adjusted convolution kernel size to a2 = a × Kau.

9. The machine-vision-based personnel behavior monitoring emergency disposal method according to claim 8, wherein when the analysis module determines that the model training does not reach the standard, E < E0 and F < F0, the adjustment unit calculates the sum A3 of the adjusted convolution kernel size a1 corresponding to the actual protective gear wearing pass rate and the adjusted convolution kernel size a2 corresponding to the actual risk coefficient, setting A3 = a1 + a2, and sets the size of the convolution kernels of the model to A3;

when the adjustment unit has adjusted the size of the convolution kernel to Az (z = 1, 2, 3) and the model training and verification are completed but the analysis module still determines that the model training does not reach the standard, the analysis unit takes the sum of the actual protective gear wearing pass rate E and the actual risk coefficient F as the actual risk value M of the model output, takes the sum of the pass rate E0 and the risk coefficient F0 corresponding to the verification data as the preset risk value M0, and calculates the risk value difference ΔM between the actual risk value M and the preset risk value M0, setting ΔM = M0 − M; the adjustment unit then selects the corresponding iteration number correction coefficient according to the comparison between ΔM and the preset risk value differences to correct the preset iteration number,

wherein the adjustment unit is further provided with a first preset risk value difference ΔM1, a second preset risk value difference ΔM2, a third preset risk value difference ΔM3, a first iteration number correction coefficient X1, a second iteration number correction coefficient X2 and a third iteration number correction coefficient X3, where ΔM1 < ΔM2 < ΔM3 and 1 < X1 < X2 < X3 < 2,

when ΔM ≤ ΔM1, the adjustment unit selects the first iteration number correction coefficient X1 to correct the preset iteration number;

when ΔM1 < ΔM ≤ ΔM2, the adjustment unit selects the second iteration number correction coefficient X2 to correct the preset iteration number;

when ΔM2 < ΔM ≤ ΔM3, the adjustment unit selects the third iteration number correction coefficient X3 to correct the preset iteration number;

when the adjustment unit selects the e-th iteration number correction coefficient Xe (e = 1, 2, 3) to correct the preset iteration number, it sets the corrected preset iteration number to G2 = G1 × Xe.
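The combined risk-value correction of claim 9 can be sketched in the same style. The function name, threshold values and coefficient values are our illustrative assumptions; only the M = E + F, M0 = E0 + F0, ΔM = M0 − M and G2 = G1 × Xe relations come from the claim:

```python
def correct_iterations(G1, E, F, E0, F0, thresholds, corrections):
    """Form the actual risk value M = E + F and the preset risk value
    M0 = E0 + F0, take ΔM = M0 - M, and scale the already adjusted
    iteration count G1 by the correction coefficient Xe whose band ΔM
    falls into, returning G2 = G1 * Xe rounded to an integer."""
    delta_m = (E0 + F0) - (E + F)
    for threshold, x in zip(thresholds, corrections):
        if delta_m <= threshold:
            return round(G1 * x)
    return round(G1 * corrections[-1])  # ΔM beyond ΔM3: use the largest X

# ΔM = (0.95 + 0.80) - (0.90 + 0.70) = 0.15, the third band, so X3 = 1.6.
G2 = correct_iterations(150, 0.90, 0.70, 0.95, 0.80,
                        (0.05, 0.10, 0.20), (1.1, 1.3, 1.6))  # 240
```

Summing E and F into one scalar gives a single fallback signal once the separate kernel-size adjustments of claims 7 and 8 have been exhausted.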

10. The machine-vision-based personnel behavior monitoring emergency disposal method according to claim 9, wherein the adjustment module is further provided with a maximum convolution kernel size Amax; when the adjustment unit adjusts the size of the convolution kernels of the model to A3, it compares A3 with Amax; if A3 > Amax, the adjustment unit judges that the size of the convolution kernel is unqualified; if A3 ≤ Amax, the adjustment unit judges that the size of the convolution kernel is qualified;

when the adjustment unit judges that the size of the convolution kernel is unqualified, it calculates the convolution kernel size difference ΔA between the adjusted convolution kernel size A3 and the maximum convolution kernel size Amax, setting ΔA = A3 − Amax, and selects the corresponding number adjustment coefficient according to the comparison between ΔA and the preset convolution kernel size differences to adjust the number of convolution kernels,

wherein the adjustment unit is further provided with a first preset convolution kernel size difference ΔA1, a second preset convolution kernel size difference ΔA2, a third preset convolution kernel size difference ΔA3, a first number adjustment coefficient W1, a second number adjustment coefficient W2 and a third number adjustment coefficient W3, where ΔA1 < ΔA2 < ΔA3 and 1 < W1 < W2 < W3 < 2,

when ΔA ≤ ΔA1, the adjustment unit selects the first number adjustment coefficient W1 to adjust the number of convolution kernels;

when ΔA1 < ΔA ≤ ΔA2, the adjustment unit selects the second number adjustment coefficient W2 to adjust the number of convolution kernels;

when ΔA2 < ΔA ≤ ΔA3, the adjustment unit selects the third number adjustment coefficient W3 to adjust the number of convolution kernels;

when the adjustment unit selects the n-th number adjustment coefficient Wn (n = 1, 2, 3) to adjust the number of convolution kernels, it sets the adjusted number of convolution kernels to R1 = R × Wn.
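Claim 10's cap on kernel size can be sketched as follows. The claim only says that an oversized kernel triggers a count adjustment; clamping the size back to Amax, along with all names and numeric values, is our own assumption:

```python
def cap_kernel_size(a3, a_max, R, thresholds, coefficients):
    """If the combined kernel size A3 exceeds the maximum Amax, judge the
    size unqualified and compensate by increasing the kernel *count*:
    ΔA = A3 - Amax selects the count-adjustment coefficient Wn and
    R1 = R * Wn. Clamping the size to Amax is our assumption.
    Returns (kernel_size, kernel_count)."""
    if a3 <= a_max:
        return a3, R  # size qualified, count unchanged
    delta_a = a3 - a_max
    for threshold, w in zip(thresholds, coefficients):
        if delta_a <= threshold:
            return a_max, round(R * w)
    return a_max, round(R * coefficients[-1])

# ΔA = 11 - 7 = 4 falls in the second band, so W2 = 1.5: 32 kernels -> 48.
size, count = cap_kernel_size(11, 7, 32, (2, 4, 6), (1.2, 1.5, 1.8))  # (7, 48)
```

Trading kernel size for kernel count keeps the model's capacity growing while bounding the per-kernel receptive field, which appears to be the intent of the claim.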

Technical Field

The invention relates to the technical field of building construction safety, in particular to a personnel behavior monitoring emergency disposal method based on machine vision.

Background

In today's cities, camera coverage is extensive, and social hazards and problems are pre-warned and handled through cameras. Most existing methods fall into two types. In the first, security personnel watch camera video to observe whether a hazardous event or accident occurs and then intervene and handle it manually; this method is mostly applied in residential communities or office buildings, with security personnel monitoring around the clock. In the second, a large number of cameras are deployed, and after a hazardous event occurs the situation at the time of occurrence is traced by retrieving the video recordings, with delayed handling based on the information obtained from the video.

In the first method, security personnel must continuously watch the video feeds to give early warnings, which consumes a large amount of labor; since a person's concentration is limited and one cannot watch the screen at every moment or observe many video feeds at once, the number of cameras that can be manually monitored is limited. The second method, retrieving recordings after the fact, has great hysteresis: it can only serve accountability or tracing, and cannot provide timely early warning or prevention.

On project sites, safety has always been of paramount importance: whether workers wear the prescribed protective equipment on site, and whether the construction tools and equipment can ensure safety.

Disclosure of Invention

Therefore, the invention provides a personnel behavior monitoring emergency disposal method based on machine vision, to solve the problem of low construction safety in the prior art caused by the inability to handle safety problems on a construction site in a timely manner.

In order to achieve the above object, the present invention provides a personnel behavior monitoring emergency disposal method based on machine vision, which includes:

step S1, the video processing module acquires a plurality of video data of the construction site shot by the visual detection device, analyzes and processes the video data of the construction site, and generates training data and verification data;

step S2, the video processing module inputs the training data after the analysis and processing into the training module for model training, and inputs the verification data into the training module for model verification when the training is finished;

step S3, embedding the verified model into the visual detection device of the construction site to monitor the behaviors of the construction site and the constructors;

step S4, the event processing module sends corresponding early warning information to security personnel according to the output results of the visual detection device and the model;

in step S2, when the model is trained, the input unit of the training module takes a construction equipment feature image from the training data as the input of the model and the risk coefficient of that construction equipment feature image as the corresponding output, takes a constructor feature image from the training data as the input of the model and the protective gear wearing pass rate of that constructor feature image as the corresponding output, and ends the training when the preset number of iterations is reached;

in step S3, each time verification of the model is completed, the comparison unit of the training module obtains the actual protective gear wearing pass rate E and the actual risk coefficient F output when the verification data is input, and determines whether the single verification is up to standard according to the comparison between the actual pass rate E and the pass rate E0 of the verification data and the comparison between the actual risk coefficient F and the risk coefficient F0 corresponding to the verification data; when the model has been verified a preset number of times C0, the comparison unit obtains the model verification pass rate P, determines whether the model reaches the standard according to P, and, when the model is determined not to reach the standard, trains the model again after adjusting its preset number of iterations.

Further, in step S1, the analysis and processing of the construction site video data by the video processing module comprises:

step S11, the obtaining unit obtains a plurality of video data of the construction site;

step S12, the processing unit splits the several construction site videos into frames of construction site images, divides these images into non-construction images and construction images, and divides both into training data and verification data according to a preset proportion B;

step S13, the analysis unit analyzes the non-construction images and the construction images in the divided training data, and extracts the feature images of the construction equipment from the non-construction images and the feature images of the constructors from the construction images;

step S14, the analysis unit analyzes the equipment risk coefficient of each construction equipment feature image and the protective gear wearing pass rate of each constructor feature image, and the output unit outputs the construction equipment feature images with their corresponding equipment risk coefficients F and the constructor feature images with their corresponding protective gear wearing pass rates E to the training module.

Further, in step S2, when the model is trained, the training module sets the initial size of the convolution kernels of the model to a, their initial number to R, the initial number of channels to D and the initial step size to λ; the input unit of the training module inputs the construction equipment feature images with their risk coefficients and the constructor feature images with their protective gear wearing pass rates from the training data into the model for iterative training, stops the training when the preset number of iterations G is reached, and then inputs the construction equipment feature images and constructor feature images from the verification data into the model for verification and obtains the verification result.

Further, when the model is verified, the comparison module compares the actual risk coefficient F output by the model with the risk coefficient F0 in the verification data, and compares the actual protective gear wearing pass rate E output by the model with the pass rate E0 in the verification data; if E is consistent with E0 and F is consistent with F0, the single verification is determined to be up to standard; if E is inconsistent with E0 and/or F is inconsistent with F0, the single verification is determined not to reach the standard; when the model has been verified the preset number of times C0, the analysis module obtains the model verification achievement rate P and compares it with a preset achievement rate P0, setting P = C/C0, where C is the number of verifications that reached the standard,

if P is larger than or equal to P0, the comparison unit judges that the model training reaches the standard;

if P is less than P0, the comparison unit judges that the model training does not reach the standard, and the adjustment unit of the training module adjusts the iteration number.

Further, when the adjustment unit adjusts the iteration number G, the comparison unit calculates the difference ΔP between the achievement rate P and the preset achievement rate P0, the adjustment unit selects the corresponding iteration number adjustment coefficient according to the comparison between ΔP and the preset achievement rate differences, adjusts the preset iteration number accordingly, and trains the model again when the adjustment is completed,

wherein the adjustment unit is further provided with a first preset achievement rate difference ΔP1, a second preset achievement rate difference ΔP2, a third preset achievement rate difference ΔP3, a first iteration number adjustment coefficient K1, a second iteration number adjustment coefficient K2 and a third iteration number adjustment coefficient K3, where ΔP1 < ΔP2 < ΔP3 and 1 < K1 < K2 < K3 < 2,

when ΔP ≤ ΔP1, the adjustment unit selects the first iteration number adjustment coefficient K1 to adjust the preset iteration number;

when ΔP1 < ΔP ≤ ΔP2, the adjustment unit selects the second iteration number adjustment coefficient K2 to adjust the preset iteration number;

when ΔP2 < ΔP ≤ ΔP3, the adjustment unit selects the third iteration number adjustment coefficient K3 to adjust the preset iteration number;

when the adjustment unit selects the i-th iteration number adjustment coefficient Ki (i = 1, 2, 3) to adjust the preset iteration number, it sets the adjusted preset iteration number to G1 = G × Ki.

Further, when the model is verified again, if P < P0, the comparison unit obtains the verified actual protective gear wearing pass rate E and actual risk coefficient F, compares E with the pass rate E0 corresponding to the verification data and F with the risk coefficient F0 corresponding to the verification data, and determines whether the training is up to standard according to the comparison results,

if E is larger than or equal to E0 and F is larger than or equal to F0, the analysis unit judges that the model training reaches the standard;

if E < E0 and/or F < F0, the analysis unit determines that the model training did not meet the standard.

Further, when the comparison unit judges that the model training does not reach the standard and E < E0, it calculates the pass rate difference ΔE between the actual protective gear wearing pass rate E and the pass rate E0 in the verification data, setting ΔE = E0 − E; the adjustment unit selects the corresponding convolution kernel size adjustment coefficient according to ΔE to adjust the size of the convolution kernel, and trains the model again when the adjustment is completed,

wherein the adjustment unit is further provided with a first preset pass rate difference ΔE1, a second preset pass rate difference ΔE2, a third preset pass rate difference ΔE3, a first convolution kernel size adjustment coefficient Ka1, a second convolution kernel size adjustment coefficient Ka2 and a third convolution kernel size adjustment coefficient Ka3, where ΔE1 < ΔE2 < ΔE3 and 1 < Ka1 < Ka2 < Ka3 < 2,

when the delta E is less than or equal to the delta E1, the adjusting unit selects a first convolution kernel size adjusting coefficient Ka1 to adjust the size of the convolution kernel;

when the delta E1 is more than the delta E and less than or equal to the delta E2, the adjusting unit selects a second convolution kernel size adjusting coefficient Ka2 to adjust the size of the convolution kernel;

when the delta E2 is more than the delta E and less than or equal to the delta E3, the adjusting unit selects a third convolution kernel size adjusting coefficient Ka3 to adjust the size of the convolution kernel;

when the adjusting unit selects the jth convolution kernel size adjusting coefficient Kaj to adjust the size of the convolution kernel, j =1, 2, 3 is set, and the adjusting unit sets the adjusted convolution kernel size to be a1 and sets a1= a × Kaj.

Further, when the comparison unit judges that the model training does not reach the standard and F is less than F0, the analysis unit calculates a risk coefficient difference value deltaF between the actual risk coefficient F and a risk coefficient F0 in the verification data, sets deltaF = F0-F, the adjustment unit selects a corresponding convolution kernel size adjustment coefficient to adjust the size of the convolution kernel according to the comparison result of the risk coefficient difference value and a preset risk coefficient difference value, and trains the model again when the adjustment is completed,

wherein the adjusting unit is further provided with a first preset risk coefficient difference DeltaF 1, a second preset risk coefficient difference DeltaF 2, a third preset risk coefficient difference DeltaF 3, a fourth convolution kernel size adjusting coefficient Ka4, a fifth convolution kernel size adjusting coefficient Ka5 and a sixth convolution kernel size adjusting coefficient Ka6, wherein DeltaF 1 < DeltaF 2 < DeltaF 3, 1 < Ka4 < Ka5 < Ka6 < 1.5 are set,

when the delta F is less than or equal to the delta F1, the adjusting unit selects a fourth convolution kernel size adjusting coefficient Ka4 to adjust the size of the convolution kernel;

when the delta F is more than delta F1 and less than or equal to delta F2, the adjusting unit selects a fifth convolution kernel size adjusting coefficient Ka5 to adjust the size of the convolution kernel;

when the delta F is more than delta F2 and less than or equal to delta F3, the adjusting unit selects a sixth convolution kernel size adjusting coefficient Ka6 to adjust the size of the convolution kernel;

when the adjusting unit selects the u-th convolution kernel size adjusting coefficient Kau to adjust the size of the convolution kernel, u =4, 5, 6 is set, and the adjusting module sets the adjusted size of the convolution kernel to be a2 and sets a2= a × Kau.

Further, when the analysis module determines that the model training does not meet the standard, E < E0, and F < F0, the adjustment unit calculates A3 that is the sum of a1, which is the size of the adjusted convolution kernel corresponding to the actual brace fitting qualification rate, and a2, which is the size of the adjusted convolution kernel corresponding to the actual risk coefficient, and sets A3= a1+ a2, the adjustment unit sets the size of the convolution kernel of the model to A3;

when the adjusting unit has adjusted the size of the convolution kernel to Az, wherein z =1, 2, 3, and the model is trained and verified again but the analysis module still determines that the model training does not reach the standard, the analysis unit takes the sum of the actual protective-gear wearing pass rate E and the actual risk coefficient F as the actual risk value M output by the model, takes the sum of the pass rate E0 and the risk coefficient F0 corresponding to the verification data as the preset risk value M0, and calculates the risk value difference DeltaM between the actual risk value M and the preset risk value M0, setting DeltaM = M0-M; the adjusting unit then selects the corresponding iteration number correction coefficient to correct the preset iteration number according to the comparison result between the risk value difference and the preset risk value differences,

wherein the adjusting unit is further provided with a first preset risk value difference delta M1, a second preset risk value difference delta M2, a third preset risk value difference delta M3, a first iteration number correction coefficient X1, a second iteration number correction coefficient X2 and a third iteration number correction coefficient X3, wherein delta M1 < delta M2 < delta M3 and 1 < X1 < X2 < X3 < 2 are set,

when the delta M is less than or equal to the delta M1, the adjusting unit selects a first iteration number correction coefficient X1 to correct the preset iteration number;

when the delta M is more than the delta M1 and less than or equal to the delta M2, the adjusting unit selects a second iteration number correction coefficient X2 to correct the preset iteration number;

when the delta M is more than the delta M2 and less than or equal to the delta M3, the adjusting unit selects a third iteration number correction coefficient X3 to correct the preset iteration number;

when the adjusting unit selects the e-th iteration number correction coefficient Xe to correct the preset iteration number, e =1, 2, 3 is set, and the adjusting unit sets the corrected preset iteration number to G2 and sets G2= G1 XXe.

Further, a maximum value Amax of a convolution kernel is also set in the adjusting module, when the adjusting unit adjusts the size of the convolution kernel of the model to be A3, the adjusting unit compares the size A3 of the convolution kernel with the maximum value Amax of the convolution kernel, and if A3 is greater than Amax, the adjusting unit judges that the size of the convolution kernel is unqualified; a3 is less than or equal to Amax, and the adjusting unit judges that the size of the convolution kernel is qualified;

when the adjusting unit judges that the size of the convolution kernel is unqualified, the adjusting unit calculates the convolution kernel size difference value delta A of the adjusted convolution kernel size A3 and the convolution kernel maximum value Amax, sets delta A = A3-Amax, selects a corresponding number adjusting coefficient according to the convolution kernel size difference value and a preset convolution kernel size difference value to adjust the number of the convolution kernels,

wherein the adjusting unit is further provided with a first preset convolution kernel size difference delta A1, a second preset convolution kernel size difference delta A2, a third preset convolution kernel size difference delta A3, a first number adjusting coefficient W1, a second number adjusting coefficient W2 and a third number adjusting coefficient W3, wherein delta A1 < delta A2 < delta A3 and 1 < W1 < W2 < W3 < 2 are set,

when the delta A is less than or equal to the delta A1, the adjusting unit selects a first quantity adjusting coefficient W1 to adjust the quantity of the convolution kernels;

when the delta A is more than the delta A1 and less than or equal to the delta A2, the adjusting unit selects a second number adjusting coefficient W2 to adjust the number of the convolution kernels;

when the delta A is more than the delta A2 and less than or equal to the delta A3, the adjusting unit selects a third number adjusting coefficient W3 to adjust the number of the convolution kernels;

when the adjusting unit selects the nth number of adjusting coefficients Wn to adjust the number of the convolution kernels, n =1, 2, 3 is set, and the adjusting unit sets the number of the adjusted convolution kernels to be R1 and sets R1= R × Wn.

Compared with the prior art, the method has the following advantages: the video processing module processes construction site video into training data and verification data for training and verifying the deep neural network model; the training data are used to train the model, and the verification data to verify it; once the model has passed multiple verifications, it is used to monitor and analyze the construction site risk detected by the visual detection device and whether the protective clothing of the constructors is worn correctly; and the event processing module sends early warning information to security personnel according to the construction site risk coefficient and any unqualified wearing of protective clothing. This reduces the data volume during processing, improves the data processing efficiency, and improves the safety of the construction site.

In particular, when the model is trained, the construction equipment characteristic image and the constructor characteristic image are extracted, the risk coefficient of the construction equipment characteristic image and the protector wearing qualification rate of the constructor characteristic image are correspondingly analyzed, and the risk coefficient and the protector wearing qualification rate are used as the output of the model when the model is trained, so that the model accuracy is improved, and the safety of a construction site is further improved.

Particularly, when the model is verified, whether single verification is finished or not is judged according to the actual risk coefficient output by the model during verification and the comparison result of the actual protective clothing wearing qualification rate and the risk coefficient of verification data and the protective clothing wearing qualification rate, and when the verification reaches the preset times and the results of the verification meet the standard, the model training result is judged to reach the standard, so that the accuracy of the model is further improved, and the safety of a construction site is further improved.

Furthermore, the construction site video shot by the visual detection device is processed by the processing unit of the video processing module into a plurality of frames of construction site images, which are divided into training data and verification data usable for training and verifying the model; the model is then trained with the construction equipment characteristic image as input and its corresponding risk coefficient as output, and with the constructor characteristic image as input and its corresponding protective-gear wearing pass rate as output, so that the accuracy of the model is further improved, and the safety of the construction site is further improved.

Further, when the model is verified to a preset number of times, the verification standard-reaching rate of the model is obtained, whether the model training reaches the standard is judged according to the comparison result of the verification standard-reaching rate and the preset standard-reaching rate, the standard-reaching rate difference value of the standard-reaching rate and the preset standard-reaching rate is calculated when the model training does not reach the standard, and the preset iteration number of times of the model is adjusted according to the standard-reaching rate difference value, so that the model accuracy is further improved, and the safety of a construction site is further improved.

Furthermore, a plurality of preset standard-reaching rate difference values and iteration number adjusting coefficients are set in the adjusting unit, and when the preset iteration number is judged to be adjusted, the corresponding adjusting coefficient is selected according to the comparison result of the standard-reaching rate difference values and the preset standard-reaching rate difference values to adjust the iteration number, so that the training cost in model adjustment is saved, and the accuracy of model adjustment is further improved.

Further, when the model is adjusted and trained according to the adjusted iteration number and is verified, whether the model is up to standard or not is determined according to the comparison result of the actual fitting pass rate output by the model and the fitting pass rate corresponding to the verification data and the comparison result of the actual risk coefficient output by the model and the risk coefficient corresponding to the verification data, and when the model is not up to standard, the size of the convolution kernel of the model and/or the number of the convolution kernels are/is adjusted, so that the accuracy of the model is further improved, and the safety of a construction site is further improved.

Drawings

FIG. 1 is a flow chart of a method for monitoring emergency disposal based on machine vision for personnel behavior according to the present invention;

fig. 2 is a video data processing flow chart of the emergency disposal method for monitoring personnel behavior based on machine vision according to the present invention.

Detailed Description

In order that the objects and advantages of the invention will be more clearly understood, the invention is further described below with reference to examples; it should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and do not limit the scope of the present invention.

Referring to fig. 1, a flowchart of a personnel behavior monitoring emergency disposal method based on machine vision according to the present invention is shown, and the personnel behavior monitoring emergency disposal method based on machine vision according to the present invention includes:

step S1, the video processing module acquires a plurality of video data of the construction site shot by the visual detection device, analyzes and processes the video data of the construction site, and generates training data and verification data;

step S2, the video processing module inputs the training data after the analysis and processing into the training module for model training, and inputs the verification data into the training module for model verification when the training is finished;

s3, embedding the verified model into a visual detection device of a construction site to monitor the behaviors of the construction site and constructors;

step S4, the event processing module sends corresponding early warning information to security personnel according to the output results of the visual detection device and the model;

in step S2, when a model is trained, the input unit of the training module inputs a construction equipment feature image of the training data as the input of the model, correspondingly outputs a risk coefficient of the construction equipment feature image as the output of the model, inputs a constructor feature image of the training data as the input of the model, correspondingly performs model training with a supporter wearing pass rate of the constructor feature image as the output of the model, and ends the training when the training reaches a preset number of iterations;

in the step S3, when the verification of the model is completed, the comparison unit of the training module obtains an actual fitting wearing pass rate E and an actual risk coefficient F that are output when the verification data is input every time the model is completed, and determines whether the single verification is completed according to a comparison result between the actual fitting wearing pass rate E and the fitting wearing pass rate E0 of the verification data and a comparison result between the actual risk coefficient F and the risk coefficient F0 corresponding to the verification data, when the model is verified to a preset verification number of times C0, obtains the model verification pass rate P, determines whether the model reaches the standard according to the verification pass rate P, and trains the model again after adjusting the preset iteration number of times of the model when the model is determined not to reach the standard.

Fig. 2 is a video data processing flow chart of the emergency disposal method for monitoring personnel behavior based on machine vision according to the present invention.

In the emergency disposal method for monitoring personnel behaviors based on machine vision, in the step S1, the analyzing and processing of the construction site video data by the video processing module includes:

step S11, the obtaining unit obtains a plurality of video data of the construction site;

step S12, the processing unit divides a plurality of construction site videos into a plurality of frames of construction site images, divides the construction site images into non-construction images and construction images, and divides the non-construction images and the construction images into training data and verification data according to a preset proportion B;

step S13, the analysis unit analyzes the images which are not constructed and constructed in the divided training data, and extracts the characteristic images of the construction equipment in the images which are not constructed and the characteristic images of the constructors in the images which are constructed;

step S14, the analysis unit analyzes the equipment risk coefficient of the construction equipment characteristic image and the brace fitting qualification rate of the constructor characteristic image, and the output unit outputs the construction equipment characteristic image and the equipment risk coefficient F corresponding thereto and the constructor characteristic image and the brace fitting qualification rate E corresponding thereto to the training module.
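Steps S11 to S14 amount to splitting the site video into frames and partitioning them at the preset proportion B. A minimal sketch of the partition step follows; the value of B, the shuffling, and the function name are illustrative assumptions, since the source only names the proportion:

```python
import random

def split_frames(frames, B=0.8, seed=0):
    """Partition extracted site-image frames into training and
    verification sets at the preset proportion B (value illustrative)."""
    rng = random.Random(seed)
    shuffled = list(frames)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * B)
    return shuffled[:cut], shuffled[cut:]

train, val = split_frames(range(100))
```

With B = 0.8, 100 frames split into 80 training and 20 verification images.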

Specifically, in step S2, when the model is trained, the training module sets an initial size of a convolution kernel of the model to a, an initial number of the convolution kernel to R, an initial number of channels to D, and an initial step size to λ, the input unit of the training module inputs a risk coefficient of a construction equipment feature image and a construction equipment feature image in training data and a protector wearing pass rate of a constructor feature image and a constructor feature image in the training data to the model respectively for iterative training, and when the training is performed to a preset iteration number G, stops the training, inputs a construction equipment feature image and a constructor feature image in verification data to the model respectively for verification, and obtains a verification result.

In the embodiment of the invention, the model is a Faster R-CNN model, the initial size of the convolution kernel is 3, the initial number is 2, the initial channel number is 3, and the initial step length is 2.
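The initial hyperparameters of the embodiment can be gathered into a small configuration object; the dataclass wrapper itself is an illustrative convenience, not part of the source:

```python
from dataclasses import dataclass

@dataclass
class ConvConfig:
    kernel_size: int = 3   # initial convolution kernel size A
    num_kernels: int = 2   # initial kernel count R
    channels: int = 3      # initial channel number D
    stride: int = 2        # initial step size lambda

cfg = ConvConfig()
```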

Specifically, in step S4, the event processing module sends the time and location detected by the visual detection device and a screenshot, together with the protective-gear wearing pass rate, the risk coefficient, and the risk value output by the model, to the security personnel, and prompts the security personnel to handle the situation.

Specifically, when a model is trained, the input module takes the construction equipment characteristic image as the input of the model and correspondingly takes the risk coefficient of the construction equipment characteristic image as the output of the model, and takes the constructor characteristic image as the input of the model and correspondingly takes the protector wearing pass rate of the constructor characteristic image as the output of the model.

Specifically, when the model is verified, the comparison unit compares the actual risk coefficient F output by the model with the risk coefficient F0 in the verification data, and compares the actual protective-gear wearing pass rate E output by the model with the pass rate E0 in the verification data; if E is larger than or equal to E0 and F is larger than or equal to F0, the single verification is determined to reach the standard, and if E < E0 and/or F < F0, it is determined not to reach the standard. When the model has been verified the preset number of times C0, the analysis module obtains the model verification pass rate P and compares it with a preset pass rate P0, setting P = C/C0, where C is the number of verifications that reached the standard,

if P is larger than or equal to P0, the comparison unit judges that the model training reaches the standard;

if P is less than P0, the comparison unit judges that the model training does not reach the standard, and the adjustment unit of the training module adjusts the iteration number.
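The single-round check and the pass rate P = C/C0 can be sketched as follows; the numeric values in the usage line are illustrative:

```python
def round_meets_standard(E, E0, F, F0):
    # a round passes only when both the wearing pass rate and the
    # risk coefficient reach their verification-data references
    return E >= E0 and F >= F0

def verification_pass_rate(rounds, C0):
    # P = C / C0, C being the number of rounds that met the standard
    C = sum(round_meets_standard(*r) for r in rounds[:C0])
    return C / C0

P = verification_pass_rate(
    [(0.92, 0.90, 0.75, 0.70),   # passes
     (0.85, 0.90, 0.75, 0.70),   # E < E0: fails
     (0.92, 0.90, 0.60, 0.70)],  # F < F0: fails
    C0=3)
```

Here one of three rounds passes, so P = 1/3, which would then be compared with P0.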

Specifically, when the adjusting unit adjusts the iteration number G, the comparing unit calculates a difference value Δ P between the achievement rate P and a preset achievement rate P0, the adjusting unit selects a corresponding adjustment coefficient of the iteration number according to the comparison result of the difference value between the achievement rate and the preset achievement rate to adjust the preset iteration number, and trains the model again when the adjustment is completed,

wherein the adjusting unit is further provided with a first preset achievement rate difference value delta P1, a second preset achievement rate difference value delta P2, a third preset achievement rate difference value delta P3, a first iteration number adjusting coefficient K1, a second iteration number adjusting coefficient K2 and a third iteration number adjusting coefficient K3, wherein delta P1 is less than delta P2 is less than delta P3, 1 is more than K1 is more than K2 is more than K3 is less than 2,

when the delta P is less than or equal to the delta P1, the adjusting unit selects a first iteration number adjusting coefficient K1 to adjust the preset iteration number;

when the delta P is more than the delta P1 and less than or equal to the delta P2, the adjusting unit selects a second iteration number adjusting coefficient K2 to adjust the preset iteration number;

when the delta P is more than the delta P2 and less than or equal to the delta P3, the adjusting unit selects a third iteration number adjusting coefficient K3 to adjust the preset iteration number;

when the adjusting unit selects the ith iteration number adjusting coefficient Ki to adjust the preset iteration number, the adjusting unit sets the adjusted preset iteration number as G1, and sets G1= GxKi.

Specifically, when the model is verified again, if P is less than P0, the comparison unit obtains the verified actual fitting yield E and the actual risk coefficient F, compares the actual fitting yield E with the fitting yield E0 corresponding to the verification data, compares the actual risk coefficient F with the risk coefficient F0 corresponding to the verification data, and determines whether the training is up to standard according to the comparison result,

if E is larger than or equal to E0 and F is larger than or equal to F0, the analysis unit judges that the model training reaches the standard;

if E < E0 and/or F < F0, the analysis unit determines that the model training did not meet the standard.

Specifically, when the comparison unit determines that the model training does not reach the standard and E is less than E0, a yield difference Δ E between the actual suit wearing yield E and the suit wearing yield E0 in the verification data is calculated, Δ E = E0-E is set, the adjustment unit selects a corresponding convolution kernel size adjustment coefficient according to the yield difference to adjust the size of the convolution kernel, and the model is trained again when the adjustment is completed,

wherein the adjusting unit is also provided with a first preset qualified rate difference delta E1, a second preset qualified rate difference delta E2, a third preset qualified rate difference delta E3, a first convolution kernel size adjusting coefficient Ka1, a second convolution kernel size adjusting coefficient Ka2 and a third convolution kernel size adjusting coefficient Ka3, wherein delta E1 < delta E2 < delta E3 and 1 < Ka1 < Ka2 < Ka3 < 2 are set,

when the delta E is less than or equal to the delta E1, the adjusting unit selects a first convolution kernel size adjusting coefficient Ka1 to adjust the size of the convolution kernel;

when the delta E1 is more than the delta E and less than or equal to the delta E2, the adjusting unit selects a second convolution kernel size adjusting coefficient Ka2 to adjust the size of the convolution kernel;

when the delta E2 is more than the delta E and less than or equal to the delta E3, the adjusting unit selects a third convolution kernel size adjusting coefficient Ka3 to adjust the size of the convolution kernel;

when the adjusting unit selects the jth convolution kernel size adjusting coefficient Kaj to adjust the size of the convolution kernel, j =1, 2, 3 is set, and the adjusting unit sets the adjusted convolution kernel size to be a1 and sets a1= a × Kaj.

Specifically, when the comparison unit determines that the model training does not reach the standard and F is less than F0, the analysis unit calculates a risk coefficient difference value deltaF between the actual risk coefficient F and a risk coefficient F0 in the verification data, sets deltaF = F0-F, the adjustment unit selects a corresponding convolution kernel size adjustment coefficient to adjust the size of the convolution kernel according to the comparison result between the risk coefficient difference value and a preset risk coefficient difference value, and trains the model again when the adjustment is completed,

wherein the adjusting unit is further provided with a first preset risk coefficient difference DeltaF 1, a second preset risk coefficient difference DeltaF 2, a third preset risk coefficient difference DeltaF 3, a fourth convolution kernel size adjusting coefficient Ka4, a fifth convolution kernel size adjusting coefficient Ka5 and a sixth convolution kernel size adjusting coefficient Ka6, wherein DeltaF 1 < DeltaF 2 < DeltaF 3, 1 < Ka4 < Ka5 < Ka6 < 1.5 are set,

when the delta F is less than or equal to the delta F1, the adjusting unit selects a fourth convolution kernel size adjusting coefficient Ka4 to adjust the size of the convolution kernel;

when the delta F is more than delta F1 and less than or equal to delta F2, the adjusting unit selects a fifth convolution kernel size adjusting coefficient Ka5 to adjust the size of the convolution kernel;

when the delta F is more than delta F2 and less than or equal to delta F3, the adjusting unit selects a sixth convolution kernel size adjusting coefficient Ka6 to adjust the size of the convolution kernel;

when the adjusting unit selects the u-th convolution kernel size adjusting coefficient Kau to adjust the size of the convolution kernel, u =4, 5, 6 is set, and the adjusting module sets the adjusted size of the convolution kernel to be a2 and sets a2= a × Kau.

Specifically, when the analysis module determines that the model training does not meet the standard, E < E0, and F < F0, the adjustment unit calculates A3 which is the sum of a1, which is the size of the adjusted convolution kernel corresponding to the actual brace fitting qualification rate, and a2, which is the size of the adjusted convolution kernel corresponding to the actual risk coefficient, and sets A3= a1+ a2, and the adjustment unit sets the size of the convolution kernel of the model to A3;

when the adjusting unit has adjusted the size of the convolution kernel to Az, wherein z =1, 2, 3, and the model is trained and verified again but the analysis module still determines that the model training does not reach the standard, the analysis unit takes the sum of the actual protective-gear wearing pass rate E and the actual risk coefficient F as the actual risk value M output by the model, takes the sum of the pass rate E0 and the risk coefficient F0 corresponding to the verification data as the preset risk value M0, and calculates the risk value difference DeltaM between the actual risk value M and the preset risk value M0, setting DeltaM = M0-M; the adjusting unit then selects the corresponding iteration number correction coefficient to correct the preset iteration number according to the comparison result between the risk value difference and the preset risk value differences,

wherein the adjusting unit is further provided with a first preset risk value difference delta M1, a second preset risk value difference delta M2, a third preset risk value difference delta M3, a first iteration number correction coefficient X1, a second iteration number correction coefficient X2 and a third iteration number correction coefficient X3, wherein delta M1 < delta M2 < delta M3 and 1 < X1 < X2 < X3 < 2 are set,

when the delta M is less than or equal to the delta M1, the adjusting unit selects a first iteration number correction coefficient X1 to correct the preset iteration number;

when the delta M is more than the delta M1 and less than or equal to the delta M2, the adjusting unit selects a second iteration number correction coefficient X2 to correct the preset iteration number;

when the delta M is more than the delta M2 and less than or equal to the delta M3, the adjusting unit selects a third iteration number correction coefficient X3 to correct the preset iteration number;

when the adjusting unit selects the e-th iteration number correction coefficient Xe to correct the preset iteration number, e =1, 2, 3 is set, and the adjusting unit sets the corrected preset iteration number to G2 and sets G2= G1 XXe.
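When both metrics still fall short after the kernel adjustments, the risk-value shortfall Delta M = M0 - M selects the iteration correction G2 = G1 x Xe. A sketch with illustrative tier values:

```python
def correct_iterations(G1, E, F, E0, F0,
                       thresholds=(0.05, 0.10, 0.20),  # dM1 < dM2 < dM3
                       coeffs=(1.1, 1.4, 1.8)):        # 1 < X1 < X2 < X3 < 2
    """M = E + F is the actual risk value, M0 = E0 + F0 the preset one;
    return G2 = G1 * Xe for the tier containing dM = M0 - M."""
    dM = (E0 + F0) - (E + F)
    for t, X in zip(thresholds, coeffs):
        if dM <= t:
            return round(G1 * X)
    return round(G1 * coeffs[-1])
```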

Specifically, the adjusting module is further provided with a maximum convolution kernel value Amax, when the adjusting unit adjusts the size of the convolution kernel of the model to be A3, the adjusting unit compares the size A3 of the convolution kernel with the maximum convolution kernel value Amax, and if A3 is greater than Amax, the adjusting unit judges that the size of the convolution kernel is not qualified; a3 is less than or equal to Amax, and the adjusting unit judges that the size of the convolution kernel is qualified;

when the adjusting unit judges that the size of the convolution kernel is unqualified, the adjusting unit calculates the convolution kernel size difference value delta A of the adjusted convolution kernel size A3 and the convolution kernel maximum value Amax, sets delta A = A3-Amax, selects a corresponding number adjusting coefficient according to the convolution kernel size difference value and a preset convolution kernel size difference value to adjust the number of the convolution kernels,

wherein the adjusting unit is further provided with a first preset convolution kernel size difference delta A1, a second preset convolution kernel size difference delta A2, a third preset convolution kernel size difference delta A3, a first number adjusting coefficient W1, a second number adjusting coefficient W2 and a third number adjusting coefficient W3, wherein delta A1 < delta A2 < delta A3 and 1 < W1 < W2 < W3 < 2 are set,

when the delta A is less than or equal to the delta A1, the adjusting unit selects a first quantity adjusting coefficient W1 to adjust the quantity of the convolution kernels;

when the delta A is more than the delta A1 and less than or equal to the delta A2, the adjusting unit selects a second number adjusting coefficient W2 to adjust the number of the convolution kernels;

when the delta A is more than the delta A2 and less than or equal to the delta A3, the adjusting unit selects a third number adjusting coefficient W3 to adjust the number of the convolution kernels;

when the adjusting unit selects the nth number of adjusting coefficients Wn to adjust the number of the convolution kernels, n =1, 2, 3 is set, and the adjusting unit sets the number of the adjusted convolution kernels to be R1 and sets R1= R × Wn.
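Finally, when the combined size A3 overshoots the ceiling Amax, the sketch below reads the rule as clamping the size and instead growing the kernel count R1 = R x Wn from the overshoot Delta A = A3 - Amax. Clamping to Amax and the tier values are assumptions, since the source only says the count is adjusted when the size is unqualified:

```python
def cap_kernel_scale_count(A3, Amax, R,
                           thresholds=(1, 2, 4),     # dA1 < dA2 < dA3
                           coeffs=(1.2, 1.5, 1.8)):  # 1 < W1 < W2 < W3 < 2
    """Return (kernel_size, kernel_count). A qualified size passes
    through unchanged; otherwise the overshoot dA picks Wn."""
    if A3 <= Amax:
        return A3, R
    dA = A3 - Amax
    for t, W in zip(thresholds, coeffs):
        if dA <= t:
            return Amax, round(R * W)
    return Amax, round(R * coeffs[-1])
```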

So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
