Fatigue driving detection method based on bioelectricity and behavior characteristic fusion

Document No.: 1451395    Publication date: 2020-02-21

Abstract: The invention discloses a fatigue driving detection method based on bioelectricity and behavior characteristic fusion, which can realize real-time detection of the driving state of a driver and comprises an electroencephalogram signal processing module, a facial multi-feature module and a fatigue detection fusion module. The electroencephalogram signal processing module acquires the four brain waves δ, θ, α and β through wavelet packet decomposition, adopts (α + θ)/β as a fatigue index, and calculates the sample entropy of the reconstructed electroencephalogram signal. The facial multi-feature module identifies the eye and mouth regions using edge detection and clustering algorithms, and judges the state of the driver by analyzing a blink index, an eye-movement index and a yawning index. The fatigue detection fusion module fuses all characteristic signals through an LVQ-BP artificial neural network and judges whether the driver is fatigued according to the fusion result. Fusing multiple characteristics ensures the reliability and efficiency of the system, more effectively reduces the probability of accidents, and safeguards people's lives and property. (Designed and created 2019-10-31 by 程忱, 王恁, 郭慧利, 李瑶, 张佳, 高晋, 郭浩.)

1. A fatigue driving detection method based on bioelectricity and behavior feature fusion is characterized by comprising the following steps:

step S1: collecting original electroencephalogram signals, concentration degree and relaxation degree by using a TGAM module, and then decomposing the collected electroencephalogram signals by using wavelet packets;

step S2: reconstructing the FP1 electrode brain waves after wavelet packet decomposition to obtain a group of brain electrical signals; further carrying out sample entropy calculation;

step S3: analyzing the ratio of the concentration degree to the relaxation degree by using a correlation coefficient analysis method;

step S4: further calculating the change curve of the (θ + α)/β power spectral density ratio over time, and analyzing the curve to determine the fatigue interval;

step S5: carrying out gray processing on the video image of the driver by adopting a weighted average method;

step S6: preprocessing the image by using a machine learning classifier, performing point tracking by using the KLT algorithm, detecting the image by using the Viola-Jones target detection algorithm, and marking the face area of the driver with a rectangular frame;

step S7: determining the position of the eyes by using a proper threshold value, namely finding the edges of the eye regions through an image histogram and a color histogram by using the Sobel edge detection algorithm; identifying the mouth region by using a K-means clustering algorithm;

step S8: the working method for judging whether the driver is tired or not by the facial multi-feature module according to the blinking state of the driver comprises the following steps: when a driver is in a fatigue state, the closing time of eyes can be prolonged, and the fatigue identification of the eyes is carried out by adopting a percentage K value of the closing time of the eyes in unit time;

step S9: the working method of the facial multi-feature module for judging whether the driver is tired or not according to the yawning state of the driver comprises the following steps: detecting and positioning the mouth of the human face, and performing yawning judgment by judging the aspect ratio P value of the mouth;

step S10: through the research on the jumping amplitude of the eye focus position and the related information of the eye jumping duration, the change of the watching direction and the watching time is analyzed, and the driving state of a driver can be detected;

step S11: the fatigue monitoring module adopts an LVQ-BP artificial neural network to fuse all characteristic indexes to form a comprehensive fatigue index; classifying each characteristic index through an LVQ neural network, then performing multi-characteristic fusion on the classified indexes based on a BP neural network to form a final comprehensive index, and establishing a fatigue driving detection system based on the LVQ-BP neural network to realize real-time detection on the driving state of a driver; when the comprehensive fatigue index exceeds the fatigue threshold value in the driving process of the driver, judging the fatigue of the driver, sending out early warning, and reminding the driver to stop for rest as soon as possible;

in the step S1, firstly, a TGAM module is used to collect original electroencephalogram signals, concentration and relaxation, and then the collected electroencephalogram signals are decomposed by wavelet packet; the method not only decomposes the low-frequency approximate part of the signal, but also decomposes the high-frequency detail part of the signal, and can retain the characteristics of the original signal to a greater extent; the decomposed signal is more real, the essence of wavelet transformation is inherited, meanwhile, the defect of wavelet transformation is made up, and the accuracy of electroencephalogram signal analysis is improved; in the present invention, forehead electrode FP1, which is closely related to the degree of fatigue, is selected for explanation;

according to an energy time domain calculation formula:

E = Σ_t |f(t)|^2    (1);

wherein E is the energy value, t is time, and f(t) is the change curve of the corresponding rhythm; the energy values of the θ, α and β waves can thus be obtained; by comprehensively analyzing the energy changes of multiple rhythms, the fatigue index (α + θ)/β is obtained, and this ratio can further be designed as an early-warning value for real-time fatigue driving detection.
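As a rough illustration (not part of the patent text), the time-domain energy formula E = Σ_t |f(t)|² and the (α + θ)/β fatigue index can be sketched as follows; the band signals are synthetic sine waves standing in for the wavelet-packet-reconstructed rhythms, and the function names are invented for this sketch.

```python
# Sketch of the band-energy formula and the fatigue index (alpha + theta) / beta.
# The synthetic rhythms below are assumptions for illustration only.
import numpy as np

def band_energy(f):
    """E = sum over t of |f(t)|^2 for one reconstructed rhythm."""
    f = np.asarray(f, dtype=float)
    return float(np.sum(np.abs(f) ** 2))

def fatigue_index(theta, alpha, beta):
    """(alpha + theta) / beta, computed on band energies."""
    e_theta, e_alpha, e_beta = map(band_energy, (theta, alpha, beta))
    return (e_alpha + e_theta) / e_beta

t = np.linspace(0, 1, 512, endpoint=False)
theta = 1.5 * np.sin(2 * np.pi * 6 * t)    # 4-8 Hz rhythm, large when drowsy
alpha = 1.2 * np.sin(2 * np.pi * 10 * t)   # 8-13 Hz rhythm
beta  = 0.5 * np.sin(2 * np.pi * 20 * t)   # 13-30 Hz rhythm, small when drowsy
print(round(fatigue_index(theta, alpha, beta), 2))  # → 14.76
```

A high ratio here reflects θ/α energy dominating β energy, which is the direction the text associates with fatigue.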

2. The fatigue driving detection method based on bioelectricity and behavior feature fusion according to claim 1, wherein in step S2, a group of electroencephalogram signals is obtained by reconstructing the FP1 electrode brain waves after wavelet packet decomposition; the sample entropy is expressed as SampEn(m, r, N), where the embedding dimension m is 2, the similarity tolerance r is 0.2 SD, and the sample size N is 1000;

the sample entropy is calculated every 10 seconds to obtain an average sample-entropy sequence; analysis of this sequence shows a clear difference between driving states: the sample-entropy values of the non-fatigued driving state are concentrated between 0.6 and 0.9, while those of the fatigued driving state are concentrated between 0.3 and 0.6; that is, the sample entropy in the non-fatigued state is higher than in the fatigued state.
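A minimal sample-entropy sketch with the parameter choices stated above (m = 2, r = 0.2·SD) might look like the following; it is a plain O(N²) loop, not optimized for the N = 1000 windows mentioned in the text, and the two test signals are invented.

```python
# Minimal SampEn(m, r, N) sketch: regular signals yield lower sample entropy
# than irregular ones, matching the fatigued-vs-alert contrast in the text.
import numpy as np

def sampen(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)  # similarity tolerance: 0.2 * standard deviation
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance from template i to all later templates
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count
    b = count_matches(m)       # template matches of length m
    a = count_matches(m + 1)   # template matches of length m + 1
    return -np.log(a / b)

rng = np.random.default_rng(0)
noisy = rng.standard_normal(300)            # irregular signal
regular = np.sin(np.linspace(0, 30, 300))   # highly regular signal
print(sampen(noisy) > sampen(regular))      # regular signals score lower
```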

3. The method for detecting fatigue driving based on the fusion of bioelectrical and behavioral characteristics according to claim 1, wherein in step S3, the ratio of concentration degree to relaxation degree is analyzed by correlation coefficient analysis; there is a correlation between concentration and relaxation over the same time period; therefore, a correlation coefficient analysis method is introduced, the ratio of the concentration degree to the relaxation degree is analyzed and observed, and the interval of the ratio represents the fatigue degree.
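A toy check of the correlation analysis between the TGAM concentration (attention) and relaxation (meditation) streams described above; the series below are invented placeholder values, not real eSense readings.

```python
# Correlation and ratio of concentration vs relaxation over one time window.
# Values are illustrative only.
import numpy as np

attention = np.array([60, 65, 70, 55, 40, 35, 30, 28], dtype=float)
meditation = np.array([40, 38, 35, 50, 62, 70, 75, 80], dtype=float)

# Pearson correlation coefficient over the same time period
corr = np.corrcoef(attention, meditation)[0, 1]

# Ratio of concentration to relaxation; the interval it falls in is
# what the text uses to represent the degree of fatigue
ratio = attention / meditation
print(corr < 0, ratio[0] > 1, ratio[-1] < 1)
```

Here attention falls while meditation rises, so the correlation is negative and the ratio drifts below 1, the direction the text associates with increasing fatigue.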

4. The method for detecting fatigue driving based on fusion of bioelectrical and behavioral characteristics according to claim 1, wherein in step S4, according to the specific values of the FP1 electrode θ wave, Low_α wave, High_α wave, Low_β wave and High_β wave, the power spectral density curve of (θ + α)/β is obtained by processing:

wherein α = (Low_α + High_α)/2 and β = (Low_β + High_β)/2;

further, the change curve of the (θ + α)/β power spectral density ratio over time is calculated to obtain the fatigue-index interval threshold.

5. The method for detecting fatigue driving based on bioelectricity and behavioral characteristic fusion as claimed in claim 1, wherein in step S5, the image is grayed by weighted average method, and the obtained grayscale image is binarized, so that the image is simplified, the data is reduced, and the contour of the object of interest is highlighted.

6. The fatigue driving detection method based on bioelectricity and behavior feature fusion as claimed in claim 1, wherein in step S6, the image is first preprocessed by using a machine learning classifier; the contrast of the image is enhanced after graying and binarization, and the machine learning classifier can eliminate unnecessary training data so that emphasis is placed on the key training data;

then, grid marking is carried out on the face area, edge information is extracted by using a Canny algorithm, a face region of interest is identified, and the face region is detected and marked.

7. The fatigue driving detection method based on the fusion of bioelectrical and behavioral characteristics according to claim 1, characterized in that in step S7, the positions of the eyes are determined by using appropriate threshold values, i.e. the edges of the eye regions are found through the image histogram and color histogram using the Sobel edge detection algorithm; the Sobel edge detector uses two masks, one vertical and one horizontal, and extends the Sobel edge detection mask to 5 × 5 dimensions; the mouth region is identified by a K-means clustering algorithm; K-means performs image segmentation using an iterative algorithm that minimizes the sum of the distances from each object to its cluster centroid over all clusters.
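To make the two-mask Sobel step concrete, here is a small sketch; since the 5 × 5 extension mentioned in the text is not reproduced in the source, only the classic 3 × 3 masks are shown, on an invented toy image with a vertical intensity edge.

```python
# Sobel edge detection sketch: one vertical-edge mask, one horizontal-edge
# mask, applied with a naive "valid" 2-D convolution.
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)  # responds to vertical edges
SOBEL_Y = SOBEL_X.T                            # responds to horizontal edges

def conv2d_valid(img, kernel):
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: left half dark, right half bright -> one strong vertical edge
img = np.zeros((5, 6))
img[:, 3:] = 1.0
gx = conv2d_valid(img, SOBEL_X)
gy = conv2d_valid(img, SOBEL_Y)
magnitude = np.hypot(gx, gy)
print(magnitude.max(), float(np.abs(gy).max()))  # vertical mask fires, horizontal does not
```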

8. The method as claimed in claim 1, wherein in step S8, the working method of the facial multi-feature module for determining whether the driver is tired according to the blink status of the driver is as follows: when a driver is in a fatigue state, the closing time of eyes can be prolonged, the percentage K value of the closing time of the eyes in unit time is adopted for eye fatigue identification, and the larger the value is, the deeper the driving fatigue degree is; the formula is as follows:

K = (cumulative eye-closure time within the unit time / the unit time) × 100%.
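A hedged, PERCLOS-style sketch of the eye-closure percentage K described above, computed per observation window on a per-frame open/closed sequence; the frame data are invented, and the 16.18% threshold is the value reported later in the text.

```python
# Eye-closure percentage K: fraction of frames in a window with eyes closed.
def eye_closure_percentage(eye_closed_frames):
    """K = (closed frames / total frames) * 100, per observation window."""
    total = len(eye_closed_frames)
    closed = sum(1 for c in eye_closed_frames if c)
    return 100.0 * closed / total

# 1 = eyes closed in that video frame, 0 = open (illustrative window)
window = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
k = eye_closure_percentage(window)
print(k)           # → 40.0
print(k > 16.18)   # above the fatigue threshold reported in the text
```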

9. The fatigue driving detection method based on the fusion of bioelectricity and behavior characteristics as claimed in claim 1, wherein in step S9, the working method of the facial multi-feature module for judging whether the driver is fatigued according to the yawning state of the driver is as follows: based on the detection and positioning of the mouth of the human face, the coordinates of the mouth are determined by the following formula, wherein x_face and y_face respectively represent the coordinates of the lower-left origin of the target detection frame obtained by the machine learning algorithm, W_face and H_face are respectively the width and height of the target detection frame, (x_0, y_0) represents the coordinates of the upper-left corner of the mouth rectangle, and W_mouth and H_mouth respectively represent the width and height of the rectangle of the mouth region:

[mouth-coordinate formula given only as an image in the source]

the width-to-height ratio of the mouth is adopted for yawning judgment, and the following formula is adopted:

P = W_mouth / H_mouth.

When the mouth is in a normal state, the value of P is clearly greater than 1; in a yawning state, the P value gradually decreases until it is less than 1; the fatigue state of the mouth is identified by calculating the frequency with which the P value is greater than or equal to 1 within a certain time, according to the following formula:

[mouth-fatigue frequency formula given only as an image in the source]
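The mouth aspect-ratio check described above can be sketched as follows: P = width / height of the mouth rectangle, clearly above 1 for a closed mouth and falling below 1 during a yawn. The per-frame boxes are invented values, not measured data.

```python
# Mouth aspect ratio P over consecutive frames; P < 1 suggests a yawn.
def mouth_ratio(w_mouth, h_mouth):
    return w_mouth / h_mouth

# (width, height) of the detected mouth box in consecutive video frames
frames = [(60, 20), (60, 25), (55, 40), (50, 60), (48, 70), (55, 30)]
p_values = [mouth_ratio(w, h) for (w, h) in frames]
yawn_frames = sum(1 for p in p_values if p < 1)  # frames with mouth wide open
print(yawn_frames)  # → 2
```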

10. The method for detecting fatigue driving based on fusion of bioelectricity and behavior characteristics as claimed in claim 1, wherein in step S10, the working method of the facial multi-feature module for judging whether the driver is fatigued according to the eye-movement state of the driver is as follows: by studying the saccade amplitude of the eye focus position and related information such as saccade duration, the changes in gaze direction and gaze time are analyzed, and the driving state of the driver can be detected; fatigue is judged by the following formula, wherein S_1 denotes the number of image frames in which the gaze direction deviates from the normal range:

[eye-movement fatigue formula given only as an image in the source]
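One plausible reading of the S_1 count above is sketched below: counting frames whose gaze direction leaves a normal range and taking their fraction of all frames. The angle threshold and per-frame angles are assumptions for the sketch, not values from the source.

```python
# Count S1, the frames whose gaze deviates from the normal range, and
# return its fraction of the window. GAZE_LIMIT_DEG is an assumed bound.
GAZE_LIMIT_DEG = 15.0

def deviated_frame_ratio(gaze_angles_deg):
    s1 = sum(1 for a in gaze_angles_deg if abs(a) > GAZE_LIMIT_DEG)
    return s1 / len(gaze_angles_deg)

angles = [2, -3, 5, 20, 25, -18, 4, 1, 0, 30]  # degrees per frame (invented)
print(deviated_frame_ratio(angles))  # → 0.4
```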

Technical Field

The invention relates to the technical field of bioelectricity signal processing technology, image processing technology and automobile auxiliary driving, in particular to a fatigue driving detection method based on bioelectricity and behavior characteristic fusion.

Background

In recent years, with the rapid development of the global economy, car ownership has continued to increase, the number of traffic accidents has risen year by year, and the resulting losses of life and property are disastrous. Research shows that fatigue driving is a major factor in traffic accidents: when a driver is fatigued, the body's physiological functions decline and consciousness becomes blurred, so the probability of an accident rises sharply, seriously threatening people's lives and property.

At present, fatigue-driving detection is mostly based on vehicle trajectory, driver behavior, or a single physiological index. Such detection has certain defects: it is limited by individual driving habits, driving speed, road environment and operating skill, its results are inaccurate, it is prone to misjudgment or missed judgment, and its robustness suffers.

Based on the above, in order to overcome the existing problems, avoid traffic accidents and protect people's lives and property, the invention provides a real-time fatigue-driving detection method based on the fusion of bioelectricity and driver behavior characteristics, forming a comprehensive fatigue index whose judgment accuracy is significantly higher than that of detection using a single bioelectrical or behavioral characteristic.

Disclosure of Invention

In order to solve the problems in the prior art, the invention provides a fatigue driving detection method based on bioelectricity and behavior feature fusion, which mainly aims at the defects in the prior art and takes multi-feature signal indexes into consideration, so that the driving state of a driver can be detected in real time.

The technical scheme adopted by the invention to solve the technical problem is as follows: a fatigue driving detection method based on bioelectricity and behavior feature fusion, comprising three parts: electroencephalogram signal processing, facial multi-feature processing, and fatigue detection;

the electroencephalogram signal processing collects the four brain waves δ, θ, α and β through wavelet packet decomposition, adopts (α + θ)/β as the fatigue index, and calculates the reconstructed electroencephalogram sample entropy;

the face multi-feature processing judges the state of the driver by analyzing blink, yawning, nodding and eye movement;

the fatigue monitoring is used for judging whether the driver is tired or not according to the fusion result of the signal characteristics;

a fatigue driving detection method based on bioelectricity and behavior feature fusion is specifically carried out according to the following steps:

step S1: collecting original electroencephalogram signals, concentration degree and relaxation degree by using a TGAM module, and then decomposing the collected electroencephalogram signals by using wavelet packets;

step S2: reconstructing the FP1 electrode brain waves after wavelet packet decomposition to obtain a group of brain electrical signals; further carrying out sample entropy calculation;

step S3: analyzing the ratio of the concentration degree to the relaxation degree by using a correlation coefficient analysis method;

step S4: further calculating the change curve of the (θ + α)/β power spectral density ratio over time, and analyzing the curve to determine the fatigue interval;

step S5: carrying out gray processing on the video image of the driver by adopting a weighted average method;

step S6: preprocessing the image by using a machine learning classifier, performing point tracking by using the KLT algorithm, detecting the image by using the Viola-Jones target detection algorithm, and marking the face area of the driver with a rectangular frame;

step S7: determining the position of the eyes by using a proper threshold value, namely finding the edges of the eye regions through an image histogram and a color histogram by using the Sobel edge detection algorithm; identifying the mouth region by using a K-means clustering algorithm;

step S8: the working method for judging whether the driver is tired or not by the facial multi-feature module according to the blinking state of the driver comprises the following steps: when a driver is in a fatigue state, the closing time of eyes can be prolonged, and the fatigue identification of the eyes is carried out by adopting a percentage K value of the closing time of the eyes in unit time;

step S9: the working method of the facial multi-feature module for judging whether the driver is tired or not according to the yawning state of the driver comprises the following steps: detecting and positioning the mouth of the human face, and performing yawning judgment by judging the aspect ratio P value of the mouth;

step S10: through the research on the jumping amplitude of the eye focus position and the related information of the eye jumping duration, the change of the watching direction and the watching time is analyzed, and the driving state of a driver can be detected;

step S11: according to the scheme, the fatigue monitoring module adopts the LVQ-BP artificial neural network to fuse all characteristic indexes to form a comprehensive fatigue index, and when the comprehensive fatigue index reaches a fatigue threshold value, the system can timely send out early warning and remind a driver to stop and rest as soon as possible.

Further, in step S1, the TGAM module is first used to collect the original electroencephalogram signal, concentration degree and relaxation degree, and then the collected electroencephalogram signal is decomposed by wavelet packet; the method not only decomposes the low-frequency approximate part of the signal, but also decomposes the high-frequency detail part of the signal, and can retain the characteristics of the original signal to a greater extent; the decomposed signal is more real, the essence of wavelet transformation is inherited, meanwhile, the defect of wavelet transformation is made up, and the accuracy of electroencephalogram signal analysis is improved; in the present invention, forehead electrode FP1, which is closely related to the degree of fatigue, is selected for explanation;

according to an energy time domain calculation formula:

E = Σ_t |f(t)|^2    (1);

wherein E is the energy value, t is time, and f(t) is the change curve of the corresponding rhythm; the energy values of the θ, α and β waves can thus be obtained; by comprehensively analyzing the energy changes of multiple rhythms, the fatigue index (α + θ)/β is obtained, and this ratio can further be designed as an early-warning value for real-time fatigue driving detection.

Further, in step S2, the FP1 electrode brain waves after wavelet packet decomposition are reconstructed to obtain a group of electroencephalogram signals; the sample entropy is expressed as SampEn(m, r, N), where the embedding dimension m is 2, the similarity tolerance r is 0.2 SD, and the sample size N is 1000;

the sample entropy is calculated every 10 seconds to obtain an average sample-entropy sequence; analysis of this sequence shows a clear difference between driving states: the sample-entropy values of the non-fatigued driving state are concentrated between 0.6 and 0.9, while those of the fatigued driving state are concentrated between 0.3 and 0.6; that is, the sample entropy in the non-fatigued state is higher than in the fatigued state.

Further, in step S3, the ratio of the concentration degree to the relaxation degree is analyzed by using a correlation coefficient analysis method; there is a correlation between concentration and relaxation over the same time period; therefore, a correlation coefficient analysis method is introduced, the ratio of the concentration degree to the relaxation degree is analyzed and observed, and the interval of the ratio represents the fatigue degree.

Further, in step S4, according to the specific values of the FP1 electrode θ wave, Low_α wave, High_α wave, Low_β wave and High_β wave, the power spectral density curve of (θ + α)/β is obtained by processing:

wherein α = (Low_α + High_α)/2 and β = (Low_β + High_β)/2;

further, the change curve of the (θ + α)/β power spectral density ratio over time is calculated to obtain the fatigue-index interval threshold.

Further, in step S5, the image is grayed by the weighted average method, and the obtained grayscale image is binarized, so that the image can be simplified, the data can be reduced, and the contour of the object of interest can be highlighted.

Further, in step S6, the image is first preprocessed by using the machine learning classifier; the contrast of the image is enhanced after graying and binarization, and the machine learning classifier can eliminate unnecessary training data so that emphasis is placed on the key training data;

then, grid marking is carried out on the face area, edge information is extracted by using a Canny algorithm, a face region of interest is identified, and the face region is detected and marked.

Further, in step S7, the position of the eyes is determined by using an appropriate threshold, i.e. the edges of the eye region are found through the image histogram and color histogram using the Sobel edge detection algorithm; the Sobel edge detector uses two masks, one vertical and one horizontal, and extends the Sobel edge detection mask to 5 × 5 dimensions; the mouth region is identified by a K-means clustering algorithm; K-means performs image segmentation using an iterative algorithm that minimizes the sum of the distances from each object to its cluster centroid over all clusters.

Further, in step S8, the working method of the facial multi-feature module determining whether the driver is tired according to the blinking state of the driver is as follows: when a driver is in a fatigue state, the closing time of eyes can be prolonged, the percentage K value of the closing time of the eyes in unit time is adopted for eye fatigue identification, and the larger the value is, the deeper the driving fatigue degree is; the formula is as follows:

K = (cumulative eye-closure time within the unit time / the unit time) × 100%.

Further, in step S9, the working method of the facial multi-feature module for judging whether the driver is fatigued according to the yawning state of the driver is as follows: based on the detection and positioning of the mouth of the human face, the coordinates of the mouth are determined by a formula in which x_face and y_face respectively represent the coordinates of the lower-left origin of the target detection frame obtained by the machine learning algorithm, W_face and H_face are respectively the width and height of the target detection frame, (x_0, y_0) represents the coordinates of the upper-left corner of the mouth rectangle, and W_mouth and H_mouth respectively represent the width and height of the rectangle of the mouth region.

the width-to-height ratio of the mouth is adopted for yawning judgment, and the following formula is adopted:

P = W_mouth / H_mouth.

When the mouth is in a normal state, the value of P is clearly greater than 1; in a yawning state, the P value gradually decreases until it is less than 1; the fatigue state of the mouth is identified by calculating the frequency with which the P value is greater than or equal to 1 within a certain time, according to the following formula:

[mouth-fatigue frequency formula given only as an image in the source]

Further, in step S10, the working method of the facial multi-feature module for judging whether the driver is fatigued according to the eye-movement state of the driver is as follows: by studying the saccade amplitude of the eye focus position and related information such as saccade duration, the changes in gaze direction and gaze time are analyzed, and the driving state of the driver can be detected; fatigue is judged by the following formula, wherein S_1 denotes the number of image frames in which the gaze direction deviates from the normal range:

[eye-movement fatigue formula given only as an image in the source]

Further, in step S11, the fatigue monitoring module fuses the characteristic indexes with an LVQ-BP artificial neural network to form a comprehensive fatigue index. Each characteristic index is first classified by an LVQ (Learning Vector Quantization) neural network; the classified indexes are then fused into a final comprehensive index by a BP neural network, and a fatigue driving detection system based on the LVQ-BP neural network is established to detect the driver's driving state in real time. When the comprehensive fatigue index exceeds the fatigue threshold during driving, the driver is judged to be fatigued, an early warning is issued, and the driver is reminded to stop and rest as soon as possible. After a large number of experiments, the fatigue-state detection accuracy of the method reaches 92%, a significant improvement in fatigue-judgment accuracy over detection with a single bioelectrical or a single behavioral characteristic.
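A heavily simplified sketch of the two-stage fusion idea: an LVQ-style nearest-prototype step turns each feature index into a class label, and a small fixed-weight logistic unit stands in for the trained BP network that fuses the labels into one comprehensive index. All prototypes, weights, and thresholds below are invented for illustration and are not the patent's trained parameters.

```python
# Two-stage LVQ-BP-style fusion sketch with invented prototypes and weights.
import math

# Per-feature prototypes: (typical non-fatigued value, typical fatigued value)
PROTOTYPES = {
    "eeg_ratio": (0.8, 2.0),    # (alpha + theta) / beta fatigue index
    "sampen":    (0.75, 0.45),  # EEG sample entropy
    "perclos_k": (8.0, 25.0),   # eye-closure percentage K
    "yawn_freq": (0.05, 0.4),   # fraction of yawning (P < 1) frames
}
WEIGHTS = {"eeg_ratio": 1.0, "sampen": 1.0, "perclos_k": 1.5, "yawn_freq": 1.0}

def lvq_label(name, value):
    """LVQ-style step: 0 if closer to the awake prototype, 1 if fatigued."""
    awake, tired = PROTOTYPES[name]
    return 0 if abs(value - awake) <= abs(value - tired) else 1

def fuse(features):
    """Logistic combiner standing in for the trained BP fusion network."""
    z = sum(WEIGHTS[n] * lvq_label(n, v) for n, v in features.items())
    return 1.0 / (1.0 + math.exp(-(z - 2.0)))

tired_driver = {"eeg_ratio": 1.9, "sampen": 0.4, "perclos_k": 22.0, "yawn_freq": 0.35}
alert_driver = {"eeg_ratio": 0.9, "sampen": 0.8, "perclos_k": 6.0, "yawn_freq": 0.02}
print(fuse(tired_driver) > 0.5, fuse(alert_driver) < 0.5)  # → True True
```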

Compared with the prior art, the invention has the following beneficial effects: the fatigue driving detection method based on bioelectricity and behavior characteristic fusion realizes real-time detection of the driver's driving state and processes the acquired information effectively without interfering with driving; compared with traditional fatigue detection, the fusion of multiple signal characteristics yields more stable and efficient results with high detection accuracy, ensuring the reliability and efficiency of the system.

Drawings

Fig. 1 is an overall flowchart of a fatigue driving detection algorithm based on fusion of multiple biosignals according to an embodiment of the present invention.

Detailed Description

The technical scheme of the invention is described in further detail below with reference to the drawings and a specific embodiment:

As shown in Fig. 1,

a fatigue driving detection method based on bioelectricity and behavior feature fusion is specifically carried out according to the following steps:

step s1, 50 subjects of 20-26 years of age were selected to participate in the experiment, with a driver's license and a healthy physical condition. All subjects in the experiment were asked to sleep between 00:00 and 06:00 the day before, and were not allowed to use any refreshers. And the experimental selection was made between 13:00 and 14:00, where one is most prone to fatigue.

Step s2: the equipment is adjusted and initialized; the subject wears the electroencephalogram acquisition device with the embedded TGAM module, the Bluetooth switch is turned on and connected to the computer, and the connection quality is tested and checked. The camera is aligned with the subject's face to acquire facial information; eye-information templates of the subject's open and closed eyes are collected, together with a mouth-open yawning template, for training. After the basic data are collected, the experiment proceeds to the fatigue-induction stage.

Step S1: according to the scheme, the electroencephalogram signal module firstly uses the TGAM module to collect original electroencephalogram signals, concentration degree and relaxation degree, and then decomposes the collected electroencephalogram signals by adopting wavelet packets.

Forehead electrode FP1, which is closely related to the degree of fatigue, was selected for illustration.

According to an energy time domain calculation formula:

E = Σ_t |f(t)|^2    (1);

the method comprises the steps of obtaining energy values of theta and α waves and a change curve, wherein E is an energy value, t is time, f (t) is theta, α is a change curve corresponding to the three waves, when β waves are dominant, the consciousness to be tested is awake, when the theta waves and α waves are dominant, the consciousness of a person is fuzzy and even sleeps, and the change trend of single rhythm energy cannot be used for objectively measuring the change of fatigue degree because fatigue is a result of the combined action of multiple factors, therefore, the energy change of multiple rhythms is comprehensively analyzed, and a curve of fatigue index (α + theta)/β changing along with time is obtained.

Step S2: the FP1 electrode brain waves after wavelet packet decomposition are reconstructed to obtain a group of electroencephalogram signals. The sample entropy is expressed as SampEn(m, r, N), where the embedding dimension m is 2, the similarity tolerance r is 0.2 SD, and the sample size N is 1000. The sample entropy is calculated every 10 seconds to obtain an average sample-entropy sequence; analysis of this sequence shows a clear difference between driving states: the sample-entropy values of the non-fatigued driving state are concentrated between 0.6 and 0.9, while those of the fatigued driving state are concentrated between 0.3 and 0.6, i.e. the sample entropy in the non-fatigued state is higher than in the fatigued state.

Step S3: concentration (Attention) and relaxation (Meditation) are correlated within the same time period, and their correlation coefficient is used as a characteristic value for classification. The ratio of concentration to relaxation is analyzed and observed, and the interval in which the ratio lies represents the degree of fatigue. Even after simple preprocessing, the difference between concentration and relaxation is quite obvious. In this embodiment, the concentration value is significantly higher than the relaxation value between t = 120 and t = 150, and significantly lower between t = 330 and t = 390. The change in the ratio can therefore be used to assess the degree of fatigue.

Step S4: according to the specific values of the FP1 electrode θ wave, Low_α wave, High_α wave, Low_β wave and High_β wave, the power spectral density curve of (θ + α)/β is obtained by processing:

wherein α = (Low_α + High_α)/2 and β = (Low_β + High_β)/2.

Further, a Power Spectral Density (PSD) ratio of (θ + α)/β may be calculated as a function of time to obtain a fatigue index interval threshold.
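A sketch of the (θ + α)/β PSD ratio is given below: band powers are taken from a simple periodogram (|FFT|²) of a synthetic signal. The band edges are the conventional EEG ranges, and the sampling rate and signal are assumptions for the sketch.

```python
# (theta + alpha) / beta PSD ratio from a periodogram of a synthetic signal.
import numpy as np

FS = 256  # sampling rate in Hz (assumed)

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].sum())

t = np.arange(FS * 4) / FS  # 4 seconds of signal
sig = (1.2 * np.sin(2 * np.pi * 6 * t)     # theta component, 4-8 Hz
       + 1.0 * np.sin(2 * np.pi * 10 * t)  # alpha component, 8-13 Hz
       + 0.4 * np.sin(2 * np.pi * 20 * t)) # beta component, 13-30 Hz

psd = np.abs(np.fft.rfft(sig)) ** 2
freqs = np.fft.rfftfreq(len(sig), d=1 / FS)
ratio = (band_power(freqs, psd, 4, 8) + band_power(freqs, psd, 8, 13)) \
        / band_power(freqs, psd, 13, 30)
print(round(ratio, 2))  # → 15.25
```

Tracking this ratio over sliding windows yields the time curve from which the text derives the fatigue-index interval threshold.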

Step S5: gray processing of the acquired image is performed first, using a weighted-average algorithm; reprocessing the grayscale image greatly helps the subsequent face detection. The obtained grayscale image is then binarized, a reprocessing step that simplifies the image, reduces the data volume, and highlights the target contour of the region of interest.

Step S6: the face region of interest is grid-marked, edge information is extracted with the Canny algorithm, and the face region of interest (denoted Face-ROI) is identified. Feature points are tracked with the KLT algorithm, the driver's face region is detected with the Viola-Jones object detection algorithm, and the detected face is marked with a rectangular frame.

Step S7: the eye position is determined with an appropriate threshold; that is, the edges of the eye region are found from the image histogram and colour histogram using the Sobel edge detection algorithm. The Sobel edge detector uses two masks, one vertical and one horizontal, and here the Sobel masks are extended to 5 × 5. The mouth region is identified with a K-means clustering algorithm: K-means segments the image with an iterative procedure that minimises the sum, over all clusters, of the distances from each pixel to its cluster centroid.
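The Sobel gradient computation at the core of this step can be sketched as follows. For brevity the standard 3 × 3 masks are shown rather than the 5 × 5 extension the patent mentions; the principle is identical, and all names here are illustrative.

```python
import numpy as np

# Standard 3x3 Sobel masks; the patent extends these to 5x5.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Naive 'valid'-mode 2-D correlation for small kernels."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_magnitude(img):
    """Gradient magnitude from the horizontal and vertical Sobel responses."""
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    return np.hypot(gx, gy)
```

A vertical step edge produces a strong horizontal-gradient response exactly at the transition, which is how the eye-region boundary is located.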

Step S8: in this scheme, the facial multi-feature module judges whether the driver is fatigued from the blinking state as follows: in a fatigued state the eye-closure time is prolonged, so eye fatigue is identified with K, the percentage of eye-closure time per unit time; the larger the value, the deeper the driving fatigue:

K = (number of closed-eye frames / total number of frames per unit time) × 100%

Fatigue is judged from the change of K in each minute. Analysis of the experimental data shows that the eyes are in a fatigued state when K exceeds 16.18%.
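The per-minute K computation and threshold test above can be sketched as follows (function names and the framing in terms of frame counts are ours):

```python
def perclos_k(closed_frames, total_frames):
    """K value: percentage of frames in the unit time
    during which the eyes are closed."""
    return 100.0 * closed_frames / total_frames

def eyes_fatigued(closed_frames, total_frames, threshold=16.18):
    """Eye-fatigue decision with the K > 16.18 % threshold
    derived from the experiments above."""
    return perclos_k(closed_frames, total_frames) > threshold
```

For example, at 30 fps a minute holds 1800 frames, so 300 closed-eye frames give K ≈ 16.7 % and trigger the fatigue decision.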

Step S9: in this scheme, the facial multi-feature module judges whether the driver is fatigued from the yawning state as follows. Based on face detection and mouth localisation, the mouth coordinates are determined from the face detection frame (the exact formula appears only as an image in the source), where x_face and y_face are the coordinates of the lower-left origin of the target detection frame produced by the machine-learning detector, W_face and H_face are the width and height of that frame, (x_0, y_0) are the coordinates of the upper-left corner of the mouth rectangle, and W_mouth and H_mouth are the width and height of the mouth rectangle.

The width-to-height ratio of the mouth is used for yawn judgment:

P = W_mouth / H_mouth

When the mouth is in its normal state, P is clearly greater than 1; in the yawning state, P becomes markedly smaller until it drops below 1. The fatigue state of the mouth is identified from the proportion of time within a given period for which P is less than 1:

L = (number of frames with P < 1 / total number of frames) × 100%

Fatigue is judged from the L value in each minute. From prior knowledge, one yawn takes about 5 seconds, so three yawns per minute correspond to L = 25%. When the driver yawns about three times per minute and L keeps increasing, the driver's fatigue is deepening; when L exceeds 25%, the driver is judged to be fatigued, which accords with the conventional rule.
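The yawn-detection quantities P and L above can be sketched as follows (function names are ours; L is expressed over frame counts per minute):

```python
def mouth_ratio(w_mouth, h_mouth):
    """Width-to-height ratio P of the mouth box:
    P > 1 for a closed mouth, P < 1 for a wide-open (yawning) mouth."""
    return w_mouth / h_mouth

def yawn_l(frames_p_lt_1, total_frames):
    """L value: percentage of frames per minute with P < 1.
    L > 25 % corresponds to roughly three ~5-second yawns per minute."""
    return 100.0 * frames_p_lt_1 / total_frames
```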

Step S10: in this scheme, the facial multi-feature module judges whether the driver is fatigued from the eye-movement state as follows: by studying the saccade amplitude of the eye focus position and the saccade duration, changes in the driver's saccades are analysed to detect the driving state. Fatigue is judged by the following formula, where S1 is the number of image frames with abnormally small saccade amplitude:

S = (S1 / total number of frames) × 100%

Experiments show that the normal focus range is 2.5 ± 1.8 mm; when the driver's saccade amplitude drops to almost zero, the driver's gaze begins to appear dull.

According to conventional theory, if the gaze is dull three or more times per minute, the driver may be in a fatigued state. When the number of frames per minute in which the driver's saccade information deviates from the normal range exceeds 180, i.e. S > 21.98%, the driver is judged to be fatigued.
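The S computation and threshold test can be sketched as follows (function names are ours; the 21.98 % threshold is taken from the experiments reported above):

```python
def saccade_s(small_jump_frames, total_frames):
    """S value: percentage of frames per minute in which the saccade
    amplitude falls outside (below) the normal range."""
    return 100.0 * small_jump_frames / total_frames

def gaze_fatigued(small_jump_frames, total_frames, threshold=21.98):
    """Eye-movement fatigue decision using the S > 21.98 % threshold."""
    return saccade_s(small_jump_frames, total_frames) > threshold
```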

Step S11: in this scheme, the fatigue monitoring module uses a combined LVQ-BP artificial neural network model to fuse all characteristic indices into a comprehensive fatigue index. Each characteristic index is first classified by an LVQ (Learning Vector Quantization) neural network; the classified indices are then fused by a BP neural network into the final comprehensive index, and a fatigue-driving detection system based on the LVQ-BP network monitors the driver's state in real time. In the LVQ network, the input layer is fully connected to the competition-layer neurons, while the competition-layer neurons are locally connected to the output-layer neurons; these connection weights are fixed at 1. The experimental data were uniformly scaled with Z-score values to ensure comparability between data:

Z = (x − μ)/σ, where μ is the mean of the whole data set, σ is its standard deviation, and x is the feature value. The LVQ-BP structure has 5 hidden layers in total, with one input layer and one output layer. The network connection weights and neuron thresholds are initialised in (−1, 1), and a minimum error ε and a maximum number of training epochs n are set. Training converged after 19 iterations with a final loss of 0.00191.
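The Z-score scaling applied before fusion can be sketched as follows (the function name is ours):

```python
import numpy as np

def z_score(x):
    """Z-score normalisation used before fusion: Z = (x - mu) / sigma."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()
```

After this step every characteristic index has zero mean and unit standard deviation, so indices on different scales (entropy, PSD ratio, K, L, S) are comparable inputs to the network.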

When the comprehensive fatigue index reaches the fatigue threshold, the system issues a timely warning and reminds the driver to stop and rest as soon as possible. After extensive experiments, the fatigue-detection accuracy of the fused comprehensive index reaches 92%, a clear improvement over detection with a single bioelectric feature or a single behavioural feature, as shown in Table 1.


The method has the advantage that multiple characteristic signal indices are fused for driver fatigue detection, overcoming the instability of traditional single-signal fatigue detection, and the collected information is processed effectively without disturbing the driver. Compared with traditional fatigue detection, the multi-feature fusion method gives more stable and efficient results, ensuring the reliability and efficiency of the system.

The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
