Abstract
Objective Driving fatigue is one of the leading causes of road traffic accidents. To address the poor performance of existing methods in recognizing eye states when the driver's face is partially occluded, a driver eye-fatigue detection method based on the self-quotient image and gradient image co-occurrence matrix is proposed. Method An SSD (single shot multibox detector) face detector with a residual network (ResNet) as its front network is used to locate the valid face region in each video frame, and a facial landmark detection algorithm then segments the local eye-region image. A co-occurrence matrix model of the self-quotient image and the gradient image of the driver's eye region is built; the statistical features of the matrix are analyzed, and the most discriminative features are selected to judge whether the eyes are open or closed. The percentage of eyelid closure (PERCLOS) and the maximum closing duration (MCD) are then combined to determine the driver's fatigue state. Result Driving was simulated on a six-degree-of-freedom vehicle performance virtual simulation platform, and videos of the drivers' faces were collected and analyzed. The proposed method effectively recognizes the open/closed state of the eyes when the face is occluded, with an accuracy of 99.12%; the accuracy without occlusion is 98.73%, and the algorithm processes video at about 32 frames/s. Comparison method 1, which combines histogram of oriented gradients features with a support vector machine classifier for face detection and judges the eye state by the eye aspect ratio, performs poorly under facial occlusion. Comparison method 2, which uses a convolutional neural network (CNN) to judge the eye state, reaches 98.02% accuracy under facial occlusion but performs poorly in blink detection. Conclusion The fatigue detection method based on the self-quotient image and gradient image co-occurrence matrix can effectively recognize eye open/closed states and the driver's fatigue state under facial occlusion, with fast detection speed and high accuracy.
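The fatigue decision combines the PERCLOS and MCD indicators described above. As a minimal sketch of how these two quantities can be computed from a sequence of per-frame open/closed labels, the Python snippet below assumes a fixed frame rate; the function name `perclos_and_mcd`, the 30 frames/s rate, and the decision thresholds are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def perclos_and_mcd(eye_closed, fps=30):
    """Compute PERCLOS and the maximum closing duration (MCD)
    from a per-frame sequence of eye-state labels.

    eye_closed : iterable of 0/1 flags, 1 = eye judged closed in that frame.
    fps        : video frame rate in frames per second (assumed value).
    """
    eye_closed = np.asarray(eye_closed, dtype=bool)
    if eye_closed.size == 0:
        return 0.0, 0.0

    # PERCLOS: fraction of frames in which the eyes are closed.
    perclos = float(eye_closed.mean())

    # MCD: longest run of consecutive closed-eye frames, converted to seconds.
    longest = run = 0
    for closed in eye_closed:
        run = run + 1 if closed else 0
        longest = max(longest, run)
    mcd = longest / fps
    return perclos, mcd

# Illustrative decision rule only; the thresholds are assumptions,
# not the ones used in the paper.
perclos, mcd = perclos_and_mcd([0, 1, 1, 1, 0, 0, 1, 1, 0], fps=30)
fatigued = (perclos > 0.4) or (mcd > 2.0)
```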
Objective Driver fatigue is known to be directly related to road safety and is a leading cause of traffic fatalities and injuries. Previous studies have proposed many fatigue driving detection methods to detect and analyze the fatigue status of seated drivers. These methods aim to improve detection accuracy and usually rely on driving behavioral features (e.g., steering wheel motion, lane keeping) or physiological features (e.g., eye and face movement, heart rate variability, electroencephalogram, electrooculogram, electrocardiogram). Physiological features such as eye movement are widely used to predict driver fatigue because they are nonintrusive and independent of the driving context. However, fatigue driving detection under occluded-face conditions is challenging and requires a robust eye feature extraction algorithm. The literature shows that most eye-tracking methods require high-resolution images, which leads to low processing speed and makes real-time eye tracking difficult. In this study, a fatigue driving detection method based on the self-quotient image (SQI) and gradient image co-occurrence matrix is presented. The method improves on the gray-level and gradient co-occurrence matrix and provides a new approach for predicting the fatigue status of drivers in a short time. Method In this study, a six-degree-of-freedom vibration table and a driving simulator were used to model the driving context. The eye state of the seated driver was recorded in real time by an RGB camera mounted in front of the driver. A single shot multibox detector (SSD) face detection algorithm with ResNet-10 as the front network was used to extract the driver's facial region from the recorded video. An ensemble-of-regression-trees facial landmark location algorithm was used to calibrate the driver's eye area in each frame of the recorded video. The SQI and the gradient image of the eye region in each frame were combined to obtain the co-occurrence matrix of the driver's eye image. The statistical features of the co-occurrence matrix were then analyzed, and the features with the best discriminative power were selected to judge whether the eyes were open or closed; PERCLOS and MCD were then combined to determine the driver's fatigue state.
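For readers unfamiliar with the self-quotient image or the co-occurrence construction, the sketch below shows one plausible way to build an SQI-gradient co-occurrence matrix for an eye patch and to extract two common statistical features (energy and entropy). The Gaussian smoothing parameters, the Sobel gradient, the 16-level quantization, and the function name `sqi_gradient_cooccurrence` are assumptions made for illustration; the paper's exact formulation and selected features are not reproduced here. It uses NumPy and OpenCV (`opencv-python`).

```python
import cv2
import numpy as np

def sqi_gradient_cooccurrence(eye_gray, levels=16, ksize=11, sigma=3.0):
    """Build a co-occurrence matrix between the self-quotient image (SQI)
    and the gradient magnitude of a grayscale eye-region patch.

    Follows the classic gray-gradient co-occurrence construction with the
    gray channel replaced by the SQI; parameter values are illustrative.
    """
    eye = eye_gray.astype(np.float32) + 1e-6

    # Self-quotient image: original image divided by its smoothed version.
    smoothed = cv2.GaussianBlur(eye, (ksize, ksize), sigma) + 1e-6
    sqi = eye / smoothed

    # Gradient magnitude via Sobel operators.
    gx = cv2.Sobel(eye, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(eye, cv2.CV_32F, 0, 1, ksize=3)
    grad = np.sqrt(gx ** 2 + gy ** 2)

    def quantize(img, n):
        # Map values to integer levels 0..n-1.
        lo, hi = float(img.min()), float(img.max())
        if hi == lo:
            return np.zeros(img.shape, dtype=np.int32)
        q = (n * (img - lo) / (hi - lo)).astype(np.int32)
        return np.minimum(q, n - 1)

    qs, qg = quantize(sqi, levels), quantize(grad, levels)

    # Co-occurrence matrix: normalized joint histogram of
    # (SQI level, gradient level) at the same pixel.
    cm = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(cm, (qs.ravel(), qg.ravel()), 1.0)
    cm /= cm.sum()

    # Two common statistical features of such a matrix.
    energy = np.sum(cm ** 2)
    entropy = -np.sum(cm[cm > 0] * np.log2(cm[cm > 0]))
    return cm, energy, entropy
```

In a full pipeline, features such as these would be computed for each detected eye region and fed to the open/closed decision rule; which features are actually retained is determined by the analysis described in the paper.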
Authors
潘剑凯
柳政卿
王秋成
Pan Jiankai; Liu Zhengqing; Wang Qiucheng (College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China)
Source
《中国图象图形学报》
CSCD
Peking University Core Journals (北大核心)
2021, No. 1, pp. 154-164 (11 pages)
Journal of Image and Graphics