Abstract
Existing pupil localization methods are easily constrained by the quality of the pupil image. To address this, a CNN is used to extract local image features, and a Transformer encoder is then applied to capture global dependencies, recovering more accurate pupil-center information. The proposed method is compared against the mainstream DeepEye and VCF pupil localization models on a public dataset. The results show that the proposed hybrid-structure Vision Transformer method improves the pupil-center detection rate within a 5-pixel error by 30% over DeepEye and by 20% over VCF.
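The architecture itself is not given in this record, so the following is a minimal PyTorch sketch of the hybrid idea described in the abstract: a small convolutional stack (a stand-in for the ResNeSt backbone named in the keywords) extracts local features, a Transformer encoder models global dependencies among the resulting feature tokens, and a linear head regresses the pupil center. All layer sizes, the 64x64 input resolution, and the name HybridPupilLocator are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a hybrid CNN + Transformer-encoder pupil-center
# regressor, under the assumptions stated above.
import torch
import torch.nn as nn


class HybridPupilLocator(nn.Module):
    def __init__(self, embed_dim: int = 128, num_heads: int = 4, depth: int = 2):
        super().__init__()
        # Local feature extractor (placeholder for a ResNeSt-style backbone).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Learnable positional embedding for the flattened feature-map tokens
        # (sized for 64x64 inputs -> 8x8 = 64 tokens; an assumption).
        self.pos = nn.Parameter(torch.zeros(1, 64, embed_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Regression head: mean-pooled tokens -> (x, y) pupil center.
        self.head = nn.Linear(embed_dim, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.cnn(x)                            # (B, C, H', W') local features
        tokens = f.flatten(2).transpose(1, 2)      # (B, H'*W', C) token sequence
        tokens = self.encoder(tokens + self.pos)   # global dependencies
        return self.head(tokens.mean(dim=1))       # (B, 2) pupil center


if __name__ == "__main__":
    model = HybridPupilLocator()
    eye = torch.randn(4, 1, 64, 64)  # batch of grayscale eye crops
    print(model(eye).shape)          # torch.Size([4, 2])
```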
Authors
WANG Li; WANG Changyuan (School of Opto-Electronical Engineering, Xi'an Technological University, Xi'an 710021, China; School of Computer Science and Engineering, Xi'an Technological University, Xi'an 710021, China)
Source
Journal of Xi'an Technological University (西安工业大学学报)
2023, No. 6, pp. 561-567 (7 pages)
Funding
National Natural Science Foundation of China (52072293).
Keywords
deep learning
pupil localization
Vision Transformer
Residual Convolutional Neural Networks with Split Attention (ResNeSt)