To address the low accuracy of static hand-gesture recognition in unconstrained environments, this paper proposes a deep neural network that incorporates a Grayscale Image of the Hand Skeleton (GHS), constructed from hand keypoints and the connections between them. The network takes the GHS image and the RGB image as input, uses YOLOv3 as its backbone, and adds a dilated-convolution residual module. After the GHS and RGB features are fused, an SE (Squeeze-and-Excitation) module rescales the features on each channel, and the RReLU activation function is used in place of Leaky ReLU. By enhancing hand-image features with the keypoints and their connection information, the method enlarges the inter-class differences between gestures while reducing the influence of the unconstrained environment, thereby improving recognition accuracy. Experimental results show that the proposed method achieves the highest average accuracy among the compared methods: 99.68% on the Microsoft Kinect & Leap Motion dataset and 99.8% on the Creative Senz3D dataset.
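The two activation-and-attention components named in the abstract can be illustrated concretely. The following is a minimal NumPy sketch of SE-style channel rescaling and the RReLU activation, not the paper's actual implementation; the function names, weight shapes, and the reduction layout are illustrative assumptions.

```python
import numpy as np

def se_scale(feature_map, w1, w2):
    """Squeeze-and-Excitation channel scaling on a (C, H, W) feature map.

    Squeeze: global average pool per channel; Excitation: two small fully
    connected layers (ReLU then sigmoid); Scale: multiply each channel of
    the input by its learned gate. Weight shapes are illustrative:
    w1 is (C // r, C) and w2 is (C, C // r) for reduction ratio r.
    """
    squeezed = feature_map.mean(axis=(1, 2))          # squeeze: (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)           # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid gates, (C,)
    return feature_map * gates[:, None, None]         # per-channel rescale

def rrelu(x, lower=1 / 8, upper=1 / 3, training=False, rng=None):
    """Randomized leaky ReLU: the negative slope is sampled uniformly from
    [lower, upper] during training and fixed to the mean slope at inference,
    which adds noise that can regularize compared with plain Leaky ReLU."""
    if training:
        rng = rng or np.random.default_rng()
        slope = rng.uniform(lower, upper, size=x.shape)
    else:
        slope = (lower + upper) / 2.0
    return np.where(x >= 0, x, slope * x)
```

In a real network these operations would act on convolutional feature maps inside the backbone; the sketch only shows the per-channel gating arithmetic and the train/inference behavior of RReLU.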
The gender recognition problem has attracted the attention of the computer vision community due to its importance in many applications (e.g., surveillance and human–computer interaction [HCI]). Images of varying levels of illumination, occlusion, and other factors are captured in uncontrolled environments. Iris and facial recognition technology cannot be used on these images because iris texture is unclear in these instances, and faces may be covered by a scarf, hijab, or mask due to the COVID-19 pandemic. The periocular region is a reliable source of information because it features rich discriminative biometric features. However, most existing gender classification approaches have been designed based on hand-engineered features or validated in controlled environments. Motivated by the superior performance of deep learning, we proposed a new method, PeriGender, inspired by the design principles of the ResNet and DenseNet models, that can classify gender using features from the periocular region. The proposed system utilizes a dense concept in a residual model. Through skip connections, it reuses features on different scales to strengthen discriminative features. Evaluations of the proposed system on challenging datasets indicated that it outperformed state-of-the-art methods. It achieved 87.37%, 94.90%, 94.14%, 99.14%, and 95.17% accuracy on the GROUPS, UFPR-Periocular, Ethnic-Ocular, IMP, and UBIPr datasets, respectively, in the open-world (OW) protocol. It further achieved 97.57% and 93.20% accuracy for adult periocular images from the GROUPS dataset in the closed-world (CW) and OW protocols, respectively. The results showed that the middle region between the eyes plays a crucial role in the recognition of masculine features, and feminine features can be identified through the eyebrow, upper eyelids, and corners of the eyes. Furthermore, using a whole region without cropping enhances PeriGender's learning capability, improving its understanding of both eyes' global structure without discontinuity.
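The "dense concept in a residual model" combines DenseNet-style feature reuse (each layer sees a concatenation of all earlier features) with a ResNet-style identity skip connection. The following is a minimal NumPy sketch of that idea on flat feature vectors, not PeriGender's actual architecture; the function name, weight shapes, and use of vectors rather than convolutional feature maps are illustrative assumptions.

```python
import numpy as np

def dense_residual_block(x, weights):
    """Sketch of dense feature reuse inside a residual block.

    Each layer receives the concatenation of the block input and every
    earlier layer's output (dense reuse), and the block input is added
    back at the end via an identity skip connection (residual).
    Each weight matrix must map the current concatenated width back to
    the width of x so the final addition is shape-compatible.
    """
    features = [x]
    for w in weights:
        concat = np.concatenate(features)     # dense: reuse all earlier features
        out = np.maximum(0.0, w @ concat)     # linear layer + ReLU
        features.append(out)
    return features[-1] + x                   # residual: identity skip connection
```

In a convolutional network the concatenation would run along the channel axis and the layers would be convolutions, but the flow of information is the same: dense connections strengthen discriminative features at multiple depths while the skip connection preserves the original signal.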
Funding: The authors are thankful to the Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia for funding this work through the Research Group No. RGP-1439-067.