Abstract: Existing support vector machines (SVMs) for large-scale data classification are sensitive to noisy samples. To address this problem, we define the soft kernel convex hull and introduce the pinball loss function, yielding a new soft kernel convex hull support vector machine for large-scale noisy datasets (SCH-SVM). SCH-SVM first defines the notion of a soft kernel convex hull, then selects the soft kernel convex hull vectors that represent the geometric profile of the samples in the kernel space, takes their corresponding original-space samples as the training set, and finally uses the pinball loss function to find the maximum quantile distance between the two classes' soft kernel convex hulls. Theoretical analysis and experimental results confirm the effectiveness of the proposed classifier in terms of training time, noise resistance, and number of support vectors.
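The pinball loss referred to above is the quantile loss L_tau(u) = tau*u for u >= 0 and (tau-1)*u otherwise; unlike the hinge loss, it also penalizes correctly classified points that lie deep inside the margin, which is what gives the classifier its resistance to noise. The NumPy sketch below illustrates this loss together with a toy linear subgradient-descent trainer; the function names and the training loop are illustrative assumptions, not the SCH-SVM algorithm itself (which operates on soft kernel convex hull vectors selected in kernel space).

```python
import numpy as np

def pinball_loss(u, tau=0.5):
    """Pinball (quantile) loss: tau * u for u >= 0, (tau - 1) * u otherwise.

    Unlike the hinge loss, it also penalizes correctly classified points far
    inside the margin, which dampens the influence of noisy samples.
    """
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def train_linear_pinsvm(X, y, tau=0.5, C=1.0, lr=1e-3, epochs=200):
    """Toy subgradient descent for a linear pinball-loss SVM (illustrative only).

    X: (n, d) training samples; y: (n,) labels in {-1, +1}.
    Minimizes 0.5 * ||w||^2 + C * sum_i pinball_loss(1 - y_i * (w . x_i + b), tau).
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margin = 1.0 - y * (X @ w + b)
        # Subgradient of the pinball loss with respect to the margin term.
        g = np.where(margin >= 0, tau, tau - 1.0)
        grad_w = w - C * (g * y) @ X
        grad_b = -C * np.sum(g * y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```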
Funding: supported by the National Natural Science Foundation of China (Nos. 61227004, 61370120, 61390510, 61300065, and 61402024), the Beijing Municipal Natural Science Foundation, China (No. 4142010), the Beijing Municipal Commission of Education, China (No. km201410005013), and the Funding Project for Academic Human Resources Development in Institutions of Higher Learning under the Jurisdiction of Beijing Municipality, China.
Abstract: We propose a framework for detecting hand articulations from a monocular depth image using curvature scale space (CSS) descriptors. We extract the hand contour from the input depth image and locate the fingertips and finger-valleys of the contour using the local extrema of a modified CSS map of the contour. We then recover undetected fingertips from the local change of depth at points in the interior of the contour. Compared with traditional appearance-based approaches that use either angle detectors or convex-hull detectors, the modified CSS descriptor extracts the fingertips and finger-valleys more precisely because it is more robust to noisy or corrupted data; moreover, the local depth extrema recover the fingertips of bent fingers well, whereas traditional appearance-based approaches hardly work without matching hand models. Experimental results show that our method captures hand articulations more precisely than three state-of-the-art appearance-based approaches.
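As a rough illustration of the contour-and-curvature stage described above, the Python sketch below extracts a hand contour from a thresholded depth image, smooths it at several Gaussian scales, and keeps curvature extrema as fingertip and finger-valley candidates. It is a minimal sketch assuming an OpenCV/SciPy environment; the depth thresholds, scales, and function names are assumptions, and it does not reproduce the paper's modified CSS map or its depth-based recovery of bent fingertips.

```python
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter1d

def hand_contour(depth, near=300, far=900):
    """Extract the largest contour from a depth image (in mm), assuming the
    hand is the largest blob inside the [near, far] depth range."""
    mask = ((depth > near) & (depth < far)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return max(contours, key=cv2.contourArea).squeeze().astype(float)  # (N, 2)

def curvature(contour, sigma):
    """Signed curvature of a closed contour smoothed at scale sigma."""
    x = gaussian_filter1d(contour[:, 0], sigma, mode="wrap")
    y = gaussian_filter1d(contour[:, 1], sigma, mode="wrap")
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.power(dx**2 + dy**2, 1.5)

def css_extrema(contour, sigmas=(4, 8, 16), k_thresh=0.05):
    """Candidate fingertips (curvature maxima) and finger-valleys (minima)
    of the curvature averaged over the chosen scales (a CSS-style summary)."""
    kappa = np.stack([curvature(contour, s) for s in sigmas]).mean(axis=0)
    tips = np.where((kappa > k_thresh) &
                    (kappa >= np.roll(kappa, 1)) & (kappa >= np.roll(kappa, -1)))[0]
    valleys = np.where((kappa < -k_thresh) &
                       (kappa <= np.roll(kappa, 1)) & (kappa <= np.roll(kappa, -1)))[0]
    return tips, valleys
```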