Key frame extraction of motion video based on spatial-temporal feature locally preserving
Abstract: To improve the motion expressiveness and compression rate of motion video key frames, a key frame extraction technique combining flexible pose estimation with spatial-temporal feature embedding was proposed. Firstly, a Spatial-Temporal feature embedded Flexible Mixture-of-Parts articulated human model (ST-FMP) was built by preserving the temporal continuity of human motion, and the N-best algorithm, constrained by the motion continuity of uncertain body parts, was used to estimate the human pose parameters in a single frame. Then, the relative positions and motion directions of body parts were used to describe human motion features, and the Laplacian score was applied for dimensionality reduction, yielding discriminative motion feature vectors that preserve local topological structure. Finally, the Iterative Self-Organizing Data Analysis Technique (ISODATA) algorithm was used to determine the key frames dynamically. In key frame extraction experiments on aerobics videos, ST-FMP raised the recognition accuracy of uncertain body parts by about 15 percentage points over the Flexible Mixture-of-Parts articulated human model (FMP), and achieved a key frame extraction accuracy of 81%, outperforming Key Frame Extraction based on prior knowledge (KFE) and key frame extraction based on motion blocks. The proposed algorithm is sensitive to human motion features and human pose, and is suitable for annotating and reviewing motion videos.
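The per-frame motion descriptor outlined above combines the relative positions of body parts with their motion directions. A minimal sketch is given below; the array layout, the choice of part 0 as the root, and the function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def motion_features(joints):
    """Build a per-frame motion descriptor from 2-D part positions.

    joints: array of shape (n_frames, n_parts, 2).
    Returns (n_frames, 4 * n_parts): relative position of each part
    w.r.t. a root part, concatenated with each part's unit motion
    direction between consecutive frames.
    """
    joints = np.asarray(joints, dtype=float)
    # positions relative to part 0 (assumed root, e.g. the torso)
    root = joints[:, :1]
    rel = (joints - root).reshape(len(joints), -1)
    # per-part velocity; the first frame gets zero velocity
    vel = np.diff(joints, axis=0, prepend=joints[:1])
    norm = np.linalg.norm(vel, axis=2, keepdims=True)
    # normalize to motion direction, leaving zeros where parts are still
    direc = np.divide(vel, norm, out=np.zeros_like(vel), where=norm > 0)
    return np.concatenate([rel, direc.reshape(len(joints), -1)], axis=1)
```

For a sequence with 4 parts, each frame yields an 8-dimensional relative-position part and an 8-dimensional direction part, i.e. a 16-dimensional raw feature vector before the Laplacian-score reduction.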
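The dimensionality-reduction step uses the Laplacian score, which ranks features by how well they preserve the local neighborhood structure of the data (lower scores are better). The sketch below follows the standard Laplacian Score formulation with a kNN graph and heat-kernel weights; the neighborhood size and kernel width are illustrative assumptions.

```python
import numpy as np

def laplacian_score(X, k=5, t=1.0):
    """Laplacian score of each feature column of X (n_samples, n_features).

    Builds a kNN graph with heat-kernel weights, then scores each
    feature r as f_r' L f_r / f_r' D f_r after D-weighted centering.
    Lower score => the feature better preserves local structure.
    """
    n = X.shape[0]
    # pairwise squared distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # kNN affinity matrix with heat-kernel weights
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # skip self at index 0
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    W = np.maximum(W, W.T)                  # symmetrize
    D = W.sum(1)                            # degree vector
    L = np.diag(D) - W                      # graph Laplacian
    scores = np.empty(X.shape[1])
    for r in range(X.shape[1]):
        f = X[:, r]
        f = f - (f @ D) / D.sum()           # remove D-weighted mean
        den = f @ (D * f)
        scores[r] = (f @ L @ f) / den if den > 1e-12 else np.inf
    return scores
```

Keeping the lowest-scored features yields the discriminative motion feature vector used for clustering.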
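The final step clusters the per-frame feature vectors with ISODATA, which adapts the number of clusters by splitting high-variance clusters and merging nearby centroids, so the number of key frames need not be fixed in advance. A simplified sketch follows, with the key frames taken as the frames closest to the final centroids; all thresholds are illustrative, not values from the paper.

```python
import numpy as np

def isodata_keyframes(F, k_init=3, min_size=2, split_std=0.6,
                      merge_dist=0.25, iters=8, seed=0):
    """Simplified ISODATA-style key frame selection.

    F: per-frame feature vectors, shape (n_frames, n_dims).
    Returns sorted indices of the frames nearest each final centroid.
    """
    rng = np.random.default_rng(seed)
    C = F[rng.choice(len(F), size=min(k_init, len(F)),
                     replace=False)].astype(float)
    for _ in range(iters):
        # assign frames to the nearest centroid, drop undersized clusters
        lab = np.linalg.norm(F[:, None] - C[None], axis=2).argmin(1)
        C = np.array([F[lab == c].mean(0) for c in range(len(C))
                      if (lab == c).sum() >= min_size])
        lab = np.linalg.norm(F[:, None] - C[None], axis=2).argmin(1)
        # split: a large cluster with high spread along one axis divides
        out = []
        for c in range(len(C)):
            pts = F[lab == c]
            if len(pts) >= 2 * min_size:
                std = pts.std(0)
                j = std.argmax()
                if std[j] > split_std:
                    e = np.zeros_like(C[c])
                    e[j] = std[j]
                    out.extend([C[c] + e, C[c] - e])
                    continue
            out.append(C[c])
        C = np.array(out)
        # merge: centroid pairs closer than merge_dist are averaged
        merged, used = [], set()
        for a in range(len(C)):
            if a in used:
                continue
            for b in range(a + 1, len(C)):
                if b not in used and np.linalg.norm(C[a] - C[b]) < merge_dist:
                    merged.append((C[a] + C[b]) / 2)
                    used.update({a, b})
                    break
            else:
                merged.append(C[a])
                used.add(a)
        C = np.array(merged)
    # key frame = the frame nearest each surviving centroid
    d = np.linalg.norm(F[:, None] - C[None], axis=2)
    return sorted({int(d[:, c].argmin()) for c in range(len(C))})
```

Because the split and merge rules adjust the cluster count on the fly, the number of extracted key frames tracks the number of distinct motion phases in the sequence.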
Source: Journal of Computer Applications (《计算机应用》, CSCD, Peking University Core Journal), 2017, Issue 9, pp. 2605-2609 (5 pages).
Fund: Science and Technology Research Project of Henan Province (152102210329, 172102310635).
Keywords: key frame extraction; motion video; pose estimation; articulated human model with Flexible Mixture-of-Parts (FMP); feature selection

相关作者

内容加载中请稍等...

相关机构

内容加载中请稍等...

相关主题

内容加载中请稍等...

浏览历史

内容加载中请稍等...
;
使用帮助 返回顶部