
The Canonical Correlation Analysis of Facial Expression Animation and Speech (cited by: 4)
Abstract: Facial expression animation, as a component of speech-driven facial animation, plays an important role in enhancing the realism of facial animation. However, previous work has not quantitatively analyzed the relationship between facial expression motion and speech. In this paper, we adopt canonical correlation analysis (CCA) to quantitatively analyze the correlation between facial expression motion and speech and to draw intuitive, quantitative conclusions. First, we compute the canonical correlation coefficients between facial expression motion and speech to measure their degree of correlation. We then analyze the canonical loadings, canonical cross-loadings, and related statistics, and uncover the specific internal relations between the individual components of the two sets, from which the intuitive, quantitative conclusions follow. Finally, we verify the stability of these conclusions. The analysis shows that the two are strongly correlated, and it reveals the specific internal relations between the components of facial expression motion and the acoustic features of speech. These results can serve as a theoretical reference and an evaluation basis for speech-driven facial animation techniques.
Source: Journal of Computer-Aided Design & Computer Graphics (EI, CSCD, Peking University core journal), 2011, No. 5, pp. 805-812.
Funding: National Natural Science Foundation of China (60970086); NSFC-Guangdong Joint Fund Key Project (U0935003).
Keywords: facial expression animation; canonical correlation analysis


