Abstract
Combining neural networks, parallel multi-feature vectors, and attention mechanisms helps improve the performance of speech emotion recognition. On this basis, starting from the previously extracted DFCC parameters, I-DFCC and Mid-DFCC feature parameters are extracted, and the Fisher ratio is used to select feature parameters to form F-DFCC. The F-DFCC feature parameters are then compared and fused with LPCC and MFCC feature parameters, and the fused features are fed into an ECAPA-TDNN model containing a bidirectional LSTM network and an attention mechanism. Finally, the effectiveness of the F-DFCC fused feature parameters is verified on the CASIA and RAVDESS datasets. The experimental results show that, compared with the single F-DFCC feature parameter, the accuracy (WA), recall (UA), and F1-score of the F-DFCC fused features are improved by 0.0351, 0.0311, and 0.0313 on the CASIA dataset, and by 0.0245, 0.0358, and 0.0332 on the RAVDESS dataset, respectively. On both datasets, the "surprised" emotion achieves the highest recognition accuracy, 0.94, and the recognition rates of the F-DFCC fused feature parameters on the 6-class and 8-class emotion tasks are both higher than those obtained with the other feature parameters.
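Since the abstract only outlines the pipeline, the following minimal Python sketch illustrates one plausible reading of the Fisher-ratio feature selection and feature fusion step. The function names, the top_k value, and the feature shapes are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def fisher_ratio(features, labels):
    """Per-dimension Fisher ratio: between-class variance / within-class variance.

    features: (n_samples, n_dims) array of cepstral feature parameters.
    labels:   (n_samples,) integer emotion labels.
    """
    classes = np.unique(labels)
    overall_mean = features.mean(axis=0)
    between = np.zeros(features.shape[1])
    within = np.zeros(features.shape[1])
    for c in classes:
        x_c = features[labels == c]
        between += len(x_c) * (x_c.mean(axis=0) - overall_mean) ** 2
        within += ((x_c - x_c.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def select_f_dfcc(dfcc, i_dfcc, mid_dfcc, labels, top_k=24):
    """Stack the DFCC variants and keep the top_k dimensions by Fisher ratio
    (a hypothetical construction of F-DFCC; top_k is an assumed value)."""
    stacked = np.concatenate([dfcc, i_dfcc, mid_dfcc], axis=1)
    idx = np.argsort(fisher_ratio(stacked, labels))[::-1][:top_k]
    return stacked[:, idx]

# Fusion with LPCC and MFCC before the classifier (hypothetical shapes):
# fused = np.concatenate([f_dfcc, lpcc, mfcc], axis=1)
```

The fused feature matrix would then be passed to the recognition network (here, the ECAPA-TDNN with BiLSTM and attention described in the abstract); that model itself is not sketched here.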
Authors
何朝霞
朱嵘涛
罗辉
HE Zhaoxia; ZHU Rongtao; LUO Hui (College of Arts and Science, Yangtze University, Jingzhou 434023, China; College of Computer and Control Engineering, Northeast Forestry University, Harbin 150040, China)
Source
《现代电子技术》
Peking University Core Journal (北大核心)
2024, No. 6, pp. 131-136 (6 pages)
Modern Electronics Technique
Funding
Young Scientists Fund of the National Natural Science Foundation of China (62101114)
Guiding Project of the Scientific Research Program of the Hubei Provincial Department of Education (B2022474)
Joint Fund Project of the Jingzhou Science and Technology Bureau and the College of Arts and Science, Yangtze University (2023LHX04)