
Speech Separation Method Based on Cooperative Training of Generative Adversarial Network (Cited by: 3)

Abstract: Most speech separation methods based on deep neural networks are trained in the frequency domain and, during training, focus only on the features of the target speech while ignoring the features of the interfering speech. To address this, a speech separation method based on cooperative training of a generative adversarial network is proposed. The method takes the time-domain waveform as the network input, preserving the phase information carried by signal delays. At the same time, through the adversarial mechanism, the generative model and the discriminative model learn the features of the target speech and of the interfering speech respectively, which improves the effectiveness of speech separation. In the experiments, comparative tests are performed on the Aishell dataset. The results show that the proposed method achieves good separation performance under three SNR conditions and recovers the high-frequency band information of the target speech more accurately.
Authors: Wang Tao (王涛); Quan Haiyan (全海燕) (Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan 650500, China)
Source: Journal of Signal Processing (《信号处理》), 2020, No. 6, pp. 1013-1019 (7 pages); indexed in CSCD and the Peking University Core Journals list.
Fund: National Natural Science Foundation of China (41364002).
Keywords: speech separation; time-domain waveform; generative adversarial network; cooperative training
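
The abstract describes the approach only at a high level. As an illustration of the general idea (not the authors' implementation), the sketch below shows one standard way an adversarial pair can be trained jointly on raw waveforms in PyTorch: the generator separates the target speech directly from the time-domain mixture, while the discriminator is trained to tell clean target speech from the separator's estimates. All module names, layer sizes and loss weights are assumptions made for this example.

```python
# Minimal sketch (not the authors' implementation): adversarial joint training
# of a time-domain speech separator in PyTorch. Layer sizes, kernel widths,
# optimizers and the L1 weight are illustrative assumptions only.
import torch
import torch.nn as nn

class Separator(nn.Module):
    """Generator: maps a mixture waveform to an estimate of the target speech."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(64, 1, kernel_size=15, padding=7),
        )

    def forward(self, mixture):            # mixture: (batch, 1, samples)
        return self.net(mixture)           # estimate of the target waveform

class Discriminator(nn.Module):
    """Scores a waveform: high for clean target speech, low for estimates."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=15, stride=4), nn.LeakyReLU(0.2),
            nn.Conv1d(64, 64, kernel_size=15, stride=4), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, wav):
        return self.net(wav)               # (batch, 1) logit

G, D = Separator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
adv_loss = nn.BCEWithLogitsLoss()
rec_loss = nn.L1Loss()

def train_step(mixture, target):
    # Discriminator step: real = clean target speech, fake = separator output.
    fake = G(mixture).detach()
    real_logit, fake_logit = D(target), D(fake)
    d_loss = adv_loss(real_logit, torch.ones_like(real_logit)) + \
             adv_loss(fake_logit, torch.zeros_like(fake_logit))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator and stay close to the target waveform.
    est = G(mixture)
    est_logit = D(est)
    g_loss = adv_loss(est_logit, torch.ones_like(est_logit)) + \
             100.0 * rec_loss(est, target)  # 100.0 is an assumed weighting
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Because the separator operates on raw samples rather than on a magnitude spectrogram, the phase information mentioned in the abstract never has to be estimated separately; it is carried implicitly in the waveform the network outputs.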