
Reader emotion prediction fusing an attention mechanism with a CNN-GRNN model (Cited by: 6)

Attention-based convolutional-gated recurrent neural network for reader's emotion prediction
Abstract: Mainstream text-oriented reader emotion prediction methods struggle to capture the complex semantic and syntactic information in a document and are largely limited to multi-label classification, which restricts their development and application. To address these problems, a reader emotion prediction method that fuses an attention mechanism with a convolutional-gated recurrent neural network is proposed. The method first splits a document into sentences and uses a convolutional neural network to extract n-gram features of different granularities from each sentence's word vectors, building sentence-level representations. A gated recurrent neural network then integrates these sentence features sequentially, and an attention mechanism adaptively perceives contextual information to select the textual features that influence reader emotion, yielding a document-level representation. Finally, softmax regression predicts a fine-grained reader emotion distribution. Experimental results on the Yahoo! News reader sentiment analysis corpus demonstrate that the proposed method achieves better accuracy than state-of-the-art methods.
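To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of such an architecture (CNN sentence encoder, GRU over sentences, additive attention, softmax over emotion categories). The class name AttentionCnnGru, the layer sizes, kernel widths, and number of emotion categories are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch with assumed hyperparameters; not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCnnGru(nn.Module):
    def __init__(self, vocab_size, emb_dim=200, num_filters=100,
                 kernel_sizes=(2, 3, 4), gru_dim=128, num_emotions=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # One 1-D convolution per kernel size captures n-grams of different granularities.
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, num_filters, k, padding=k - 1) for k in kernel_sizes)
        self.gru = nn.GRU(num_filters * len(kernel_sizes), gru_dim,
                          batch_first=True, bidirectional=True)
        # Additive attention over the sentence-level GRU states.
        self.att_proj = nn.Linear(2 * gru_dim, 2 * gru_dim)
        self.att_query = nn.Linear(2 * gru_dim, 1, bias=False)
        self.out = nn.Linear(2 * gru_dim, num_emotions)

    def forward(self, docs):
        # docs: (batch, n_sentences, n_words) tensor of word indices
        b, s, w = docs.shape
        x = self.embed(docs.view(b * s, w)).transpose(1, 2)      # (b*s, emb_dim, w)
        # Max-over-time pooling of each convolution yields one vector per sentence.
        feats = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        sents = torch.cat(feats, dim=1).view(b, s, -1)           # (b, s, sent_dim)
        h, _ = self.gru(sents)                                   # (b, s, 2*gru_dim)
        scores = self.att_query(torch.tanh(self.att_proj(h)))    # (b, s, 1)
        alpha = torch.softmax(scores, dim=1)                     # attention weights over sentences
        doc_vec = (alpha * h).sum(dim=1)                         # (b, 2*gru_dim)
        # Log-probabilities over emotion categories (a predicted emotion distribution).
        return torch.log_softmax(self.out(doc_vec), dim=-1)
```

Because the target is an emotion distribution rather than a single label, the log-probabilities returned by forward could be fitted with torch.nn.KLDivLoss(reduction='batchmean') against the observed reader-vote distribution.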
Authors: ZHANG Qi (张琦), PENG Zhiping (彭志平) (School of Computers, Guangdong University of Technology, Guangzhou 510006, China; School of Computer and Electronic Information, Guangdong University of Petrochemical Technology, Maoming, Guangdong 525000, China)
Source: Computer Engineering and Applications (《计算机工程与应用》, CSCD, Peking University Core Journal), 2018, No. 13, pp. 168-174 (7 pages)
Funding: National Natural Science Foundation of China (No. 61272382, No. 61672174); Guangdong Province Science and Technology Plan Project (No. 2015B020233019)
Keywords: sentiment analysis; reader emotion prediction; convolutional neural network; gated recurrent neural network; attention mechanism
Related literature

References: 3

Secondary references: 34


Co-cited documents: 95

Co-citing documents: 33

Citing documents: 6

Secondary citing documents: 51
