
Review of Text Classification Research Based on Deep Learning

Cited by: 1
Abstract: Compared with traditional machine learning models, deep learning models attempt to imitate the way humans learn by automatically extracting features from massive amounts of data. Text classification is an important application of natural language processing and plays a key role in text information processing. In the past few years, research on text classification using deep learning methods has surged and achieved good results. This study briefly introduces text classification methods based on traditional models and on deep learning, and reviews advanced text classification methods with a focus on deep-learning-based models. It introduces and summarizes recent research progress and achievements of deep learning models for text classification, and concludes with the development trends and remaining difficulties of deep learning in this field.
Authors: WANG Jiawei, YU Xiao (School of Cyber Science and Engineering, Southeast University, Nanjing 210096, China; School of Continuing Education, Southeast University, Nanjing 210096, China)
Source: Electronic Science and Technology (《电子科技》), 2024, No. 1, pp. 81-86 (6 pages)
Funding: China University Industry-University-Research Innovation Fund (2020ITA07007)
Keywords: deep learning; natural language processing; text classification; machine learning; neural networks; pre-trained model; attention mechanism; long short-term memory
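As an illustration of the automatic feature extraction described in the abstract, the following is a minimal sketch of an LSTM-based text classifier of the kind the keywords point to, assuming PyTorch; the vocabulary size, hyperparameters, and class count are illustrative placeholders rather than values taken from the paper.

# Minimal LSTM text classifier sketch (illustrative only, not from the paper).
# Assumes PyTorch; token IDs come from any tokenizer of your choice.
import torch
import torch.nn as nn


class LSTMTextClassifier(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=256, num_classes=4):
        super().__init__()
        # The embedding and LSTM layers learn feature representations directly
        # from raw token sequences, replacing hand-crafted features.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) tensor of integer token indices
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(embedded)         # h_n: (1, batch, hidden_dim)
        return self.fc(h_n[-1])                   # (batch, num_classes) logits


if __name__ == "__main__":
    model = LSTMTextClassifier()
    dummy_batch = torch.randint(1, 30000, (8, 50))  # 8 sequences of 50 token IDs
    print(model(dummy_batch).shape)                  # torch.Size([8, 4])

In practice such a recurrent encoder is often combined with pre-trained word embeddings, augmented with an attention mechanism, or replaced entirely by a pre-trained Transformer model, which corresponds to the other keywords listed above.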
