Abstract
The regularized extreme learning machine (RELM) is a single-hidden-layer feedforward neural network (SLFN). Unlike traditional neural network algorithms, the RELM randomly sets the input weights and the biases of the hidden nodes and determines the output weights analytically; in addition, the introduction of a regularization factor improves the generalization ability of the model. To address the high dimensionality and multi-class nature of text data, we propose a novel RELM based on a fast auto-encoder (FA-RELM). A fast auto-encoder neural network derived from the RELM is used for unsupervised feature learning, and the RELM then classifies the data after feature extraction. Experimental results show that FA-RELM achieves faster learning speed and better classification accuracy.
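To make the abstract's description concrete, the following is a minimal NumPy sketch of the two stages it mentions: a regularized ELM whose output weights are obtained in closed form by a regularized least-squares solve, and an ELM-style auto-encoder used for unsupervised feature learning before classification. Function names, the sigmoid activation, and the exact auto-encoder formulation are illustrative assumptions, not the paper's actual FA-RELM implementation.

```python
import numpy as np

def _hidden(X, W, b):
    """Sigmoid hidden-layer output for randomly chosen weights W and biases b."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def train_relm(X, T, n_hidden=200, C=1.0, seed=None):
    """Regularized ELM: random hidden layer, closed-form ridge output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # input weights, never trained
    b = rng.standard_normal(n_hidden)                 # hidden biases, never trained
    H = _hidden(X, W, b)
    # beta = (H^T H + I/C)^(-1) H^T T; the regularization factor C controls generalization.
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def predict_relm(X, W, b, beta):
    return np.argmax(_hidden(X, W, b) @ beta, axis=1)

def fa_relm_features(X, n_hidden=100, C=1.0, seed=None):
    """ELM-style auto-encoder (an assumed reading of the 'fast auto-encoder'):
    reconstruct X from a random hidden layer, then reuse the learned output
    weights as a projection that compresses high-dimensional text features."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = _hidden(X, W, b)
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)  # targets = inputs
    return X @ beta.T                                 # learned low-dimensional features

if __name__ == "__main__":
    # Toy stand-in for a high-dimensional, multi-class text data set.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((300, 50))
    y = rng.integers(0, 3, size=300)
    T = np.eye(3)[y]                                  # one-hot class targets
    Z = fa_relm_features(X, n_hidden=30, seed=0)      # unsupervised feature learning
    W, b, beta = train_relm(Z, T, n_hidden=60, C=10.0, seed=1)
    print("train accuracy:", np.mean(predict_relm(Z, W, b, beta) == y))
```

The single linear solve replaces iterative backpropagation, which is why ELM-style training is fast, and the factor C balances fitting the targets against the norm of the output weights, which is the source of the improved generalization the abstract refers to.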
Source
《计算机工程与科学》
CSCD
Peking University Core Journals (北大核心)
2016, No. 5, pp. 871-876 (6 pages)
Computer Engineering & Science
Funding
National Natural Science Foundation of China (61027005)
Keywords
text classification
feature extraction
auto-encoder
regularized extreme learning machine