Abstract
To overcome the negative effect that the randomness of input weights and hidden biases has on the generalization ability of the extreme learning machine (ELM), a model selection method for ELM based on multi-objective optimization is proposed. The method formulates ELM model selection as a multi-objective global optimization problem, taking the generalization error and the norm of the output weights as the optimization objectives. To accelerate the optimization, a fast leave-one-out (LOO) error estimate of ELM is introduced as a surrogate for the generalization error, and, given the conflict between the two objectives, a multi-objective comprehensive learning particle swarm optimization algorithm is used to find non-dominated solutions. Simulation results on five UCI regression data sets show that, compared with conventional ELM model selection methods, the proposed method achieves lower prediction errors with more compact network structures.
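The two objectives named in the abstract can be evaluated cheaply for any candidate set of ELM input weights and biases: the fast LOO error can be obtained from the PRESS statistic of the linear output layer, and the second objective is simply the norm of the output weights. The sketch below is a minimal illustration of that evaluation, not the authors' implementation; the sigmoid activation, the pseudoinverse-based solution, and all function and variable names are assumptions, and the multi-objective particle swarm search itself is omitted.

```python
# Minimal sketch of the two ELM model-selection objectives described above:
# the fast leave-one-out (PRESS) error and the norm of the output weights.
import numpy as np

def elm_objectives(X, T, input_weights, biases):
    """Return (LOO mean squared error, ||beta||) for one candidate ELM."""
    # Hidden-layer output matrix with a sigmoid activation (assumed here)
    H = 1.0 / (1.0 + np.exp(-(X @ input_weights + biases)))
    # Output weights via the Moore-Penrose pseudoinverse: beta = H^+ T
    H_pinv = np.linalg.pinv(H)
    beta = H_pinv @ T
    # Diagonal of the hat matrix H H^+, needed for the PRESS statistic
    hat_diag = np.einsum('ij,ji->i', H, H_pinv)
    residuals = T - H @ beta
    # Fast LOO residuals: training residuals scaled by (1 - hat_ii)
    loo_residuals = residuals / (1.0 - hat_diag)[:, None]
    loo_mse = np.mean(loo_residuals ** 2)
    return loo_mse, np.linalg.norm(beta)

# Toy usage: 100 samples, 5 features, 20 hidden nodes (hypothetical sizes)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
T = rng.standard_normal((100, 1))
W = rng.uniform(-1.0, 1.0, (5, 20))
b = rng.uniform(-1.0, 1.0, 20)
print(elm_objectives(X, T, W, b))
```

In the method described by the abstract, a multi-objective particle swarm optimizer would treat each candidate (W, b) pair as a particle and use the two returned values to rank non-dominated solutions.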
Source
《计算机仿真》
CSCD
Peking University Core Journal
2014, No. 8, pp. 387-391 (5 pages)
Computer Simulation
Funding
National Natural Science Foundation of China (U1204609)
Henan Province Basic and Frontier Technology Research Program (132300410430)
Keywords
Extreme learning machine
Multi-objective optimization
Model selection