Abstract
Because conventional ensemble learning methods may fail when support vector machines (SVMs) are used as component learners, a selective SVM ensemble learning algorithm (SE-SVM) is proposed. The ξα error estimator is used to measure the generalization performance of each component SVM, and a diversity measure is introduced based on negative correlation learning theory. A recursive elimination procedure then selects a subset of component SVMs with high generalization performance and large mutual diversity to participate in the ensemble. Experiments on UCI data sets show that SE-SVM improves the classification accuracy of a single SVM by 0.4% on average, and outperforms the conventional Bagging ensemble method and the negative correlation ensemble method by 0.24% and 0.16%, respectively.
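The following is a minimal, illustrative sketch of the selection idea described in the abstract, not the authors' exact SE-SVM implementation: scikit-learn's SVC serves as the component learner, held-out validation accuracy stands in for the ξα generalization estimate, mean pairwise disagreement stands in for the negative-correlation diversity measure, and the helper names (train_bagged_svms, select_components, ensemble_predict) and the 0.5 weighting are illustrative assumptions.

```python
# Sketch of a selective SVM ensemble: Bagging-style training, then recursive
# elimination of components using accuracy (generalization proxy) and pairwise
# disagreement (diversity proxy), followed by majority-vote prediction.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC


def train_bagged_svms(X, y, n_components=15, rng=None):
    """Train component SVMs on bootstrap resamples (Bagging-style)."""
    if rng is None:
        rng = np.random.default_rng(0)
    models = []
    for _ in range(n_components):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample with replacement
        models.append(SVC(kernel="rbf", gamma="scale").fit(X[idx], y[idx]))
    return models


def select_components(models, X_val, y_val, keep=7):
    """Recursive elimination: repeatedly drop the component whose removal
    leaves the best combined accuracy + diversity score on validation data."""
    preds = np.array([m.predict(X_val) for m in models])
    selected = list(range(len(models)))

    def score(subset):
        # accuracy stands in for the generalization estimate,
        # mean pairwise disagreement stands in for the diversity measure
        acc = np.mean([np.mean(preds[i] == y_val) for i in subset])
        pairs = [(i, j) for i in subset for j in subset if i < j]
        div = np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs]) if pairs else 0.0
        return acc + 0.5 * div  # weighting is an illustrative choice

    while len(selected) > keep:
        drop = max(selected, key=lambda k: score([i for i in selected if i != k]))
        selected.remove(drop)
    return [models[i] for i in selected]


def ensemble_predict(models, X):
    """Majority vote over the selected component SVMs."""
    votes = np.array([m.predict(X) for m in models])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)


if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    X_fit, X_val, y_fit, y_val = train_test_split(X_train, y_train, test_size=0.3, random_state=0)
    components = train_bagged_svms(X_fit, y_fit)
    chosen = select_components(components, X_val, y_val)
    accuracy = np.mean(ensemble_predict(chosen, X_test) == y_test)
    print(f"selective SVM ensemble test accuracy: {accuracy:.3f}")
```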
Source
《西安交通大学学报》
EI
CAS
CSCD
Peking University Core Journals (北大核心)
2008, No. 10, pp. 1221-1225 (5 pages)
Journal of Xi'an Jiaotong University
Funding
National High Technology Research and Development Program of China (863 Program) project (2006AA09A102-11)
Key Program of the National Natural Science Foundation of China (40730424)
Keywords
generalization measurement
ensemble learning
negative correlation
support vector machine