Abstract
This paper proposes an improved K-nearest-neighbor (K-NN) classification algorithm. The algorithm first clusters the samples of each class in the training set, which both reduces the size of the training set and removes outliers, greatly improving the algorithm's speed and prediction accuracy and making it suitable for massive data sets. In addition, a neural network is used to compute a weight for each attribute according to its contribution to classification, and these attribute weights are applied in the nearest-neighbor computation, which further improves classification accuracy. Experimental results on several benchmark databases and real-world databases show that the algorithm is well suited to classifying large, complex databases.
This paper presents an improved K-NN algorithm. CURE clustering is carried out to select a subset of the training set, which reduces the volume of the training set and removes outliers, leading both to computational efficiency and to higher classification accuracy. In the algorithm, the weight of each feature is learned using a neural network. The feature weights are used in the nearest-neighbor distance computation so that the important features contribute more to the distance measure. Experiments on several UCI databases and practical data sets show the efficiency of the algorithm.
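The two-stage scheme described in the abstract can be sketched roughly as follows. This is only an illustration, not the authors' implementation: the prototype-selection step below uses simple per-class k-means centroids as a stand-in for CURE clustering, and the feature weight vector `w` is passed in directly rather than learned by a neural network as in the paper.

```python
import numpy as np

def class_prototypes(X, y, per_class=2, iters=10, seed=0):
    """Reduce the training set: cluster each class separately and keep
    only the cluster centroids as prototypes (stand-in for CURE)."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        k = min(per_class, len(Xc))
        centers = Xc[rng.choice(len(Xc), size=k, replace=False)]
        for _ in range(iters):  # plain k-means updates
            d = ((Xc[:, None] - centers[None]) ** 2).sum(-1)
            assign = d.argmin(1)
            for j in range(k):
                if (assign == j).any():
                    centers[j] = Xc[assign == j].mean(0)
        protos.append(centers)
        labels.extend([c] * k)
    return np.vstack(protos), np.array(labels)

def weighted_knn_predict(x, protos, labels, w, k=3):
    """Classify x by weighted Euclidean distance to the prototypes;
    w holds the per-feature importance weights (learned by a
    neural network in the paper)."""
    d = np.sqrt((w * (protos - x) ** 2).sum(1))
    nearest = labels[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]
```

Classifying against a handful of centroids instead of the full training set is what makes the method practical for large data sets; the weighting simply rescales each feature axis before the usual Euclidean distance.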
Source
《电子与信息学报》
EI
CSCD
PKU Core Journal
2005, No. 3, pp. 487-491 (5 pages)
Journal of Electronics & Information Technology
Funding
Supported by the National Natural Science Foundation of China (60275020) and the Hebei Provincial Education Commission Foundation (2002269)
Keywords
K-nearest neighbor, Clustering, Weight adjustment, Classification