
A New Approach to Improve the Generalization Ability of Neural Network (Cited by: 5)

Abstract: A new approach to improving the generalization ability of neural networks, the "Shrinking-Magnifying Approach" (SMA), is presented from the perspective of fuzzy theory. By shrinking or magnifying the input vector, the approach reduces or blurs the difference between training samples and new patterns, thereby improving the network's generalization ability. A new algorithm, the α-algorithm, is proposed to find an appropriate scaling factor α and thus obtain a network with stronger generalization ability. A number of simulation experiments demonstrate the effectiveness of SMA and the α-algorithm, and the method is analyzed and discussed theoretically. The experiments and analyses show that the approach is simple and reliable, and that it works well for many neural networks and pattern-classification problems.
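The abstract's core idea, rescaling input vectors by a factor α before classification and searching for the α that yields the best held-out accuracy, can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's actual α-algorithm: the function names `scale_inputs` and `choose_alpha`, the plain grid search, and the toy norm-threshold classifier in the usage example are all introduced here for illustration.

```python
import numpy as np

def scale_inputs(X, alpha):
    """Shrink (alpha < 1) or magnify (alpha > 1) each input vector."""
    return alpha * np.asarray(X, dtype=float)

def choose_alpha(predict, X_val, y_val, candidates):
    """Grid-search a scaling factor that maximizes validation accuracy.

    `predict` is any trained classifier's prediction function; the
    candidate grid stands in for the paper's α-search procedure.
    """
    best_alpha, best_acc = 1.0, -1.0
    for alpha in candidates:
        acc = float(np.mean(predict(scale_inputs(X_val, alpha)) == y_val))
        if acc > best_acc:
            best_alpha, best_acc = alpha, acc
    return best_alpha, best_acc

if __name__ == "__main__":
    # Toy classifier: label 1 if the vector's norm exceeds 1.0.
    predict = lambda X: (np.linalg.norm(X, axis=1) > 1.0).astype(int)
    X_val = np.array([[2.0, 0.0], [0.2, 0.0]])
    y_val = np.array([0, 0])
    # Shrinking the inputs (alpha = 0.4) moves both points below the
    # threshold and raises validation accuracy from 0.5 to 1.0.
    print(choose_alpha(predict, X_val, y_val, [0.4, 1.0]))
```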
Source: Computer Science (《计算机科学》), CSCD, Peking University Core Journal, 2006, No. 2, pp. 201-204 (4 pages)
Funding: Natural Science Foundation of Henan Province (0511012500); Ministry of Education Science and Technology Department project "Research on Fuzzy Neural Control Based on Tabu Search".
Keywords: Neural network, Generalization, Misclassification rate, Fuzzy theory

References (8)

  • 1 Sarle W S. Stopped training and other remedies for overfitting. In: Proc. of the 27th Symposium on the Interface, 1995. Cited by 1
  • 2 Hinton G E. Connectionist learning procedures. Artificial Intelligence, 1989, 40: 185-234. Cited by 1
  • 3 Wu Yan, Wang Shoujue. A new algorithm for improving neural network learning performance through feedback. Journal of Computer Research and Development (计算机研究与发展), 2004, 41(9): 1488-1492. Cited by 15
  • 4 Ishibuchi H, Nii M. Fuzzification of input vector for improving the generalization ability of neural networks. In: The Int'l Joint Conf. on Neural Networks, Anchorage, Alaska, 1998. Cited by 1
  • 5 Hansen L K, Salamon P. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990, 12(10): 993-1001. Cited by 1
  • 6 Specht D F. Probabilistic neural networks. Neural Networks, 1990, 3(1): 109-118. Cited by 1
  • 7 Jang J S R, Sun C T, Mizutani E. Neuro-Fuzzy and Soft Computing. Upper Saddle River, NJ: Prentice-Hall, 1997. Cited by 1
  • 8 Feng Naiqin. Research on the fuzziness of fuzzy concepts. Pattern Recognition and Artificial Intelligence (模式识别与人工智能), 2002, 15(3): 290-294. Cited by 6

Secondary References (14)

  • 1 Li Deyi, Meng Haijun, Shi Xuemei. Membership clouds and membership cloud generators. Journal of Computer Research and Development (计算机研究与发展), 1995, 32(6): 15-20. Cited by 1231
  • 2 Trentin E. Networks with trainable amplitude of activation functions. Neural Networks, 2001, 14(4-5): 471-493. Cited by 1
  • 3 Eom K, Jung K, Sirisena H. Performance improvement of backpropagation algorithm by automatic activation function gain tuning using fuzzy logic. Neurocomputing, 2003, 50: 439-460. Cited by 1
  • 4 Gupta A, Lam S M. Weight decay backpropagation for noisy data. Neural Networks, 1998, 11(6): 1127-1137. Cited by 1
  • 5 Zweiri Y H, Whidborne J F, Seneviratne L D. A three-term backpropagation algorithm. Neurocomputing, 2003, 50: 305-318. Cited by 1
  • 6 Lu B L, Kita H, Nishikawa Y. Inverting feedforward neural networks using linear and nonlinear programming. IEEE Trans on Neural Networks, 1999, 10(6): 1271-1290. Cited by 1
  • 7 Hwang J N, Choi J J, Oh S, et al. Query based learning applied to partially trained multilayer perceptrons. IEEE Trans on Neural Networks, 1991, 2(1): 131-136. Cited by 1
  • 8 Ishibuchi H, Nii M. Fuzzification of input vector for improving the generalization ability of neural networks. In: The Int'l Joint Conf. on Neural Networks, Anchorage, Alaska, 1998. Cited by 1
  • 9 Yu X H, Chen G A. Efficient backpropagation learning using optimal learning rate and momentum. Neural Networks, 1997, 10(3): 517-527. Cited by 1
  • 10 Magoulas G D, Plagianakos V P, Vrahatis M N. Globally convergent algorithms with local learning rates. IEEE Trans on Neural Networks, 2002, 13(3): 774-779. Cited by 1

Co-cited references: 19

Also-cited references: 33

Citing articles: 5

Secondary citing articles: 6
