Research to Improve the Generalization Ability of Neural Networks
(提高神经网络泛化能力的研究)
Cited by: 4
Abstract: From the perspective of fuzzy theory, a new approach to improving the generalization ability of neural networks, the "Shrinking-Magnifying Approach" (SMA), is presented. By shrinking or magnifying (i.e., scaling) the input vectors, the approach reduces, or blurs, the difference between the training samples and new patterns, thereby improving the network's generalization ability. An accompanying algorithm, the α-algorithm, finds an appropriate scaling factor α and thus yields a new network with stronger generalization ability. A number of experiments demonstrate the effectiveness of SMA and the α-algorithm, and a preliminary theoretical analysis and discussion of its working principle are given.
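The abstract describes SMA only at a high level, so the sketch below is one plausible reading rather than the paper's actual method: incoming input vectors are multiplied by a scalar factor α, and a simple grid search, standing in for the paper's α-algorithm (whose details are not reproduced here), picks the α that minimizes the misclassification rate on held-out data. All names (alpha_search, misclassification_rate, the nearest-centroid stand-in for a trained network) and the toy data are hypothetical.

```python
import numpy as np

def misclassification_rate(model, X, y):
    """Fraction of samples the classifier labels incorrectly."""
    return float(np.mean(model(X) != y))

def alpha_search(model, X_val, y_val, alphas):
    """Grid-search stand-in for the paper's alpha-algorithm: return the
    scaling factor that minimizes the misclassification rate on held-out
    data. (The actual alpha-algorithm is not detailed in the abstract.)"""
    best_alpha, best_err = 1.0, float("inf")
    for a in alphas:
        err = misclassification_rate(model, a * X_val, y_val)
        if err < best_err:
            best_alpha, best_err = a, err
    return best_alpha, best_err

# Toy demonstration with a nearest-centroid classifier standing in for
# a trained network.
rng = np.random.default_rng(0)
y_train = np.arange(100) % 2
X_train = rng.normal(size=(100, 2)) + 2.0 * y_train[:, None]  # class 1 centered near (2, 2)
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def model(X):
    # Label each point by its nearest class centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Validation inputs drawn with a deliberate scale shift relative to training,
# mimicking new patterns that differ systematically from the training samples.
y_val = np.arange(50) % 2
X_val = 1.5 * (rng.normal(size=(50, 2)) + 2.0 * y_val[:, None])

alpha, err = alpha_search(model, X_val, y_val, alphas=np.linspace(0.2, 2.0, 19))
print(f"best alpha = {alpha:.2f}, misclassification rate = {err:.2%}")
```

Because the validation inputs in this toy setup are inflated by a factor of 1.5, the search typically settles on a "shrinking" factor α below 1, which maps new patterns back toward the region the network was trained on; this is the intuition behind SMA's scaling of input vectors.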
Source: Computer Engineering and Applications (《计算机工程与应用》; CSCD; Peking University core journal list), 2006, No. 4, pp. 38-41 (4 pages).
Funding: Supported by the Natural Science Foundation of Henan Province (No. 0511012500) and by a project of the Department of Science and Technology of the Ministry of Education: research on fuzzy neural control based on tabu search.
Keywords: neural network, generalization, misclassification rate, fuzzy theory

References (8)

  • 1. M. T. Hagan, H. B. Demuth, M. Beale. Neural Network Design. Beijing: China Machine Press / CITIC Publishing House, 2002.
  • 2. W. S. Sarle. Stopped training and other remedies for overfitting. In: Proceedings of the 27th Symposium on the Interface, 1995.
  • 3. A. S. Weigend, D. E. Rumelhart, B. A. Huberman. Generalization by weight elimination with application to forecasting. In: R. P. Lippmann, J. E. Moody, D. S. Touretzky (eds.), Advances in Neural Information Processing Systems 3. San Mateo, CA: Morgan Kaufmann, 1991: 575-582.
  • 4. Wu Yan, Wang Shoujue. A new algorithm for improving the learning performance of neural networks through feedback. Journal of Computer Research and Development (计算机研究与发展), 2004, 41(9): 1488-1492.
  • 5. H. Ishibuchi, M. Nii. Fuzzification of input vector for improving the generalization ability of neural networks. In: The Int'l Joint Conf. on Neural Networks, Anchorage, Alaska, 1998.
  • 6. D. Opitz, R. Maclin. Popular ensemble methods: an empirical study. Journal of Artificial Intelligence Research, 1999, 11: 169-198.
  • 7. J. S. R. Jang, C. T. Sun, E. Mizutani. Neuro-Fuzzy and Soft Computing. Upper Saddle River, NJ: Prentice-Hall, 1997.
  • 8. D. F. Specht. Probabilistic neural networks. Neural Networks, 1990, 3(1): 109-118.

Secondary References (11) (references cited by the works listed above)

  • 1. E. Trentin. Networks with trainable amplitude of activation functions. Neural Networks, 2001, 14(4-5): 471-493.
  • 2. K. Eom, K. Jung, H. Sirisena. Performance improvement of backpropagation algorithm by automatic activation function gain tuning using fuzzy logic. Neurocomputing, 2003, 50: 439-460.
  • 3. A. Gupta, S. M. Lam. Weight decay backpropagation for noisy data. Neural Networks, 1998, 11(6): 1127-1137.
  • 4. Y. H. Zweiri, J. F. Whidborne, L. D. Seneviratne. A three-term backpropagation algorithm. Neurocomputing, 2003, 50: 305-318.
  • 5. B. L. Lu, H. Kita, Y. Nishikawa. Inverting feedforward neural networks using linear and nonlinear programming. IEEE Trans. on Neural Networks, 1999, 10(6): 1271-1290.
  • 6. J. N. Hwang, J. J. Choi, S. Oh, et al. Query-based learning applied to partially trained multilayer perceptrons. IEEE Trans. on Neural Networks, 1991, 2(1): 131-136.
  • 7. H. Ishibuchi, M. Nii. Fuzzification of input vector for improving the generalization ability of neural networks. The Int'l Joint Conf. on Neural Networks, Anchorage, Alaska, 1998.
  • 8. X. H. Yu, G. A. Chen. Efficient backpropagation learning using optimal learning rate and momentum. Neural Networks, 1997, 10(3): 517-527.
  • 9. G. D. Magoulas, V. P. Plagianakos, M. N. Vrahatis. Globally convergent algorithms with local learning rates. IEEE Trans. on Neural Networks, 2002, 13(3): 774-779.
  • 10. J. Y. F. Yam, T. W. S. Chow. A weight initialization method for improving training speed in feedforward neural network. Neurocomputing, 2000, 30(1-4): 219-232.

Co-citing literature: 14
Co-cited literature: 42
Citing articles: 4
Secondary citing articles: 11
