
RLS Algorithm with a Regularization Factor for MFNN with a Multi-Output Neuron Model (cited by: 1)

A REGULARIZER FOR RLS ALGORITHM IN MFNN WITH MULTIOUTPUT NEURAL MODEL
Abstract: Recursive least squares (RLS) based algorithms are a class of fast online training algorithms for multilayer feedforward neural networks (MFNN). Combining RLS with a regularization factor improves the generalization ability and convergence speed of the trained network and makes training robust to noise in the learning samples. However, when the network is large, this better performance comes at the cost of heavy per-iteration computational complexity and storage requirements. This paper applies the RLS algorithm with a regularization factor to an MFNN built from multi-output neuron models (MO-MFNN). Simulation results show that the proposed method greatly simplifies the network structure, reduces the per-iteration computational complexity and storage requirements, and improves the generalization ability of the network.
Affiliation: College of Science, China Three Gorges University (三峡大学理学院)
Source: Computer Applications and Software (《计算机应用与软件》, CSCD, Peking University core journal), 2005, No. 11, pp. 102-104 (3 pages)
Keywords: Neural network; Multi-output neuron model; RLS algorithm; Regularization factor (regularizer); Generalization ability
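
The abstract above refers to a regularized recursive least squares recursion used during training. As a rough illustration only (not the paper's multi-output MFNN formulation, which is not reproduced here), the sketch below applies a standard RLS rule with a forgetting factor to a single linear neuron; the regularization factor enters through the initialization of the inverse correlation matrix, which acts as an initial weight-decay (ridge) penalty. The function name rls_train and its parameters are illustrative assumptions, not identifiers from the paper.

import numpy as np

def rls_train(X, d, lam=0.99, kappa=0.01):
    # Minimal regularized-RLS sketch for a single linear neuron (illustrative only).
    # X     : (n_samples, n_inputs) input patterns
    # d     : (n_samples,) desired outputs
    # lam   : forgetting factor of the RLS recursion
    # kappa : assumed regularization factor; it enters through P(0) = I / kappa,
    #         which plays the role of an initial weight-decay penalty on w
    n_inputs = X.shape[1]
    w = np.zeros(n_inputs)                # weight vector of the neuron
    P = np.eye(n_inputs) / kappa          # inverse of the regularized correlation matrix

    for x, target in zip(X, d):
        e = target - w @ x                # a priori output error
        Px = P @ x
        k = Px / (lam + x @ Px)           # gain vector
        w = w + k * e                     # recursive weight update
        P = (P - np.outer(k, Px)) / lam   # update the inverse correlation matrix
    return w

# Usage: recover a noisy linear mapping.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
d = X @ true_w + 0.1 * rng.normal(size=200)
print(rls_train(X, d))                    # should be close to true_w

In the paper's setting, a recursion of this kind would presumably be applied to the weights of each multi-output neuron of the MFNN; according to the abstract, the savings in per-iteration computation and storage come from the simpler network structure that the multi-output neuron model allows.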
