
Approximation-Performance and Global-Convergence Analysis of Basis-Function Feedforward Neural Network

(Cited: 7)
Abstract: This paper constructs a new type of basis-function feedforward neural network. Based on the gradient-descent method, a weight-update formula is derived for the network, and the resulting iterative sequence is proven to converge globally to the network's optimal weights. From this result, a one-step formula for computing the optimal weights via the pseudoinverse is obtained, referred to as the weights-direct-determination method. Theoretical analysis shows that the proposed network possesses optimal mean-square approximation capability and global convergence, and that the weights-direct-determination method avoids difficulties that conventional BP neural networks struggle to resolve, such as lengthy iterative computation, entrapment in local minima, and the difficulty of selecting a learning rate. Simulation results show that, compared with various improved BP algorithms, the proposed network computes faster and achieves higher precision, and it also filters noise well.
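The weights-direct-determination idea summarized in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the basis functions (powers of x), the target function, and all parameter values are assumptions chosen for demonstration. For a network whose output is a linear combination of fixed basis functions, minimizing the mean-square error is a linear least-squares problem, so the optimal weights can be computed in one step from the pseudoinverse of the basis matrix, matching the fixed point of the gradient-descent iteration.

```python
import numpy as np

def basis_matrix(x, n_basis):
    """Evaluate the assumed basis functions phi_k(x) = x**k at each sample."""
    return np.vander(x, N=n_basis, increasing=True)  # shape (m, n_basis)

def direct_weights(x, y, n_basis):
    """Weights-direct-determination: one-step optimal weights w = pinv(Phi) @ y."""
    return np.linalg.pinv(basis_matrix(x, n_basis)) @ y

def gd_weights(x, y, n_basis, lr=0.1, steps=20000):
    """Gradient-descent iteration on the mean-square error, for comparison."""
    phi = basis_matrix(x, n_basis)
    w = np.zeros(n_basis)
    for _ in range(steps):
        # Gradient of (1/2m) * ||phi @ w - y||^2 with respect to w
        w -= lr * phi.T @ (phi @ w - y) / len(y)
    return w

x = np.linspace(-1.0, 1.0, 50)
y = np.sin(2.0 * x)              # assumed target function to approximate
w_direct = direct_weights(x, y, 4)
w_iter = gd_weights(x, y, 4)
# Both methods recover the same optimal weights, but the direct method
# needs no learning rate and no iteration.
```

Because the mean-square error is quadratic in the weights, gradient descent converges globally to the same least-squares solution that the pseudoinverse yields in a single step, which is the global-convergence property the abstract refers to.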
Source: Modern Computer (《现代计算机》), 2009, No. 2, pp. 4-8 (5 pages)
Funding: National Natural Science Foundation of China (Nos. 60643004, 60775050); Sun Yat-sen University research start-up fund; key reserve project funding
Keywords: basis-function neural network; weights direct determination; global convergence; approximation performance

