Abstract
A new fast error back-propagation (BP) learning algorithm is proposed. Starting from the weight-adjustment equation of the neural network, the algorithm accelerates convergence by avoiding premature saturation and by enlarging the magnitude of the weight adjustments. Its effectiveness is verified through simulations on two parity problems and a function-approximation problem. The results show that, in terms of convergence speed, the proposed algorithm significantly outperforms the standard BP algorithm, BP with a momentum term, and several other improved algorithms.
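The abstract names the two mechanisms (avoiding premature saturation and enlarging the weight adjustments) but does not give the exact update rule, so the following is a minimal illustrative sketch rather than the paper's method. It trains a small network on the 2-bit parity (XOR) problem, one of the benchmark types mentioned above, using two common stand-ins for those ideas: a small constant added to the sigmoid derivative so the gradient does not vanish in the saturated regions, and a gain factor that amplifies each weight adjustment. All names and constants (FLAT_SPOT, GAIN, the 2-2-1 topology) are assumptions for illustration only.

```python
# Illustrative sketch only: the paper's exact update rule is not given in the
# abstract, so this uses common stand-ins for the two ideas it names:
# (1) avoid premature saturation by adding a small constant to the sigmoid
#     derivative, so the gradient never collapses at the flat ends;
# (2) amplify the weight-adjustment magnitude with a gain factor.
# FLAT_SPOT, GAIN, and the 2-2-1 topology are assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 2-bit parity (XOR) problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# small 2-2-1 network
W1 = rng.uniform(-0.5, 0.5, (2, 2)); b1 = np.zeros(2)
W2 = rng.uniform(-0.5, 0.5, (2, 1)); b2 = np.zeros(1)

ETA = 0.5        # base learning rate
FLAT_SPOT = 0.1  # assumed constant added to the derivative (anti-saturation)
GAIN = 2.0       # assumed amplification of the adjustment magnitude

for epoch in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    err = T - y
    mse = np.mean(err ** 2)
    if mse < 1e-3:
        break
    # backward pass with the modified derivative f'(x) + FLAT_SPOT
    dy = err * (y * (1 - y) + FLAT_SPOT)
    dh = (dy @ W2.T) * (h * (1 - h) + FLAT_SPOT)
    # amplified weight adjustment
    W2 += GAIN * ETA * h.T @ dy;  b2 += GAIN * ETA * dy.sum(axis=0)
    W1 += GAIN * ETA * X.T @ dh;  b1 += GAIN * ETA * dh.sum(axis=0)

print(f"stopped at epoch {epoch}, mse={mse:.4f}")
```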
Source
《同济大学学报(自然科学版)》
EI
CAS
CSCD
Peking University Core Journals
2004, No. 8, pp. 1092-1095 (4 pages)
Journal of Tongji University (Natural Science)
Funding
Supported by the National Natural Science Foundation of China (Grant No. 60135010)
Keywords
neural network
back-propagation
learning algorithm