Abstract
Standard backpropagation (BP), which uses the steepest gradient descent method to minimize the mean squared error, can suffer from two major problems: (1) it may become trapped in a local minimum rather than converging to the global minimum, i.e., it fails to converge; (2) its convergence rate is slow. From the perspective of training algorithms, this paper compares the convergence and convergence rate of standard BP, the momentum algorithm, the variable learning rate algorithm, and the Levenberg-Marquardt algorithm, and verifies the comparison with Matlab simulations of each method.
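The paper's experiments are in Matlab and are not reproduced here; as a rough illustration of the kind of comparison described, the following is a minimal Python sketch (not the paper's code) contrasting plain steepest descent with the momentum variant on an ill-conditioned quadratic loss, a common stand-in for an MSE-like error surface. The quadratic, learning rate, and momentum coefficient are all assumptions chosen for the demonstration.

```python
import numpy as np

def loss_and_grad(w):
    # Hypothetical ill-conditioned quadratic bowl f(w) = 0.5 * w^T A w,
    # standing in for a mean-squared-error surface with uneven curvature.
    A = np.array([[1.0, 0.0],
                  [0.0, 50.0]])
    return 0.5 * w @ A @ w, A @ w  # (loss, gradient)

def train(momentum=0.0, lr=0.02, steps=200):
    # momentum=0.0 reduces the update to plain steepest gradient descent.
    w = np.array([5.0, 5.0])
    v = np.zeros_like(w)
    for _ in range(steps):
        _, grad = loss_and_grad(w)
        v = momentum * v - lr * grad  # velocity accumulates past gradients
        w = w + v
    return loss_and_grad(w)[0]

plain = train(momentum=0.0)   # standard steepest descent
mom = train(momentum=0.9)     # momentum algorithm
```

With these (illustrative) settings, the momentum run reaches a lower loss in the same number of steps, mirroring the convergence-rate comparison the abstract describes; Levenberg-Marquardt, which uses approximate second-order information, is not sketched here.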
Source
Computer and Modernization (《计算机与现代化》), 2005, No. 7, pp. 9-12 (4 pages)