Abstract
The BP algorithm minimizes the mean square error of a neural network by iteratively processing a set of training samples: the actual output for each sample is compared with its desired output, and the network's weights and thresholds are adjusted accordingly. The effectiveness of the BP algorithm depends to some extent on the choice of learning rate; because the learning rate of the standard BP algorithm is fixed, convergence is slow and the algorithm is easily trapped in local minima. To address this problem, analysis of the error surface of a BP neural network shows that a larger learning rate is needed in flat regions of the error surface and a smaller learning rate in regions where the error changes sharply; adapting the rate in this way speeds up convergence and helps avoid local minima. Experimental results show that the adaptive-learning-rate BP algorithm converges significantly faster than the standard BP algorithm with a fixed learning rate.
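The paper's exact adaptation rule is not reproduced in this abstract. A minimal sketch of the general idea, under the common heuristic that a falling error indicates a flat region (grow the learning rate) and a rising error indicates a steep region (shrink it); the network size, data set, and all numeric factors below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a small non-linearly-separable problem (assumed example).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, sizes 2 -> 4 -> 1 (illustrative choice).
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

def forward(X):
    H = sigmoid(X @ W1 + b1)          # hidden-layer activations
    return H, sigmoid(H @ W2 + b2)    # network output

def mse(Y, T):
    return float(np.mean((Y - T) ** 2))

lr = 0.5                      # initial learning rate (assumed value)
grow, shrink = 1.05, 0.7      # adaptation factors (assumed values)
err0 = mse(forward(X)[1], T)  # error before training
prev_err = err0

for epoch in range(5000):
    H, Y = forward(X)
    # Backpropagate the MSE gradient through both sigmoid layers.
    dY = (Y - T) * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)
    W2 -= lr * (H.T @ dY); b2 -= lr * dY.sum(0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(0)
    err = mse(forward(X)[1], T)
    if err < prev_err:
        lr = min(lr * grow, 2.0)     # error fell: flat region, speed up
    elif err > prev_err * 1.04:
        lr = max(lr * shrink, 1e-3)  # error rose: steep region, slow down
    prev_err = err

print(f"initial MSE {err0:.4f} -> final MSE {err:.4f}, final lr {lr:.3f}")
```

The step-rejection variants of this heuristic additionally undo a weight update whenever the error grows beyond the tolerance; the sketch above keeps every step and only rescales the learning rate.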
Source
Popular Science & Technology (《大众科技》)
2011, No. 12, pp. 16-18 (3 pages)
Funding
Supported by the Natural Science Foundation of South-Central University for Nationalities (No. YZQ09003)
Keywords
neural network
back propagation algorithm
learning rate
convergence speed