Abstract
In neural network training, combining the recursive least squares (RLS) algorithm with a regularization factor both improves the generalization ability of the network and provides robustness against noise in the training samples. However, when the network is large, the per-iteration computational complexity and storage requirements of this algorithm are substantial. This paper applies the RLS algorithm with a regularization factor to multilayer feedforward neural networks built from multi-output neuron models. Simulation results show that this method can greatly simplify the network structure and reduce the per-iteration computational complexity and storage requirements.
Recursive least squares (RLS)-based algorithms are a class of fast online training algorithms for multilayer feedforward neural networks (MFNNs). A regularizer can improve the generalization of the trained networks, and when RLS methods are used together with a regularizer, both the generalization ability and the convergence speed improve. However, this approach achieves its better performance at the expense of much greater computational and storage requirements. In this paper, RLS with a regularizer is used to train multi-output MFNNs (MO-MFNNs). Several simulations show that the modified method reduces the computational complexity and storage requirements while improving the generalization ability of the networks.
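The paper's algorithm is not reproduced in this record, but the core idea of RLS with an L2 regularizer can be illustrated for a single linear output layer. A minimal sketch, assuming the standard rank-1 RLS recursion and using the known equivalence between initializing the inverse-covariance matrix as P₀ = (1/α)·I and penalizing α‖θ‖² in the batch objective (the function name and parameters below are illustrative, not from the paper):

```python
import numpy as np

def rls_regularized(X, y, alpha=0.1):
    """Recursive least squares with an L2 regularizer.

    Initializing P0 = (1/alpha) * I makes the recursion converge to the
    ridge solution (alpha*I + X^T X)^{-1} X^T y, processed one sample
    at a time with O(n^2) work per step instead of a batch solve.
    """
    n_features = X.shape[1]
    theta = np.zeros(n_features)       # current weight estimate
    P = np.eye(n_features) / alpha     # regularized inverse covariance
    for x, t in zip(X, y):
        Px = P @ x
        k = Px / (1.0 + x @ Px)        # gain vector
        theta = theta + k * (t - x @ theta)  # correct by prediction error
        P = P - np.outer(k, Px)        # rank-1 downdate of P
    return theta

# Toy usage: recover the weights of a noiseless linear map.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
w = rls_regularized(X, y, alpha=1e-6)  # tiny alpha -> negligible bias
```

The per-step cost is dominated by updating the n×n matrix P, which is the computational and storage burden the abstract refers to when the network (and hence n) is large.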
Source
《计算机应用与软件》 (Computer Applications and Software)
CSCD; Peking University Core Journal (北大核心)
2005, No. 11, pp. 102-104 (3 pages)