Abstract
When a gradient-based algorithm is used to train a recurrent multilayer perceptron (recurrent MLP, RMLP), the dynamic derivatives of the output-layer state vector with respect to all adjustable parameters must be computed first. The dynamic-derivative formulas given in [Puskorius et al., 1992, 1994] suffer from a heavy computational load and large storage requirements. This paper presents a new method for computing the dynamic derivatives that, compared with the published method, significantly reduces both computation and storage. The decoupled extended Kalman filter (DEKF) is an efficient RMLP training algorithm that combines gradient information with Kalman filtering. Computing the dynamic derivatives with the proposed method and with the published method respectively, and then adjusting the network weights with DEKF, simulations show that the networks trained in the two cases have the same performance.
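The record does not reproduce the paper's reduced-cost formulas. For orientation only, a standard RTRL-style forward recursion of the kind used in [Puskorius et al., 1994] can be written, under assumed notation ($x$ the recurrent-layer state, $u$ the input, $W_r$ and $W_u$ the recurrent and input weight matrices, $\varphi$ the activation), as

$$
x(t) = \varphi\bigl(W_r\,x(t-1) + W_u\,u(t)\bigr), \qquad
\frac{\partial x(t)}{\partial w}
= \operatorname{diag}\!\bigl(\varphi'(\mathrm{net}(t))\bigr)
\left( W_r\,\frac{\partial x(t-1)}{\partial w}
+ \frac{\partial\,\mathrm{net}(t)}{\partial w}\bigg|_{\text{explicit}} \right),
\qquad \frac{\partial x(0)}{\partial w} = 0 .
$$

The recursion must be carried for every trainable weight $w$, which is the source of the computational and storage burden the paper addresses.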
The dynamic derivatives of the output-layer state vector with respect to all weights and biases must be computed before a derivative-based training algorithm can be applied to an RMLP. The dynamic-derivative formulas in [Puskorius et al., 1994] suffer from heavy computation and a high buffer demand. This paper presents new dynamic-derivative formulas that require less computation and less storage than the former. DEKF is an efficient RMLP training algorithm that combines derivative information with the Kalman filtering procedure. Computing the dynamic derivatives with the new formulas and with the former formulas respectively, and then adjusting the network weights by DEKF, simulation indicates that the two sets of trained network weights have comparable performance.
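To illustrate how DEKF combines such derivatives with a Kalman filtering update, the following is a minimal Python sketch of one standard decoupled-EKF step; the function name, the grouping of weights, and the parameters eta and q are assumptions for illustration, not the paper's implementation.

import numpy as np

def dekf_step(groups, H, err, eta=1.0, q=1e-6):
    """One standard DEKF update over decoupled weight groups (illustrative sketch).

    groups : list of dicts with 'w' (shape (n_i,)) and 'P' (shape (n_i, n_i))
    H      : list of dynamic-derivative matrices, H[i] of shape (n_i, n_out)
    err    : target minus network output, shape (n_out,)
    eta    : learning-rate parameter (1/eta scales the measurement noise)
    q      : process-noise level added to each covariance (assumed value)
    """
    n_out = err.shape[0]
    # Global scaling matrix shared by all weight groups
    A = np.eye(n_out) / eta
    for g, Hi in zip(groups, H):
        A += Hi.T @ g['P'] @ Hi
    A = np.linalg.inv(A)
    # Per-group Kalman gain, weight update, and covariance update
    for g, Hi in zip(groups, H):
        K = g['P'] @ Hi @ A
        g['w'] = g['w'] + K @ err
        g['P'] = g['P'] - K @ Hi.T @ g['P'] + q * np.eye(g['P'].shape[0])
    return groups

Decoupling the covariance into per-group blocks is what keeps the EKF cost quadratic in the size of the largest group rather than in the total number of weights.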
Source
《信号处理》
CSCD
2004, No. 5, pp. 456-460 (5 pages)
Journal of Signal Processing