Abstract
This paper proposes an optimized training scheme for associative-memory neural networks. We show that the basins of attraction of the sample attractors can be controlled to some extent by a pitfall-depth parameter, so that the fault tolerance of the network can be made as good as possible. Numerical simulations show that with this scheme the network capacity can reach α ≈ 1 (α = M/N, where N is the number of neurons and M is the number of stored samples) while still retaining good fault tolerance, clearly outperforming popular schemes such as the outer-product, orthogonalized outer-product, and pseudo-inverse matrix schemes. The symmetry and convergence of the trained networks are also discussed.
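The abstract does not spell out the training rule itself, so the following is only a minimal sketch of a perceptron-style margin rule from the same general family, not the authors' method: a hypothetical depth parameter `kappa` sets how strongly each stored pattern is stabilized, playing the role the abstract ascribes to the pitfall-depth parameter. All names (`train_margin`, `recall`, `kappa`) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def train_margin(patterns, kappa=1.0, lr=0.1, max_epochs=200):
    """Adjust weights until every stored pattern xi satisfies
    xi_i * (W @ xi)_i >= kappa at each neuron i (a stability margin)."""
    M, N = patterns.shape            # M stored samples, N neurons (alpha = M/N)
    W = np.zeros((N, N))
    for _ in range(max_epochs):
        updated = False
        for xi in patterns:
            h = W @ xi                        # local fields for this pattern
            unstable = xi * h < kappa         # neurons below the margin
            if unstable.any():
                # perceptron-style correction on the unstable rows only
                W[unstable] += lr * np.outer(xi[unstable], xi) / N
                updated = True
        np.fill_diagonal(W, 0.0)              # no self-coupling
        if not updated:                       # all patterns stable: done
            break
    return W

def recall(W, x, steps=50):
    """Synchronous sign-threshold dynamics until a fixed point (or step cap)."""
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, M = 100, 60                            # alpha = M/N = 0.6
    patterns = rng.choice([-1, 1], size=(M, N))
    W = train_margin(patterns, kappa=0.5)
    # corrupt 10% of one stored pattern and test fault tolerance
    probe = patterns[0].copy()
    flip = rng.choice(N, size=10, replace=False)
    probe[flip] *= -1
    print("recovered:", np.array_equal(recall(W, probe), patterns[0]))
```

Note that correcting each neuron's incoming weights independently generally yields an asymmetric weight matrix, which is one reason the symmetry and convergence of such trained networks warrant the separate discussion the abstract mentions.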
Source
《自动化学报》 (Acta Automatica Sinica), 1995, No. 6, pp. 641-648 (8 pages)
Indexed in: EI, CSCD, Peking University Core Journals
Funding
National Climbing Program on Nonlinear Science
Keywords
Neural network, associative memory, fault tolerance, optimized training, attractor, basin of attraction