Abstract
Most current artificial neural network models suffer from long training times, convergence that is difficult to guarantee, poor robustness (low error tolerance), and local minima. These drawbacks become especially severe as network size grows, and they prevent the networks from scaling up to real problems. To overcome these limitations, a set of design approaches is proposed: using the global knowledge embedded in the training sample set, the network structure and its parameters are designed directly with different mathematical tools. Analysis and experimental results show that these methods are effective and offer a sound way of improving current neural models.
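The abstract's central idea, fixing a network's structure and weights directly from the training set rather than by iterative training, can be illustrated with a classic textbook construction for binary threshold neurons. This sketch is only illustrative and is not the paper's specific method; all function names are hypothetical:

```python
# Constructive design sketch: build a one-hidden-layer threshold network
# directly from binary training samples, with no iterative training.
# (Illustrative textbook construction, not the paper's algorithm.)

def step(v):
    """Hard threshold neuron: fires when the weighted sum is non-negative."""
    return 1 if v >= 0 else 0

def build_network(samples):
    """One hidden unit per positive sample (x, 1), matching exactly x.

    For a binary vector x, weights w_i = 2*x_i - 1 and threshold equal to
    the number of ones in x make the unit fire iff the input equals x;
    the output unit then ORs the hidden units.
    """
    hidden = []
    for x, label in samples:
        if label == 1:
            w = [2 * xi - 1 for xi in x]
            theta = sum(x)  # weighted sum reaches theta only on exact match
            hidden.append((w, theta))
    return hidden

def predict(hidden, x):
    # Output unit: OR of all hidden units (threshold 1 over their outputs).
    fires = [step(sum(wi * xi for wi, xi in zip(w, x)) - theta)
             for w, theta in hidden]
    return 1 if sum(fires) >= 1 else 0
```

Because every weight and threshold is computed in one pass over the samples, there is no gradient descent and hence no local-minimum or convergence problem, which is the kind of benefit the abstract claims for constructive design.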
Source
Journal of Tsinghua University (Science and Technology) 《清华大学学报(自然科学版)》
Indexed in: EI, CAS, CSCD, PKU Core
1998, Issue S1, pp. 4-7 (4 pages)
Funding
National "863" High Technology Program
Key Laboratory Special Project of the National Natural Science Foundation of China
Keywords
artificial neural networks
constructive algorithm
feed-forward propagation
programming methods
weight-threshold neuron models
probabilistic logic neuron