Abstract
A new approach to improving the generalization ability of neural networks, the "Shrinking-Magnifying Approach" (SMA), is presented from the perspective of fuzzy theory. The approach shrinks or magnifies the input vector so as to reduce, or fuzzify, the difference between the training samples and new patterns, thereby improving the network's generalization. An accompanying α-algorithm finds an appropriate scaling factor α and yields a new network with stronger generalization ability. A number of experiments demonstrate the effectiveness of SMA and the α-algorithm; the experimental results are discussed in detail, and the working principle of SMA is given a preliminary theoretical analysis.
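The abstract does not give the details of the α-algorithm, so the following is only a minimal sketch, in Python, of the general shrinking/magnifying idea it describes: scale the input vectors by a candidate factor α and keep the value that gives the lowest misclassification rate on held-out data. The toy data, the MLPClassifier network, the grid of candidate factors, and the helper names are all assumptions made for illustration, not the paper's method.

# Hypothetical sketch: search for a scaling factor alpha that minimizes
# the misclassification rate of an already-trained network on held-out data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))            # toy feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy binary labels

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

def misclassification_rate(model, X, y, alpha):
    # Shrink (alpha < 1) or magnify (alpha > 1) the inputs, then evaluate.
    return np.mean(model.predict(alpha * X) != y)

# Grid search over candidate scaling factors.
alphas = np.linspace(0.5, 1.5, 21)
errors = [misclassification_rate(net, X_val, y_val, a) for a in alphas]
best_alpha = alphas[int(np.argmin(errors))]
print(f"best alpha = {best_alpha:.2f}, validation error = {min(errors):.3f}")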
Source
《计算机工程与应用》
CSCD
Peking University Core Journal
2006, No. 4, pp. 38-41 (4 pages)
Computer Engineering and Applications
Funding
Supported by the Natural Science Foundation of Henan Province (No. 0511012500)
Supported by the Department of Science and Technology of the Ministry of Education: Research on Fuzzy Neural Control Based on Tabu Search
Keywords
neural network
generalization
misclassification rate
fuzzy theory