Let the function π: R^n → R be defined by π(x_1, …, x_n) = ∏_{k=1}^{n} x_k. Recently, approximating continuous functions of several variables by the composition g∘π of the function π with a one-variable continuous function g has attracted wide attention, owing to its important applications in artificial neural networks. In fact, the capability of the so-called Sigma-Pi neural networks, mathematically speaking, …
Let the function π: R^n → R be defined by π(x_1, …, x_n) = ∏_{k=1}^{n} x_k. Recently, approximating continuous functions of several variables by the composition and superposition g∘π of the function π and a function of one variable g has attracted great attention due to its applications in artificial neural networks (cf. references 1—3).
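The product function π and a composition g∘π can be sketched in a few lines. This is a minimal illustration of the objects defined above; the choice of g = tanh is an assumption for demonstration only and is not taken from the paper.

```python
import math

def pi_prod(x):
    """The product function π: R^n → R, π(x_1, …, x_n) = x_1 · x_2 · … · x_n."""
    return math.prod(x)

def compose(g, f):
    """Return the composition g∘f, mapping x to g(f(x))."""
    return lambda x: g(f(x))

# Illustrative composition g∘π with g(t) = tanh(t), a typical
# one-variable activation (an assumed choice, not from the paper).
g_of_pi = compose(math.tanh, pi_prod)
print(g_of_pi([1.0, 2.0, 0.5]))  # product is 1.0, so this prints tanh(1.0)
```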
In this paper, a gradient method with momentum for sigma-pi-sigma neural networks (SPSNN) is considered in order to accelerate the convergence of the learning procedure for the network weights. The momentum coefficient is chosen in an adaptive manner, and the corresponding weak convergence and strong convergence results are proved.
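A gradient method with momentum updates the weights as Δw_k = −η∇E(w_k) + α_k Δw_{k−1}. The sketch below illustrates this scheme with a hypothetical adaptive rule α_k proportional to the gradient norm; the paper's actual adaptive choice of the momentum coefficient is not reproduced here.

```python
import numpy as np

def gradient_descent_momentum(grad, w0, eta=0.1, mu=0.5, steps=100):
    """Gradient method with momentum:
        Δw_k = -η ∇E(w_k) + α_k Δw_{k-1},
    where α_k is chosen adaptively. The rule α_k = μ·‖∇E(w_k)‖ below
    is an assumed illustration, not the paper's rule.
    """
    w = np.asarray(w0, dtype=float)
    dw = np.zeros_like(w)                 # Δw_{-1} = 0
    for _ in range(steps):
        g = grad(w)
        alpha = mu * np.linalg.norm(g)    # adaptive momentum coefficient (assumed form)
        dw = -eta * g + alpha * dw        # momentum update of the increment
        w = w + dw
    return w

# Usage: minimize E(w) = ‖w‖²/2, whose gradient is simply w;
# the iterates converge toward the minimizer w = 0.
w_star = gradient_descent_momentum(lambda w: w, [1.0, -2.0])
```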
Funding: Project supported in part by the Natural Science Foundation of Shanghai and the National Key Laboratory of Intelligent Technology and Systems of China.