Abstract
Projection-type neural networks naturally guarantee the feasibility of solutions and offer few adjustable parameters, a low-dimensional search direction, and a simple model structure, which has attracted wide attention. A prerequisite for using a neural network to solve optimization problems is that it be globally convergent. So far, this property of the model has been proved only for strictly convex quadratic programming with bound constraints. Using ordinary differential equation theory and the LaSalle invariance principle, and by constructing a Lyapunov function, this paper proves the global convergence of the network for general convex programming problems and extends the constraint region to an arbitrary closed convex set. These results lay the foundation for applying this class of networks and broaden their range of application. The exponential convergence of the model under weaker conditions is also discussed. Finally, a set of numerical examples demonstrates that the network is computationally feasible and effective.
Projection-type neural networks for optimization problems naturally guarantee the feasibility of solutions, and they have advantages over other networks: fewer adjustable parameters, a lower-dimensional search space, and a simpler structure. However, their global convergence has so far been proved theoretically only for strictly convex quadratic optimization with bound constraints. In this paper, the global convergence of such networks for general convex programming problems is proved by means of ordinary differential equation theory and the LaSalle invariance principle, with the constraint region extended to an arbitrary closed convex set. Their exponential convergence rate is also discussed. The obtained results establish the applicability of these networks. Several numerical examples are given to demonstrate their feasibility and efficiency.
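The projection-type network discussed here is typically governed by the dynamics dx/dt = P_Ω(x − α∇f(x)) − x, whose equilibria satisfy the optimality condition x = P_Ω(x − α∇f(x)). The following is a minimal illustrative sketch (not taken from the paper) that integrates these dynamics with forward Euler for a box-constrained convex quadratic program; the matrices `Q`, `c` and the box bounds `l`, `u` are made-up example data, and the step sizes `alpha` and `dt` are arbitrary choices.

```python
import numpy as np

# Example convex QP (made-up data):
#   min 0.5 x^T Q x + c^T x   s.t.  l <= x <= u
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # symmetric positive definite
c = np.array([-1.0, -2.0])
l = np.array([0.0, 0.0])     # lower bounds of the box Omega
u = np.array([1.0, 1.0])     # upper bounds of the box Omega

def grad(x):
    return Q @ x + c

def project(x):
    # Projection onto the box Omega = [l, u]; for a general closed
    # convex set this would be replaced by that set's projection operator.
    return np.clip(x, l, u)

def run_network(x0, alpha=0.5, dt=0.05, steps=2000):
    # Forward-Euler integration of dx/dt = P_Omega(x - alpha*grad f(x)) - x.
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (project(x - alpha * grad(x)) - x)
    return x

x_star = run_network(np.array([0.5, 0.5]))
# At an equilibrium, x = P_Omega(x - alpha*grad f(x)), so this
# residual should be (numerically) zero at the optimizer.
residual = np.linalg.norm(project(x_star - 0.5 * grad(x_star)) - x_star)
print(x_star, residual)
```

Because every Euler step blends the current state with a point projected into Ω, the trajectory remains attracted to the feasible set, mirroring the feasibility guarantee the abstract highlights.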
Source
《计算机学报》
EI
CSCD
PKU Core Journal
2005, Issue 7, pp. 1178-1184 (7 pages)
Chinese Journal of Computers
Funding
Supported by the National Natural Science Foundation of China (Grants 10371097, 60473034)
Keywords
Neural networks
Convex programming
Global convergence
Equilibrium point
Projection operator
Boundary conditions
Neural networks
Numerical methods
Optimization
Ordinary differential equations