Abstract
This paper presents a new super-memory gradient method for unconstrained optimization. At each iteration the method combines the current gradient with information from previous iterates to generate a descent direction, and uses the Wolfe line search to determine the step size. Global convergence is proved under mild conditions. Because the method requires no matrix computation or storage at any iteration, it is well suited to large-scale unconstrained optimization problems.
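The abstract does not reproduce the authors' direction formula, so the following Python sketch illustrates one common super-memory gradient scheme rather than the paper's exact method: the direction d_k = -g_k + Σ β_{k,i} d_{k-i} reuses the m most recent directions, with β_{k,i} = ρ‖g_k‖/‖d_{k-i}‖ and ρ ∈ (0, 1/m) so that g_kᵀd_k ≤ -(1 - mρ)‖g_k‖² < 0 guarantees descent; the step size comes from SciPy's Wolfe line search. The function name `super_memory_gradient` and all parameter defaults are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import line_search

def super_memory_gradient(f, grad, x0, m=3, rho=0.1, tol=1e-6, max_iter=1000):
    """Sketch of a super-memory gradient method with Wolfe line search.

    Assumed direction rule (not necessarily the paper's):
        d_k = -g_k + sum_i beta_{k,i} * d_{k-i},
        beta_{k,i} = rho * ||g_k|| / ||d_{k-i}||,  rho in (0, 1/m),
    which gives g_k^T d_k <= -(1 - m*rho) * ||g_k||^2 < 0 (descent).
    Only vectors are stored: no matrix computation or storage.
    """
    x = np.asarray(x0, dtype=float)
    memory = []  # the m most recent search directions d_{k-1}, ..., d_{k-m}
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break
        # Combine steepest descent with the stored previous directions.
        d = -g
        for d_prev in memory:
            d = d + (rho * gnorm / np.linalg.norm(d_prev)) * d_prev
        # Wolfe line search chooses the step size alpha_k.
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:
            # Wolfe search failed on the combined direction; retry with -g.
            d, memory = -g, []
            alpha = line_search(f, grad, x, d, gfk=g)[0]
            if alpha is None:
                break
        x = x + alpha * d
        memory = ([d] + memory)[:m]  # keep only the m newest directions
    return x

# Usage example: minimize the Rosenbrock function.
if __name__ == "__main__":
    from scipy.optimize import rosen, rosen_der
    x_star = super_memory_gradient(rosen, rosen_der, np.array([-1.2, 1.0]))
    print(x_star)  # should approach the minimizer (1, 1)
```

Because the memory holds only m vectors, per-iteration cost and storage stay linear in the problem dimension, which is the property the abstract credits for the method's suitability to large-scale problems.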
Source
Mathematical Theory and Applications (《数学理论与应用》), 2008, No. 4, pp. 1-5.
Keywords
Unconstrained optimization
Super-memory gradient method
Wolfe line search
Global convergence