Abstract: In this paper, we consider the convergence properties of a new class of three-term conjugate gradient methods with a generalized Armijo step size rule for minimizing a continuously differentiable function f on R^n, without assuming that the sequence {x_k} of iterates is bounded. We prove that lim inf ‖∇f(x_k)‖ = 0. Moreover, we prove that when f is a pseudo-convex (quasi-convex) function, the new method has strong convergence results: either x_k → x*, where x* is a minimizer (stationary point); or ‖x_k‖ → ∞, arg min{f(x) : x ∈ R^n} = ∅, and f(x_k) ↓ inf{f(x) : x ∈ R^n}. By combining the FR, PR, and HS methods with the new method, these methods are modified to have the global convergence property. Numerical results show that the new algorithms are efficient in comparison with the FR, PR, and HS conjugate gradient methods with the Armijo step size rule.
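The abstract does not give the paper's exact three-term direction or its generalized Armijo rule, so the following is only a minimal sketch of the general scheme: a backtracking Armijo search combined with one common three-term direction built from a Polak-Ribiere-type coefficient. The formulas for beta and theta below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def three_term_cg(f, grad, x0, c=1e-4, rho=0.5, tol=1e-6, max_iter=1000):
    """Sketch of a three-term CG method with a backtracking Armijo rule.

    The specific direction update and the paper's generalized Armijo
    rule are not stated in the abstract; the beta/theta choices here
    are a common textbook variant, used only for illustration.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking: shrink t until sufficient decrease holds.
        t, fx, slope = 1.0, f(x), g.dot(d)
        while f(x + t * d) > fx + c * t * slope:
            t *= rho
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        # Polak-Ribiere coefficient (one of the FR/PR/HS choices the
        # paper modifies).
        beta = g_new.dot(y) / g.dot(g)
        # Third term; this particular theta is an assumption, chosen so
        # the update has the usual three-term structure.
        theta = g_new.dot(d) / g.dot(g)
        d = -g_new + beta * d - theta * y
        x, g = x_new, g_new
    return x
```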
Abstract: In this paper, a new class of three-term memory gradient methods with a non-monotone line search technique for unconstrained optimization is presented. Global convergence properties of the new methods are discussed. By combining the quasi-Newton method with the new method, the former is modified to have the global convergence property. Numerical results show that the new algorithm is efficient.
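As an illustration of the non-monotone line search idea (in the style of Grippo, Lampariello, and Lucidi), the sketch below accepts a step when the new function value improves on the maximum of the last few iterates rather than on f(x_k) alone. Its pairing with the paper's three-term memory gradient direction, and the history length, are assumptions.

```python
from collections import deque
import numpy as np

def nonmonotone_armijo(f, x, d, g, fhist, c=1e-4, rho=0.5, t0=1.0):
    """Non-monotone Armijo search (GLL-style); a sketch, not the
    paper's exact rule.

    fhist holds the last few function values; the step is accepted
    when f(x + t*d) falls sufficiently below max(fhist) rather than
    below f(x) alone, which permits occasional increases in f.
    """
    f_ref = max(fhist)  # reference value over the recent history
    t = t0
    while f(x + t * d) > f_ref + c * t * g.dot(d):
        t *= rho
    return t

# Usage sketch: keep a bounded history of function values.
# fhist = deque([f(x0)], maxlen=10)
# t = nonmonotone_armijo(f, x, d, g, fhist)
# x = x + t * d
# fhist.append(f(x))
```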