Abstract: In this paper we propose a self-adaptive trust region algorithm. The trust region radius is updated at a variable rate according to the ratio between the actual reduction and the predicted reduction of the objective function, rather than by simply enlarging or reducing the original trust region radius at a constant rate. We show that this new algorithm preserves the strong convergence property of traditional trust region methods. Numerical results are also presented.
Funding: The Chinese National Science Foundation Grant 10071050 and the Science and Technology Foundation of Shanghai Higher Education.
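The abstract does not give the paper's actual update function, but the idea can be sketched: the factor applied to the trust region radius is itself a function of the agreement ratio ρ, rather than one of a few fixed constants. The thresholds and factors below are illustrative assumptions, not the paper's formulas.

```python
def update_radius(delta, rho, delta_max=100.0):
    """Variable-rate trust region radius update (illustrative sketch).

    rho is the ratio of the actual reduction to the predicted reduction
    of the objective.  Classical rules multiply delta by a constant
    (e.g. 0.25 or 2.0); here the multiplier itself depends on rho.
    All thresholds and factors are assumptions for illustration.
    """
    if rho < 0.25:
        # Poor model agreement: shrink, and shrink harder the smaller rho is.
        return delta * max(0.1, 0.5 * (1.0 + max(rho, 0.0)))
    if rho > 0.75:
        # Good agreement: enlarge, and enlarge faster the larger rho is.
        return min(delta * min(1.0 + rho, 4.0), delta_max)
    return delta  # acceptable agreement: keep the current radius
```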
Abstract: Focuses on a study which examined a modification of approximate trust region methods via two curvilinear paths for unconstrained optimization. Covers the properties of the curvilinear paths; a description of a method which combines a line search technique with an approximate trust region algorithm; the convergence analysis; and the numerical experiments.
Funding: The National Natural Science Foundation of China (19801033, 10171104).
Abstract: Conjugate gradient methods are very important methods for unconstrained optimization, especially for large scale problems. In this paper, we propose a new conjugate gradient method, in which the technique of nonmonotone line search is used. Under mild assumptions, we prove the global convergence of the method. Some numerical results are also presented.
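The abstract does not state which nonmonotone rule is used; a common choice is the rule of Grippo, Lampariello and Lucidi, which compares the trial value against the maximum of the last few objective values. A minimal sketch under that assumption:

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, c1=1e-4, tau=0.5, max_iter=50):
    """Nonmonotone Armijo backtracking: accept a step when f(x + a*d)
    falls below the maximum of the recent function values f_hist,
    rather than below f(x) itself; d must be a descent direction
    (g @ d < 0)."""
    f_ref = max(f_hist)           # nonmonotone reference value
    slope = c1 * (g @ d)          # sufficient-decrease term (negative)
    a = 1.0
    for _ in range(max_iter):
        if f(x + a * d) <= f_ref + a * slope:
            return a
        a *= tau                  # shrink the trial step
    return a

# Usage on a simple quadratic with a steepest-descent direction:
f = lambda x: 0.5 * (x @ x)
x = np.array([3.0, -4.0]); g = x.copy(); d = -g
a = nonmonotone_armijo(f, x, d, g, f_hist=[f(x)])   # accepts a = 1.0
```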
Abstract: In this report we present some new numerical methods for unconstrained optimization. These methods apply update formulae that do not satisfy the quasi-Newton equation. We derive these new formulae by considering different techniques of approximating the objective function. Theoretical analyses are given to show the advantages of using non-quasi-Newton updates. Under mild conditions we prove that our new update formulae preserve global convergence properties. Numerical results are also presented.
Funding: The authors would like to thank Prof. Y.-X. Yuan for providing the source programs for ref. [16]. Zhang Xiangsun was supported by the National Natural Science Foundation of China (Grant No. 39830070) and Hong Kong Baptist University; Zhang Juliang was supported by …
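As a rough illustration of a non-quasi-Newton update, the sketch below uses a BFGS-form update in which the usual secant vector y is replaced by a modified vector built from function values as well as gradients, so the classical quasi-Newton equation B⁺s = y no longer holds. The particular θ correction shown is one variant from the literature and is not necessarily the formula derived in this report.

```python
import numpy as np

def non_qn_bfgs_update(B, s, y, f_old, f_new, g_old, g_new):
    """BFGS-form update with a modified secant vector (illustrative).

    Quasi-Newton updates enforce B_new @ s = y with y = g_new - g_old.
    Non-quasi-Newton updates replace y by y_mod, which also uses the
    function values; the theta term below is one published variant,
    shown only to make the idea concrete."""
    theta = 6.0 * (f_old - f_new) + 3.0 * (g_old + g_new) @ s
    y_mod = y + (theta / (s @ s)) * s      # modified secant vector
    if y_mod @ s <= 0:
        return B                           # skip: keep B positive definite
    Bs = B @ s
    return (B - np.outer(Bs, Bs) / (s @ Bs)
              + np.outer(y_mod, y_mod) / (y_mod @ s))
```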
Abstract: In this paper, a new trust region subproblem is proposed. The trust radius in the new subproblem adjusts itself adaptively. As a result, an adaptive trust region method is constructed based on the new trust region subproblem. The local and global convergence results of the adaptive trust region method are proved. Numerical results indicate that the new method is very efficient.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 19801033).
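The abstract does not spell out the new subproblem; one published scheme of this kind sets the radius directly from current gradient information, e.g. Δ = c^p‖g‖ with p raised after each rejected step, instead of carrying a radius over between iterations. A sketch under that assumption, paired with the standard Cauchy-point solver for the subproblem:

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Minimizer of the quadratic model g@s + 0.5*s@B@s along -g,
    restricted to the ball ||s|| <= delta (standard Cauchy point)."""
    gBg = g @ B @ g
    gnorm = np.linalg.norm(g)
    tau = 1.0 if gBg <= 0 else min(gnorm**3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g

def adaptive_tr_step(g, B, c=0.5, p=0):
    """Self-adaptive radius: recomputed from current data each time,
    with p increased after each rejected trial step.  The rule
    delta = c**p * ||g|| and the constant c are assumptions."""
    delta = (c ** p) * np.linalg.norm(g)
    return cauchy_point(g, B, delta)
```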
Abstract: The convergence properties of the Fletcher-Reeves method for unconstrained optimization are further studied with the technique of generalized line search. Two conditions are given which guarantee the global convergence of the Fletcher-Reeves method using generalized Wolfe line searches or generalized Armijo line searches, and an example is constructed showing that the conditions cannot be relaxed in certain senses.
Funding: Research partially supported by Chinese NSF grants 19801033, 19771047 and 10171104.
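For reference, the Fletcher-Reeves recurrence that the convergence conditions apply to (the generalized Wolfe/Armijo step size rules themselves are not given in the abstract):

```python
import numpy as np

def fr_direction(g_new, g_old, d_old):
    """Fletcher-Reeves search direction:
        beta = ||g_new||^2 / ||g_old||^2,  d = -g_new + beta * d_old.
    Any line search (here: the paper's generalized Wolfe or Armijo
    rules) supplies the step taken along d."""
    beta = (g_new @ g_new) / (g_old @ g_old)
    return -g_new + beta * d_old
```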
Abstract: In this paper we test different conjugate gradient (CG) methods for solving large-scale unconstrained optimization problems. The methods are divided into two groups: the first group includes five basic CG methods and the second five hybrid CG methods. A collection of medium-scale and large-scale test problems is drawn from CUTE, a standard collection of test problems. The conjugate gradient methods are ranked according to the numerical results. Some remarks are given.
Abstract: The steepest descent method is the simplest gradient method for optimization. It is well known that exact line searches along each steepest descent direction may converge very slowly. An important result was given by Barzilai and Borwein: their method is proved to be superlinearly convergent for convex quadratics in two-dimensional space, and performs quite well for high dimensional problems. The Barzilai-Borwein (BB) method is not monotone, and thus it is not easily generalized to general nonlinear functions unless certain nonmonotone techniques are applied. Therefore, it is very desirable to find stepsize formulae which enable fast convergence and possess the monotone property. Such a stepsize αk for the steepest descent method is suggested in this paper. An algorithm with this new stepsize in even iterations and exact line search in odd iterations is proposed. Numerical results are presented, which confirm that the new method can find the exact solution within 3 iterations for two-dimensional problems. The new method is very efficient for small scale problems. A modified version of the new method is also presented, where the new technique for selecting the stepsize is used after every two exact line searches. The modified algorithm is comparable to the Barzilai-Borwein method for large scale problems and better for small scale problems.
Funding: Supported by the National Natural Science Foundation of China (Nos. 19801033 and 10171104).
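For reference, the classical Barzilai-Borwein step size that the paper takes as its point of comparison (the paper's own monotone step size formula is not given in the abstract):

```python
import numpy as np

def bb_stepsize(x_new, x_old, g_new, g_old, fallback=1.0):
    """Barzilai-Borwein step for gradient descent:
        alpha = (s @ s) / (s @ y),  s = x_new - x_old,  y = g_new - g_old.
    Superlinearly convergent on two-dimensional convex quadratics but
    nonmonotone in general, which is what motivates the paper's search
    for a monotone alternative."""
    s, y = x_new - x_old, g_new - g_old
    sy = s @ y
    return (s @ s) / sy if sy > 0 else fallback
```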
Abstract: Two Armijo-type line searches are proposed in this paper for nonlinear conjugate gradient methods. Under these line searches, global convergence results are established for several famous conjugate gradient methods, including the Fletcher-Reeves method, the Polak-Ribiere-Polyak method, and the conjugate descent method.
Funding: This work was supported by the National Natural Science Foundation of China (10071037).
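A compact sketch of the setting: a nonlinear CG loop in which the β formula can be any of the three methods named above. A plain Armijo backtracking search stands in for the paper's two modified Armijo-type rules, whose exact forms are not given in the abstract.

```python
import numpy as np

def cg_minimize(f, grad, x0, beta_rule="PRP", iters=200, tol=1e-8):
    """Nonlinear conjugate gradient with Armijo backtracking.
    beta_rule: "FR" (Fletcher-Reeves), "PRP" (Polak-Ribiere-Polyak)
    or "CD" (Fletcher's conjugate descent)."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        a, fx, slope = 1.0, f(x), 1e-4 * (g @ d)   # Armijo backtracking
        while f(x + a * d) > fx + a * slope and a > 1e-16:
            a *= 0.5
        x_new = x + a * d
        g_new = grad(x_new)
        if beta_rule == "FR":
            beta = (g_new @ g_new) / (g @ g)
        elif beta_rule == "PRP":
            beta = (g_new @ (g_new - g)) / (g @ g)
        else:                                      # "CD"
            beta = (g_new @ g_new) / -(d @ g)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

x_star = cg_minimize(lambda x: (x**2).sum(), lambda x: 2 * x, [1.0, -2.0])
```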
Abstract: In this paper, a new derivative-free trust region method is developed based on a conic interpolation model for unconstrained optimization. The conic interpolation model is built by means of the quadratic model function, the collinear scaling formula, quadratic approximation and interpolation. All the parameters in this model are determined by objective function interpolation conditions. A new derivative-free method is developed based upon this model, and the global convergence of this new method is proved without any gradient information.
Funding: Supported by the NNSF of China (10231060 and 10501024), the Specialized Research Fund for the Doctoral Program of Higher Education of China (20040319003), the Natural Science Grant of Jiangsu Province of China (BK2006214), and the Foundation of Nanjing Xiaozhuang College (2004NXY20).
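The standard conic (collinear scaling) model underlying such methods has the form below; in the derivative-free setting described above, all of f0, g, B and the horizon vector b are fitted from interpolation conditions on sampled objective values rather than from derivatives.

```python
import numpy as np

def conic_model(s, f0, g, B, b):
    """Conic model value at trial step s:
        m(s) = f0 + g@s / (1 - b@s) + 0.5 * s@B@s / (1 - b@s)**2.
    b is the horizon (collinear scaling) vector; b = 0 recovers the
    ordinary quadratic model."""
    t = 1.0 - b @ s
    return f0 + (g @ s) / t + 0.5 * (s @ B @ s) / t**2
```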
Abstract: In this paper, we combine nonmonotone and adaptive techniques with the trust region method for unconstrained minimization problems. We define a new ratio of the actual descent to the predicted descent. Then, instead of a monotone sequence, a nonmonotone sequence of function values is employed. With the adaptive technique, the trust region radius Δk can be adjusted automatically to improve the efficiency of trust region methods. By means of the Bunch-Parlett factorization, we construct a method with an indefinite dogleg path for solving the trust region subproblem, which can handle an indefinite approximate Hessian Bk. The convergence properties of the algorithm are established. Finally, detailed numerical results are reported to show that our algorithm is efficient.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 10231060) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20040319003).
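The new ratio is not written out in the abstract; a common nonmonotone variant measures the actual descent from the largest of the recent function values, as sketched here (illustrative; the paper's definition may differ):

```python
def nonmonotone_ratio(f_hist, f_trial, predicted_descent):
    """Nonmonotone actual/predicted descent ratio: the actual descent
    is measured from max(f_hist), the largest recent objective value,
    so occasional increases of f are tolerated while the usual
    radius-adjustment logic is kept."""
    return (max(f_hist) - f_trial) / predicted_descent
```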
Abstract: In this paper, an unconstrained optimization method using the nonmonotone second-order Goldstein line search is proposed. By using negative curvature information from the Hessian, the generated sequence is shown to converge to a stationary point satisfying the second-order optimality conditions. Numerical tests on a set of standard test problems confirm the efficiency of our new method.
Funding: This research is supported by the National Natural Science Foundation of China (10171055).
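One standard way to extract the negative curvature information the method relies on is from an eigendecomposition of the Hessian; the nonmonotone second-order Goldstein search itself is not reproduced here.

```python
import numpy as np

def negative_curvature_direction(H, g, tol=1e-10):
    """Return a direction d with d @ H @ d < 0, oriented so g @ d <= 0,
    when the Hessian H has a negative eigenvalue; None otherwise.
    Steps along such directions let the iteration escape saddle points
    and converge to points satisfying second-order conditions."""
    w, V = np.linalg.eigh(H)
    if w[0] >= -tol:
        return None               # no usable negative curvature
    d = V[:, 0]                   # eigenvector of the smallest eigenvalue
    return -d if g @ d > 0 else d
```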
Abstract: In this paper, a new Wolfe-type line search and a new Armijo-type line search are proposed, and some global convergence properties of a three-term conjugate gradient method with the two line searches are proved.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 10761001).
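For reference, the classical (weak) Wolfe conditions that the paper's Wolfe-type rule modifies; the paper's exact variants are not given in the abstract.

```python
import numpy as np

def wolfe_conditions(f, grad, x, d, a, c1=1e-4, c2=0.9):
    """Check the standard Wolfe conditions for step size a along d:
        sufficient decrease: f(x + a*d) <= f(x) + c1*a*(g @ d)
        curvature:           grad(x + a*d) @ d >= c2*(g @ d)
    with 0 < c1 < c2 < 1 and g = grad(x)."""
    g_d = grad(x) @ d
    decrease = f(x + a * d) <= f(x) + c1 * a * g_d
    curvature = grad(x + a * d) @ d >= c2 * g_d
    return decrease and curvature
```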
Abstract: In this paper, a modified formula for βk^PRP is proposed for the conjugate gradient method for solving unconstrained optimization problems. The value of βk^PRP remains nonnegative independently of the line search. Under mild conditions, the global convergence of the modified PRP method with the strong Wolfe-Powell line search is established. Preliminary numerical results show that the modified method is efficient.
Funding: This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 19525101, 19731010, 19801033 and 10171104), and also by an Innovation Fund of the Academy of Mathematics and System Sciences of CAS.
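For context, the classical PRP coefficient (which can become negative) and the simplest nonnegativity fix, the PRP+ truncation. The paper's modification instead alters the formula itself so that it stays nonnegative for any line search; that formula is not stated in the abstract.

```python
import numpy as np

def beta_prp(g_new, g_old):
    """Classical Polak-Ribiere-Polyak coefficient; may be negative."""
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)

def beta_prp_plus(g_new, g_old):
    """PRP+ truncation: clip the PRP coefficient at zero, the standard
    baseline for nonnegative-beta PRP variants."""
    return max(0.0, beta_prp(g_new, g_old))
```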
Abstract: Conjugate gradient methods are very important methods for solving nonlinear optimization problems, especially for large scale problems. However, unlike quasi-Newton methods, conjugate gradient methods were usually analyzed individually. In this paper, we propose a class of conjugate gradient methods which can be regarded as some kind of convex combination of the Fletcher-Reeves method and the method proposed by Dai et al. To analyze this class of methods, we introduce some unified tools that concern a general method with the scalar βk having the form φk/φk−1. Consequently, the class of conjugate gradient methods can be analyzed uniformly.
Funding: Supported by the Natural Science Foundation of Anhui Province under Grant No. 1708085MF159, the Natural Science Foundation of the Anhui Higher Education Institutions under Grant Nos. KJ2017A375 and KJ2019A0604, and the Abroad Visiting Program for Excellent Young Talents in Universities of Anhui Province under Grant No. GXGWFX2019022.
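The unified form is simple to state in code: βk = φk/φk−1 for a scalar sequence φk, with Fletcher-Reeves recovered by φk = ‖gk‖².

```python
import numpy as np

def beta_phi(phi_new, phi_old):
    """Unified CG coefficient beta_k = phi_k / phi_{k-1}; different
    choices of the scalar sequence phi_k give the different members of
    the class analyzed in the paper."""
    return phi_new / phi_old

# Fletcher-Reeves as an instance of the family: phi_k = ||g_k||^2.
g_old, g_new = np.array([2.0, -1.0]), np.array([0.5, 0.5])
beta_fr = beta_phi(g_new @ g_new, g_old @ g_old)
```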
Abstract: To save calculations of the Jacobian, a multi-step Levenberg-Marquardt method named the Shamanskii-like LM method for systems of nonlinear equations was proposed by Fa. Its convergence properties have been proved by using a trust region technique under the local error bound condition. However, the authors wonder whether similar convergence properties still hold with standard line searches, since the direction may not be a descent direction. For this purpose, the authors present a new nonmonotone m-th order Armijo-type line search to guarantee global convergence. Under the same condition as in the trust region case, the convergence rate is also shown to be m+1 by using this line search technique. Numerical experiments show the new algorithm can save much running time for large scale problems, so it is efficient and promising.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 10231060), the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20040319003), and the Graduates' Creative Project of Jiangsu Province, China.
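A minimal sketch of the multi-step (Shamanskii-like) LM idea: evaluate the Jacobian once and reuse it for m damped Gauss-Newton steps. The damping rule λ = μ‖F‖² is a common choice in this literature; it, the constants, and the omission of the paper's nonmonotone line search are all simplifying assumptions.

```python
import numpy as np

def multistep_lm(F, jac, x0, m=3, mu=1e-2, outer=50, tol=1e-10):
    """Shamanskii-like Levenberg-Marquardt sketch for F(x) = 0:
    one Jacobian evaluation is shared by m LM steps
        (J.T @ J + lam*I) d = -J.T @ F(x),
    saving Jacobian evaluations on large problems."""
    x = np.asarray(x0, float)
    n = x.size
    for _ in range(outer):
        J = jac(x)                        # evaluated once per m steps
        for _ in range(m):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                return x
            lam = mu * (Fx @ Fx)          # damping from the residual norm
            x = x + np.linalg.solve(J.T @ J + lam * np.eye(n), -J.T @ Fx)
    return x
```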
Abstract: In this paper, an algorithm for unconstrained optimization that employs both trust region techniques and curvilinear searches is proposed. At every iteration, we solve the trust region subproblem, whose radius is generated adaptively, only once. Nonmonotonic backtracking curvilinear searches are performed when the solution of the subproblem is unacceptable. The global convergence and fast local convergence rate of the proposed algorithms are established under some reasonable conditions. The results of numerical experiments are reported to show the effectiveness of the proposed algorithms.
Funding: Supported by the Key Project of 2010 Chongqing Higher Education Teaching Reform (Grant No. 102104).
Abstract: In this paper, we present a new nonlinear modified spectral CD conjugate gradient method for solving large scale unconstrained optimization problems. The direction generated by the method is a descent direction for the objective function, and this property depends neither on the line search rule nor on the convexity of the objective function. Moreover, the modified method reduces to the standard CD method if the line search is exact. Under some mild conditions, we prove that the modified method with line search is globally convergent even if the objective function is nonconvex. Preliminary numerical results show that the proposed method is very promising.
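The abstract's properties (descent independent of the line search, and reduction to standard CD under exact line search) can be realized by a construction of the following form; this is an illustration of how such spectral CD directions are typically built, not necessarily the paper's exact formula.

```python
import numpy as np

def spectral_cd_direction(g, g_prev, d_prev):
    """d = -theta*g + beta*d_prev with the CD coefficient
    beta = ||g||^2 / (-d_prev @ g_prev) and the spectral parameter
    theta = 1 + beta*(g @ d_prev)/||g||^2 (assumed form).  Then
    g @ d = -||g||^2 at every iteration, a descent direction regardless
    of the line search or convexity; an exact line search gives
    g @ d_prev = 0, hence theta = 1 and the standard CD direction."""
    gg = g @ g
    beta = gg / -(d_prev @ g_prev)
    theta = 1.0 + beta * (g @ d_prev) / gg
    return -theta * g + beta * d_prev
```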
Abstract: In this paper, an improved algorithm is proposed for unconstrained global optimization to tackle non-convex nonlinear multivariate polynomial programming problems. The proposed algorithm is based on the Bernstein polynomial approach. Novel features of the proposed algorithm are that it uses a new rule for the selection of the subdivision point, modified rules for the selection of the subdivision direction, and a new acceleration device to avoid some unnecessary subdivisions. The performance of the proposed algorithm is numerically tested on a collection of 16 test problems. The results of the tests show the proposed algorithm to be superior to the existing Bernstein algorithm in terms of the chosen performance metrics.
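The bounding step at the core of any Bernstein branch-and-bound algorithm can be shown in a few lines: converting a polynomial's power coefficients to Bernstein coefficients yields guaranteed range bounds on a box, which the subdivision rules then refine. The univariate conversion below is standard; the paper's new subdivision and acceleration rules are not reproduced.

```python
from math import comb

def bernstein_coefficients(a):
    """Bernstein coefficients on [0, 1] of p(x) = sum_i a[i]*x**i with
    degree n = len(a) - 1:
        b[j] = sum_{i <= j} (C(j, i) / C(n, i)) * a[i].
    min(b) and max(b) enclose the range of p on [0, 1]."""
    n = len(a) - 1
    return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
            for j in range(n + 1)]

# Range enclosure of p(x) = x**2 - x on [0, 1] (true range [-0.25, 0]):
b = bernstein_coefficients([0.0, -1.0, 1.0])
lower, upper = min(b), max(b)     # -> lower = -0.5, upper = 0.0
```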