Journal Articles
1,337 articles found
1. A SELF-ADAPTIVE TRUST REGION ALGORITHM (Cited by: 30)
Author: Long Hei (Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Sciences, Chinese Academy of Sciences, Beijing 100080, China; Department of Industrial Engineering and Management Sciences, Northwestern University, C2SO, 2145 Sheridan Road, Evanston, Illinois 60208, USA). Journal of Computational Mathematics (SCIE, CSCD), 2003, No. 2, pp. 229-236.
In this paper we propose a self-adaptive trust region algorithm. The trust region radius is updated at a variable rate according to the ratio between the actual reduction and the predicted reduction of the objective function, rather than by simply enlarging or reducing the original trust region radius at a constant rate. We show that this new algorithm preserves the strong convergence property of traditional trust region methods. Numerical results are also presented.
Keywords: trust region; unconstrained optimization; nonlinear optimization
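The contrast drawn in the abstract above can be sketched in a few lines. The paper's exact update formula is not given in the abstract, so the variable-rate rule below (the factor `min(2, max(0.25, 2*rho))`) is a hypothetical stand-in, shown only to contrast with the classical constant-rate update:

```python
def update_radius_constant(delta, rho):
    """Classical trust region update: shrink or enlarge at fixed rates,
    based on rho = (actual reduction) / (predicted reduction)."""
    if rho < 0.25:
        return 0.5 * delta   # poor model agreement: halve the radius
    if rho > 0.75:
        return 2.0 * delta   # good model agreement: double the radius
    return delta             # otherwise keep the radius unchanged

def update_radius_adaptive(delta, rho):
    """Variable-rate update (hypothetical form): the scaling factor grows
    continuously with rho instead of jumping between fixed rates."""
    factor = min(2.0, max(0.25, 2.0 * rho))
    return factor * delta
```

A step with ρ = 0.5 leaves the radius unchanged under both rules, but the adaptive rule scales the radius smoothly as ρ moves away from 0.5 rather than switching abruptly at the thresholds 0.25 and 0.75.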
2. A Trust Region Method with an Automatically Determined Trust Region Radius (Cited by: 28)
Author: 李改弟. Chinese Journal of Engineering Mathematics (CSCD, Peking University Core), 2006, No. 5, pp. 843-848.
This paper proposes a self-adaptive trust region method for unconstrained optimization. At each iteration, the second-order information contained in the current iterate is fully exploited to generate a trust region radius automatically, and the strategy for computing the radius incurs no extra computational cost. Under standard conditions, global convergence and local superlinear convergence are proved; numerical results verify the effectiveness of the new method.
Keywords: unconstrained; trust region method; self-adaptive; global convergence
3. CURVILINEAR PATHS AND TRUST REGION METHODS WITH NONMONOTONIC BACKTRACKING TECHNIQUE FOR UNCONSTRAINED OPTIMIZATION (Cited by: 26)
Author: De-tong Zhu (Department of Mathematics, Shanghai Normal University, Shanghai 200234, China). Journal of Computational Mathematics (SCIE, EI, CSCD), 2001, No. 3, pp. 241-258.
Focuses on a study which examined the modification of type approximate trust region methods via two curvilinear paths for unconstrained optimization. Covers: properties of the curvilinear paths; description of a method which combines a line search technique with an approximate trust region algorithm; the convergence analysis; and the numerical experiments.
Keywords: curvilinear paths; trust region methods; nonmonotonic technique; unconstrained optimization
4. A NONMONOTONE CONJUGATE GRADIENT ALGORITHM FOR UNCONSTRAINED OPTIMIZATION (Cited by: 28)
Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2002, No. 2, pp. 139-145.
Conjugate gradient methods are very important methods for unconstrained optimization, especially for large scale problems. In this paper, we propose a new conjugate gradient method, in which the technique of nonmonotone line search is used. Under mild assumptions, we prove the global convergence of the method. Some numerical results are also presented.
Keywords: unconstrained optimization; conjugate gradient; nonmonotone line search; global convergence
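To make the nonmonotone line search idea concrete: the sufficient-decrease test compares the trial value against the maximum of the last few objective values rather than only the current one, so occasional increases are tolerated. The sketch below follows the Grippo-Lampariello-Lucidi style and is not necessarily the exact rule of this paper:

```python
def nonmonotone_armijo(f, x, d, g, f_history, c=1e-4, tau=0.5, max_backtracks=50):
    """Backtrack until f(x + a*d) <= max(recent f values) + c * a * g'd.
    f_history holds the last few objective values (most recent last);
    d is assumed to be a descent direction (g'd < 0)."""
    f_ref = max(f_history)                      # nonmonotone reference value
    gd = sum(gi * di for gi, di in zip(g, d))   # directional derivative g'd
    a = 1.0
    for _ in range(max_backtracks):
        x_new = [xi + a * di for xi, di in zip(x, d)]
        if f(x_new) <= f_ref + c * a * gd:
            return a, x_new
        a *= tau                                # shrink the step and retry
    return a, x_new
```

With a history window of length one this reduces to the ordinary (monotone) Armijo backtracking rule.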
5. NON-QUASI-NEWTON UPDATES FOR UNCONSTRAINED OPTIMIZATION (Cited by: 25)
Authors: Yuan, Y. X.; Byrd, R. H. Journal of Computational Mathematics (SCIE, CSCD), 1995, No. 2, pp. 95-107.
In this report we present some new numerical methods for unconstrained optimization. These methods apply update formulae that do not satisfy the quasi-Newton equation. We derive these new formulae by considering different techniques of approximating the objective function. Theoretical analyses are given to show the advantages of using non-quasi-Newton updates. Under mild conditions we prove that our new update formulae preserve global convergence properties. Numerical results are also presented.
Keywords: non-quasi-Newton updates; unconstrained optimization
6. An adaptive trust region method and its convergence (Cited by: 10)
Authors: 章祥荪, 张菊亮, 廖立志. Science China Mathematics (SCIE), 2002, No. 5, pp. 620-631.
In this paper, a new trust region subproblem is proposed. The trust radius in the new subproblem adjusts itself adaptively. As a result, an adaptive trust region method is constructed based on the new trust region subproblem. The local and global convergence results of the adaptive trust region method are proved. Numerical results indicate that the new method is very efficient.
Keywords: trust region method; unconstrained optimization; global convergence; superlinear convergence
7. Further insight into the convergence of the Fletcher-Reeves method (Cited by: 16)
Author: 戴彧虹. Science China Mathematics (SCIE), 1999, No. 9, pp. 905-916.
The convergence properties of the Fletcher-Reeves method for unconstrained optimization are further studied with the technique of generalized line search. Two conditions are given which guarantee the global convergence of the Fletcher-Reeves method using generalized Wolfe line searches or generalized Armijo line searches, whereas an example is constructed showing that the conditions cannot be relaxed in certain senses.
Keywords: unconstrained optimization; conjugate gradient; Fletcher-Reeves method; generalized line search; global convergence
8. TESTING DIFFERENT CONJUGATE GRADIENT METHODS FOR LARGE-SCALE UNCONSTRAINED OPTIMIZATION (Cited by: 10)
Authors: Yu-hong Dai, Qin Ni. Journal of Computational Mathematics (SCIE, CSCD), 2003, No. 3, pp. 311-320.
In this paper we test different conjugate gradient (CG) methods for solving large-scale unconstrained optimization problems. The methods are divided into two groups: the first group includes five basic CG methods and the second includes five hybrid CG methods. A collection of medium-scale and large-scale test problems is drawn from a standard code of test problems, CUTE. The conjugate gradient methods are ranked according to the numerical results. Some remarks are given.
Keywords: conjugate gradient methods; large-scale unconstrained optimization; numerical tests
9. A NEW STEPSIZE FOR THE STEEPEST DESCENT METHOD (Cited by: 15)
Author: Ya-xiang Yuan. Journal of Computational Mathematics (SCIE, EI, CSCD), 2006, No. 2, pp. 149-156.
The steepest descent method is the simplest gradient method for optimization. It is well known that exact line searches along each steepest descent direction may converge very slowly. An important result was given by Barzilai and Borwein, whose method is proved to be superlinearly convergent for convex quadratics in two-dimensional space, and performs quite well for high dimensional problems. The BB method is not monotone, so it is not easy to generalize to general nonlinear functions unless certain nonmonotone techniques are applied. Therefore, it is very desirable to find stepsize formulae which enable fast convergence and possess the monotone property. Such a stepsize αk for the steepest descent method is suggested in this paper. An algorithm with this new stepsize in even iterations and exact line search in odd iterations is proposed. Numerical results are presented, which confirm that the new method can find the exact solution within three iterations for two-dimensional problems. The new method is very efficient for small scale problems. A modified version of the new method is also presented, where the new technique for selecting the stepsize is used after every two exact line searches. The modified algorithm is comparable to the Barzilai-Borwein method for large scale problems and better for small scale problems.
Keywords: steepest descent; line search; unconstrained optimization; convergence
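The Barzilai-Borwein stepsize used as the baseline above is α_k = sᵀs / sᵀy with s = x_k − x_{k−1} and y = g_k − g_{k−1}. A minimal sketch of gradient descent with this stepsize (the BB comparison method, not the paper's new stepsize, whose formula is not given in the abstract):

```python
def bb_gradient_descent(grad, x0, alpha0=0.1, iters=20):
    """Gradient descent with the Barzilai-Borwein stepsize
    alpha_k = (s's)/(s'y), s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    x = list(x0)
    g = grad(x)
    alpha = alpha0                      # initial step; BB needs one prior step
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        if abs(sy) > 1e-12:             # guard against division by ~0
            alpha = sum(si * si for si in s) / sy
        x, g = x_new, g_new
        if sum(gi * gi for gi in g) < 1e-20:
            break                       # gradient essentially zero
    return x
```

On the ill-conditioned quadratic f(x) = (x₁² + 10x₂²)/2 this reaches the minimizer in a handful of iterations, where fixed-step steepest descent would crawl.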
10. Conjugate Gradient Methods with Armijo-type Line Searches (Cited by: 12)
Author: Yu-Hong Dai (State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics, Academy of Mathematics and System Sciences, Chinese Academy of Sciences, Beijing 100080, China). Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2002, No. 1, pp. 123-130.
Two Armijo-type line searches are proposed in this paper for nonlinear conjugate gradient methods. Under these line searches, global convergence results are established for several famous conjugate gradient methods, including the Fletcher-Reeves method, the Polak-Ribiere-Polyak method, and the conjugate descent method.
Keywords: unconstrained optimization; conjugate gradient method; line search; global convergence
11. A NEW DERIVATIVE FREE OPTIMIZATION METHOD BASED ON CONIC INTERPOLATION MODEL (Cited by: 9)
Authors: 倪勤, 胡书华. Acta Mathematica Scientia (SCIE, CSCD), 2004, No. 2, pp. 281-290.
In this paper, a new derivative free trust region method is developed based on the conic interpolation model for unconstrained optimization. The conic interpolation model is built by means of the quadratic model function, the collinear scaling formula, quadratic approximation and interpolation. All the parameters in this model are determined by the objective function interpolation conditions. A new derivative free method is developed based upon this model, and the global convergence of this new method is proved without any information on the gradient.
Keywords: derivative free optimization method; conic interpolation model; quadratic interpolation model; trust region method; unconstrained optimization
12. Nonmonotone Adaptive Trust Region Algorithms with Indefinite Dogleg Path for Unconstrained Minimization (Cited by: 13)
Authors: 陈俊, 孙文瑜. Northeastern Mathematical Journal (CSCD), 2008, No. 1, pp. 19-30.
In this paper, we combine the nonmonotone and adaptive techniques with the trust region method for unconstrained minimization problems. We set a new ratio of the actual descent and predicted descent. Then, instead of the monotone sequence, a nonmonotone sequence of function values is employed. With the adaptive technique, the trust region radius Δk can be adjusted automatically to improve the efficiency of trust region methods. By means of the Bunch-Parlett factorization, we construct a method with an indefinite dogleg path for solving the trust region subproblem which can handle an indefinite approximate Hessian Bk. The convergence properties of the algorithm are established. Finally, detailed numerical results are reported to show that our algorithm is efficient.
Keywords: nonmonotone trust region method; adaptive method; indefinite dogleg path; unconstrained minimization; global convergence; superlinear convergence
13. An unconstrained optimization method using nonmonotone second order Goldstein's line search (Cited by: 12)
Authors: Wen-yu Sun (School of Mathematics and Computer Science, Nanjing Normal University, Nanjing 210097, China), Qun-yan Zhou (Department of Basic Courses, Jiangsu Teachers University of Technology, Changzhou 213001, China). Science China Mathematics (SCIE), 2007, No. 10, pp. 1389-1400.
In this paper, an unconstrained optimization method using the nonmonotone second order Goldstein's line search is proposed. By using the negative curvature information from the Hessian, the sequence generated is shown to converge to a stationary point satisfying the second order optimality conditions. Numerical tests on a set of standard test problems confirm the efficiency of our new method.
Keywords: nonmonotone method; direction of negative curvature; line search; descent pair; unconstrained optimization
14. GLOBAL CONVERGENCE PROPERTIES OF THREE-TERM CONJUGATE GRADIENT METHOD WITH NEW-TYPE LINE SEARCH (Cited by: 13)
Authors: Wang Changyu, Du Shouqiang, Chen Yuanyuan. Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2004, No. 3, pp. 412-420.
In this paper, a new Wolfe-type line search and a new Armijo-type line search are proposed, and some global convergence properties of a three-term conjugate gradient method with the two line searches are proved.
Keywords: unconstrained optimization; line search; three-term conjugate gradient method; global convergence
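For reference, the standard weak Wolfe conditions that "Wolfe-type" line searches refine are the sufficient-decrease test f(x+αd) ≤ f(x) + c₁α gᵀd and the curvature test g(x+αd)ᵀd ≥ c₂ gᵀd, with 0 < c₁ < c₂ < 1. The checker below sketches these generic conditions, not this paper's new-type rules:

```python
def satisfies_wolfe(f, grad, x, d, a, c1=1e-4, c2=0.9):
    """Check the standard weak Wolfe conditions at step length a
    for a descent direction d (grad(x)'d < 0)."""
    gd = sum(gi * di for gi, di in zip(grad(x), d))       # g'd at x
    x_new = [xi + a * di for xi, di in zip(x, d)]
    sufficient_decrease = f(x_new) <= f(x) + c1 * a * gd
    gd_new = sum(gi * di for gi, di in zip(grad(x_new), d))
    curvature = gd_new >= c2 * gd                          # slope flattened enough
    return sufficient_decrease and curvature
```

The curvature condition is what rules out uselessly short steps: a tiny α satisfies sufficient decrease but leaves the directional derivative almost unchanged, so the test fails.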
15. Global Convergence of a Modified PRP Conjugate Gradient Method (Cited by: 10)
Authors: Hai Dong Huang, Yan Jun Li, Zeng Xin Wei. Journal of Mathematical Research and Exposition (CSCD), 2010, No. 1, pp. 141-148.
In this paper, a modified formula for β_k^PRP is proposed for the conjugate gradient method for solving unconstrained optimization problems. The value of β_k^PRP stays nonnegative independently of the line search. Under mild conditions, the global convergence of the modified PRP method with the strong Wolfe-Powell line search is established. Preliminary numerical results show that the modified method is efficient.
Keywords: unconstrained optimization; conjugate gradient method; global convergence
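For context, the classical PRP parameter is β_k^PRP = g_{k+1}ᵀ(g_{k+1} − g_k) / ‖g_k‖², which can go negative. One well-known way to enforce nonnegativity, shown below, is the PRP+ truncation max(0, β); this is in the spirit of the paper's modification but is not claimed to be its exact formula:

```python
def beta_prp(g_new, g_old):
    """Classical Polak-Ribiere-Polyak parameter:
    beta = g_new'(g_new - g_old) / ||g_old||^2."""
    num = sum(gn * (gn - go) for gn, go in zip(g_new, g_old))
    den = sum(go * go for go in g_old)
    return num / den

def beta_prp_plus(g_new, g_old):
    """Nonnegative variant (PRP+ truncation): clipping at zero keeps
    beta >= 0 regardless of the line search used."""
    return max(0.0, beta_prp(g_new, g_old))
```

When β is truncated to zero the conjugate gradient direction d = −g + β d_prev degenerates to the plain steepest descent direction, which is what makes restarts automatic.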
16. A class of globally convergent conjugate gradient methods (Cited by: 6)
Authors: 戴彧虹, 袁亚湘. Science China Mathematics (SCIE), 2003, No. 2, pp. 251-261.
Conjugate gradient methods are very important for solving nonlinear optimization problems, especially large scale problems. However, unlike quasi-Newton methods, conjugate gradient methods were usually analyzed individually. In this paper, we propose a class of conjugate gradient methods which can be regarded as a kind of convex combination of the Fletcher-Reeves method and the method proposed by Dai et al. To analyze this class of methods, we introduce some unified tools that concern a general method with the scalar βk having the form φk/φk-1. Consequently, the class of conjugate gradient methods can be analyzed uniformly.
Keywords: unconstrained optimization; conjugate gradient; line search; global convergence
17. Shamanskii-Like Levenberg-Marquardt Method with a New Line Search for Systems of Nonlinear Equations (Cited by: 10)
Authors: Chen Liang, Ma Yanfang. Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2020, No. 5, pp. 1694-1707.
To save calculations of the Jacobian, a multi-step Levenberg-Marquardt method named the Shamanskii-like LM method for systems of nonlinear equations was proposed by Fa. Its convergence properties have been proved using a trust region technique under the local error bound condition. However, the authors wonder whether similar convergence properties still hold with standard line searches, since the direction may not be a descent direction. For this purpose, the authors present a new nonmonotone m-th order Armijo-type line search to guarantee global convergence. Under the same condition as in the trust region case, the convergence rate is also shown to be m+1 using this line search technique. Numerical experiments show the new algorithm can save much running time for large scale problems, so it is efficient and promising.
Keywords: Armijo line search; Levenberg-Marquardt method; local error bound condition; systems of nonlinear equations; unconstrained optimization
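The basic Levenberg-Marquardt step underlying the entry above solves (JᵀJ + λI)d = −JᵀF. In the scalar case this collapses to one division, which keeps the sketch self-contained; this is the plain single-step LM iteration, not the multi-step Shamanskii-like variant or the paper's new line search:

```python
def lm_scalar(F, J, x0, lam=1e-3, iters=50, tol=1e-10):
    """Levenberg-Marquardt iteration for one nonlinear equation F(x) = 0:
    solve (J^2 + lam) d = -J * F and step x <- x + d."""
    x = x0
    for _ in range(iters):
        f, j = F(x), J(x)
        if abs(f) < tol:
            break
        d = -j * f / (j * j + lam)   # damped Newton step; lam > 0 keeps
        x += d                       # the denominator safely nonzero
    return x
```

With λ small the iteration behaves like Newton's method near a root; larger λ shortens the step, trading speed for robustness far from the solution.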
18. AN ADAPTIVE NONMONOTONIC TRUST REGION METHOD WITH CURVILINEAR SEARCHES (Cited by: 7)
Authors: Qun-yan Zhou, Wen-yu Sun. Journal of Computational Mathematics (SCIE, CSCD), 2006, No. 6, pp. 761-770.
In this paper, an algorithm for unconstrained optimization that employs both trust region techniques and curvilinear searches is proposed. At every iteration, we solve the trust region subproblem, whose radius is generated adaptively, only once. Nonmonotonic backtracking curvilinear searches are performed when the solution of the subproblem is unacceptable. The global convergence and fast local convergence rate of the proposed algorithms are established under some reasonable conditions. The results of numerical experiments are reported to show the effectiveness of the proposed algorithms.
Keywords: unconstrained optimization; preconditioned gradient path; trust region method; curvilinear search
19. Global Convergence of a Modified Spectral CD Conjugate Gradient Method (Cited by: 7)
Authors: Wei Cao, Kai Rong Wang, Yi Li Wang. Journal of Mathematical Research and Exposition (CSCD), 2011, No. 2, pp. 261-268.
In this paper, we present a new nonlinear modified spectral CD conjugate gradient method for solving large scale unconstrained optimization problems. The direction generated by the method is a descent direction for the objective function, and this property depends neither on the line search rule nor on the convexity of the objective function. Moreover, the modified method reduces to the standard CD method if the line search is exact. Under some mild conditions, we prove that the modified method with line search is globally convergent even if the objective function is nonconvex. Preliminary numerical results show that the proposed method is very promising.
Keywords: unconstrained optimization; conjugate gradient method; Armijo-type line search; global convergence
20. A New Subdivision Algorithm for the Bernstein Polynomial Approach to Global Optimization (Cited by: 6)
Authors: P. S. V. Nataraj, M. Arounassalame. International Journal of Automation and Computing (EI), 2007, No. 4, pp. 342-352.
In this paper, an improved algorithm is proposed for unconstrained global optimization to tackle non-convex nonlinear multivariate polynomial programming problems. The proposed algorithm is based on the Bernstein polynomial approach. Novel features of the proposed algorithm are that it uses a new rule for the selection of the subdivision point, modified rules for the selection of the subdivision direction, and a new acceleration device to avoid some unnecessary subdivisions. The performance of the proposed algorithm is numerically tested on a collection of 16 test problems. The results of the tests show the proposed algorithm to be superior to the existing Bernstein algorithm in terms of the chosen performance metrics.
Keywords: Bernstein polynomials; global optimization; nonlinear optimization; polynomial optimization; unconstrained optimization
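The Bernstein approach rests on the range-enclosure property: on [0,1], a polynomial's values lie between the minimum and maximum of its Bernstein coefficients, and subdivision tightens this enclosure. Below is a sketch of the standard power-basis-to-Bernstein conversion b_i = Σ_{j≤i} [C(i,j)/C(n,j)] a_j (a textbook formula, not code from the paper; univariate only, whereas the paper handles the multivariate case):

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients on [0,1] of p(x) = sum a[j] * x**j:
    b_i = sum_{j<=i} C(i,j)/C(n,j) * a[j].  min(b) and max(b) bound
    the range of p over [0,1]."""
    n = len(a) - 1
    return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
            for i in range(n + 1)]
```

For p(x) = x² − x the coefficients are [0, −0.5, 0], so the enclosure [−0.5, 0] contains the true range [−0.25, 0]; subdividing [0,1] and recomputing on each half would narrow the bound.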