Funding: This work is supported by the Chinese NSF grant 60475042, Guangxi NSF grant 0542043, and the Foundation of the Advanced Research Center of Zhongshan University and Hong Kong.
Abstract: In this paper, we propose a globally convergent Polak-Ribière-Polyak (PRP) conjugate gradient method for nonconvex minimization of differentiable functions by employing an Armijo-type line search which is simpler and less demanding than those defined in [4,10]. An attractive property of this method is that the initial stepsize can be chosen as the one-dimensional minimizer of the quadratic model $\Phi(t) := f(x_k) + t\, g_k^T d_k + \tfrac{1}{2} t^2 d_k^T Q_k d_k$, where $Q_k$ is a positive definite matrix that carries some second-order information of the objective function $f$. This line search may therefore make the stepsize $t_k$ more easily accepted. Preliminary numerical results show that this method is efficient.
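Since $Q_k$ is positive definite, the quadratic model above has a unique one-dimensional minimizer, which gives the initial stepsize in closed form (a worked consequence of the formula quoted above, not an additional result of the paper):
\[
\Phi'(t) = g_k^T d_k + t\, d_k^T Q_k d_k = 0
\quad\Longrightarrow\quad
t_k^{(0)} = -\frac{g_k^T d_k}{d_k^T Q_k d_k},
\]
which is positive whenever $d_k$ is a descent direction, i.e. $g_k^T d_k < 0$.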
Funding: Project supported by the National Natural Science Foundation of China (Nos. 69982003 and 60074005).
Abstract: Without assuming boundedness, strict monotonicity, or differentiability of the activation functions, the authors use the Lyapunov functional method to analyze the global convergence of some delayed models. For the Hopfield neural network with time delays, a new sufficient condition ensuring the existence, uniqueness, and global exponential stability of the equilibrium point is derived. This criterion, which concerns the signs of the entries in the connection matrix, imposes constraints on the feedback matrix independently of the delay parameters. From a new viewpoint, the bidirectional associative memory neural network with time delays is investigated and a new global exponential stability result is given.
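For context, a commonly studied form of the delayed Hopfield model is sketched below; the notation is illustrative and need not match the system analyzed in the paper:
\[
\dot{x}_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\tau_{ij})) + I_i,
\qquad i = 1,\dots,n,
\]
where $c_i > 0$, $(a_{ij})$ is the connection matrix, $(b_{ij})$ the delayed feedback matrix, $f_j$ the activation functions, $\tau_{ij} \ge 0$ the delays, and $I_i$ constant inputs. Delay-independent criteria of the kind described above constrain the entries of these matrices without reference to the $\tau_{ij}$.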
Funding: This research is supported in part by the National Natural Science Foundation of China (Grant Nos. 39830070 and 10171055) and the China Postdoctoral Science Foundation.
Abstract: In this paper, a trust region method for equality constrained optimization based on a nondifferentiable exact penalty is proposed. In this algorithm, the computation of the normal component of the trial step is separated from the computation of its tangential component; that is, only the tangential component of the trial step is constrained by the trust radius, while the normal component and the trial step itself are unconstrained. The other main characteristic of the algorithm is the choice of the trust region radius, which uses information from the gradient of the objective function and the reduced Hessian. However, the Maratos effect may occur when the nondifferentiable exact penalty function is used as the merit function. In order to obtain superlinear convergence of the algorithm, a second-order correction technique is used. Because of the special structure of the adaptive trust region method, the second-order correction is applied only when $p = 0$ (defined as in Section 2), which differs from traditional trust region methods for equality constrained optimization; the computational cost of the algorithm is thereby reduced. Moreover, the algorithm is proved to be globally and superlinearly convergent.
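As a hedged sketch of this type of method (the paper's own subproblems and constants from Section 2 are not reproduced here), a standard nondifferentiable exact penalty used as a merit function is the $\ell_1$ penalty
\[
\phi(x;\sigma) = f(x) + \sigma \|c(x)\|_1,
\]
and the trial step is split as $d_k = v_k + h_k$, with a normal component $v_k$ aimed at reducing the linearized constraint violation $\|c(x_k) + c'(x_k) d\|$ and a tangential component $h_k$ lying in the null space of $c'(x_k)$; in the method described above, only $\|h_k\| \le \Delta_k$ is imposed by the trust radius.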
Funding: This research is supported in part by the National Natural Science Foundation of China (Grant No. 39830070).
Abstract: A robust SQP method, analogous to Facchinei's algorithm, is introduced. The algorithm is globally convergent. It uses automatic rules for choosing the penalty parameter and can efficiently cope with the possible inconsistency of the quadratic search subproblem. In addition, the algorithm employs a differentiable approximate exact penalty function as its merit function. Unlike the merit function in Facchinei's algorithm, which is quite complicated and not easy to implement in practice, this new merit function is very simple. As a result, Facchinei's idea can be used to construct an algorithm that is easy to implement in practice.
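For reference, the standard SQP quadratic search subproblem at an iterate $x_k$ (the paper's specific way of relaxing it and its merit function are not reproduced here) is
\[
\min_{d}\ \nabla f(x_k)^T d + \tfrac{1}{2}\, d^T H_k d
\quad \text{subject to} \quad
c_E(x_k) + \nabla c_E(x_k)^T d = 0, \qquad
c_I(x_k) + \nabla c_I(x_k)^T d \le 0,
\]
where $H_k$ approximates the Hessian of the Lagrangian. Far from a solution the linearized constraints may admit no feasible $d$; this is the possible inconsistency referred to above.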
Abstract: Two Armijo-type line searches are proposed in this paper for nonlinear conjugate gradient methods. The two Armijo-type line searches are shown to guarantee the global convergence of the DY method for the unconstrained minimization of nonconvex differentiable functions. Further, if the function is strictly convex, the two Armijo-type line searches, together with a third Armijo-type line search, are also shown to guarantee the convergence of the DY method.
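A minimal sketch of the DY method with a generic backtracking Armijo line search follows; the two specific line searches proposed in the paper are not reproduced here, and the function name and parameter values are illustrative assumptions.

```python
import numpy as np

def dy_conjugate_gradient(f, grad, x0, delta=1e-4, rho=0.5, s=1.0,
                          tol=1e-6, max_iter=1000):
    """Dai-Yuan CG with a simple backtracking Armijo line search (illustrative)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # first search direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        gtd = g.dot(d)
        if gtd >= 0:                          # safeguard: restart if d is not a descent direction
            d = -g
            gtd = g.dot(d)
        # Armijo backtracking: largest t in {s, s*rho, s*rho^2, ...} with
        # f(x + t d) <= f(x) + delta * t * g^T d.
        t, fx = s, f(x)
        while f(x + t * d) > fx + delta * t * gtd:
            t *= rho
        x_new = x + t * d
        g_new = grad(x_new)
        # Dai-Yuan coefficient: beta_k = ||g_{k+1}||^2 / (d_k^T (g_{k+1} - g_k)).
        beta = g_new.dot(g_new) / d.dot(g_new - g)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize a simple convex quadratic from a nonzero starting point.
x_star = dy_conjugate_gradient(lambda x: x.dot(x), lambda x: 2.0 * x, np.ones(5))
```

The only DY-specific ingredient is the formula for `beta`; swapping in the paper's Armijo-type rules would change only the backtracking loop.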
Funding: The work is supported by the National Natural Science Foundation of China under Grant No. 10571106.
Abstract: In this paper, the authors propose a class of Dai-Yuan (abbr. DY) conjugate gradient methods with line search in the presence of perturbations, for general functions and uniformly convex functions respectively. The iterate formula is $x_{k+1} = x_k + \alpha_k (s_k + \omega_k)$, where the main direction $s_k$ is obtained by the DY conjugate gradient method, $\omega_k$ is a perturbation term, and the stepsize $\alpha_k$ is determined by a line search that does not necessarily tend to zero in the limit. The authors prove the global convergence of these methods under mild conditions. Preliminary computational experience is also reported.
Funding: This work is supported by the National Natural Science Foundation under Grant Nos. 10571106 and 10471159.
Abstract: For unconstrained optimization, a new hybrid projection algorithm is presented in the paper. This algorithm has some attractive convergence properties. Convergence theory can be obtained under the condition that $\nabla f(x)$ is uniformly continuous. If $f$ is continuously differentiable and pseudo-convex, the whole sequence of iterates converges to a solution of the problem without any other assumptions. Furthermore, under appropriate conditions it is shown that the sequence of iterates has a cluster point if and only if $\Omega^* \neq \emptyset$. Numerical examples are given at the end of this paper.