Funding: the National Key R&D Program of China (No. 2021YFA1000403); the National Natural Science Foundation of China (Nos. 11731013, 12101334 and U19B2040); the Natural Science Foundation of Tianjin (No. 21JCQNJC00030); and the Fundamental Research Funds for the Central Universities.
Abstract: Numerous intriguing optimization problems arise from the advancement of machine learning. Stochastic first-order methods are the predominant choice for these problems due to their high efficiency. However, noisy gradient estimates and the high nonlinearity of the loss function result in slow convergence. Second-order algorithms have well-known advantages in dealing with highly nonlinear and ill-conditioned problems. This paper reviews recent developments in stochastic variants of quasi-Newton methods, which construct Hessian approximations using only gradient information. We concentrate on BFGS-based methods in stochastic settings and highlight the algorithmic improvements that enable these algorithms to work in various scenarios. Future research on stochastic quasi-Newton methods should focus on enhancing their applicability, lowering computational and storage costs, and improving convergence rates.
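To make the surveyed family concrete, below is a minimal sketch of an online L-BFGS loop in the spirit of the stochastic quasi-Newton methods this review covers, not a reproduction of any specific algorithm from it. The callback `grad_fn` (returning a mini-batch gradient), the step size, batch size, and memory length are illustrative assumptions; the one device taken from the literature is forming the curvature pair from gradients at consecutive iterates on the same mini-batch.

```python
import numpy as np


def two_loop(grad, s_list, y_list):
    """Apply the L-BFGS inverse-Hessian approximation, built from the
    stored curvature pairs (s, y), to the gradient vector."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:                               # scale by H0 = (s'y / y'y) I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q


def stochastic_lbfgs(grad_fn, x0, n_samples, lr=0.1, memory=10,
                     batch=32, iters=200, seed=0):
    """Sketch of an online L-BFGS loop.  The pair (s, y) is computed
    from gradients at consecutive iterates on the SAME mini-batch, so
    y reflects curvature rather than sampling noise."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    s_list, y_list = [], []
    for _ in range(iters):
        idx = rng.choice(n_samples, size=batch, replace=False)
        g = grad_fn(x, idx)
        x_new = x - lr * two_loop(g, s_list, y_list)
        s = x_new - x
        y = grad_fn(x_new, idx) - g          # same-batch difference
        if s @ y > 1e-10:                    # keep the update positive definite
            s_list.append(s); y_list.append(y)
            if len(s_list) > memory:         # limited memory: drop oldest pair
                s_list.pop(0); y_list.pop(0)
        x = x_new
    return x
```

With empty memory the loop reduces to plain mini-batch gradient descent, which is why the skipped-pair safeguard costs nothing early on.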
Abstract: In this paper, we establish a class of sparse update algorithms based on matrix triangular factorizations for solving systems of sparse equations. The local Q-superlinear convergence of the algorithm is proved without introducing an m-step refactorization. We compare the numerical results of the new algorithm with those of known algorithms; the comparison shows that the new algorithm is satisfactory.
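For context, here is the classical dense Broyden rank-one update for F(x) = 0 that such methods refine; the paper's actual contribution, updating the triangular factors directly so that sparsity is preserved without an m-step refactorization, is not reproduced here. `F`, `x0`, and the tolerances are illustrative.

```python
import numpy as np


def broyden(F, x0, iters=50, tol=1e-10):
    """Classical (dense) Broyden's method for F(x) = 0.  Sparse
    factorized variants update L and U factors of the Jacobian
    approximation instead, avoiding the dense solve below."""
    x = x0.copy()
    B = np.eye(len(x))                 # Jacobian approximation
    Fx = F(x)
    for _ in range(iters):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)    # quasi-Newton step
        x = x + s
        F_new = F(x)
        y = F_new - Fx
        # rank-one "good Broyden" update: enforces the secant
        # condition B s = y with least change in Frobenius norm
        B += np.outer(y - B @ s, s) / (s @ s)
        Fx = F_new
    return x
```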
Abstract: This paper gives a class of descent methods for solving nonlinear least squares problems. A class of updating formulae is obtained using generalized inverse matrices. These formulae generate an approximation to the second part of the Hessian matrix of the objective function and are updated in such a way that the resulting approximation to the whole Hessian matrix belongs to the convex class of Broyden-like updating formulae. It is proved that the proposed updating formulae are invariant under linear transformations and that the class of factorized quasi-Newton methods is locally and superlinearly convergent. Numerical results are presented and show that the proposed methods are promising.
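The "second part of the Hessian" refers to the standard split for nonlinear least squares, shown below; the first term is available from first derivatives alone, so the quasi-Newton update only needs to approximate the residual-curvature sum.

```latex
% For f(x) = \tfrac{1}{2}\|r(x)\|^2 with residuals r and Jacobian J:
\nabla^2 f(x) \;=\; J(x)^{\mathsf T} J(x)
  \;+\; \sum_{i=1}^{m} r_i(x)\, \nabla^2 r_i(x)
```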
Abstract: The convergence of quasi-Newton methods for unconstrained optimization has attracted much attention. Powell proved a global convergence result for the BFGS algorithm using an inexact line search that satisfies the Wolfe conditions. Byrd, Nocedal and Yuan extended this result to the convex Broyden class of quasi-Newton methods except for the DFP method. However, the global convergence of the DFP method, the first quasi-Newton method, under the same line search strategy is still an open question (see ref. [2]).
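For reference, the two ingredients named here are standard. The Wolfe conditions on the step length and the convex Broyden class are:

```latex
% Wolfe conditions along direction d_k, with 0 < c_1 < c_2 < 1:
f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k \nabla f(x_k)^{\mathsf T} d_k,
\qquad
\nabla f(x_k + \alpha_k d_k)^{\mathsf T} d_k \ge c_2 \nabla f(x_k)^{\mathsf T} d_k.

% Convex Broyden class of Hessian updates:
B_{k+1}^{\phi} \;=\; (1-\phi)\, B_{k+1}^{\mathrm{BFGS}}
  \;+\; \phi\, B_{k+1}^{\mathrm{DFP}}, \qquad \phi \in [0,1]
```

The cited global convergence results cover $\phi \in [0,1)$; the excluded endpoint $\phi = 1$ is exactly the DFP method, which is why its global convergence remains open.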
Abstract: This paper deals with a discontinuous dual reciprocity boundary element method for solving an inverse source problem. The aim of this work is to determine the source term in elliptic equations for nonhomogeneous anisotropic media, where some additional boundary measurements are required. An equivalent formulation of the primary inverse problem is established based on the minimization of a cost functional, where a regularization term is employed to suppress the oscillations caused by noisy data. Moreover, an efficient algorithm is presented and tested on several numerical examples.
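The abstract does not give the exact functional, but a typical Tikhonov-regularized output least-squares form for recovering a source $f$ from noisy boundary data $g^{\delta}$ looks as follows; the notation here is illustrative, not the paper's.

```latex
% Illustrative regularized cost functional (assumed form, not
% taken from the paper):
J_{\lambda}(f) \;=\; \tfrac{1}{2}\,\bigl\| u(f) - g^{\delta} \bigr\|_{L^2(\Gamma)}^{2}
  \;+\; \tfrac{\lambda}{2}\,\| f \|^{2}
```

Here $u(f)$ is the boundary trace of the BEM solution driven by source $f$, and $\lambda > 0$ trades data fit against the smoothness that damps noise-induced oscillations.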