The refined Arnoldi method proposed by Jia is used for computing some eigenpairs of large matrices. In contrast to the Arnoldi method, the fundamental difference is that the refined method seeks certain refined Ritz vectors, which are different from the Ritz vectors obtained by the Arnoldi method, from a projection space with minimal residuals to approximate the desired eigenvectors. In comparison with the Ritz vectors, the refined Ritz vectors are guaranteed to converge theoretically and can converge much faster numerically. In this paper we propose to replace the Ritz values, obtained by the Arnoldi method with respect to a Krylov subspace, with those obtained with respect to the subspace spanned by the refined Ritz vectors. We discuss how to compute these new approximations cheaply and reliably. Theoretical error bounds between the original Ritz values and the new Ritz values are established. Finally, we present a variant of the refined Arnoldi algorithm for an augmented Krylov subspace and discuss the restarting issue. Numerical results confirm the efficiency of the new algorithm.
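The core computation behind refined Ritz vectors can be sketched as follows. This is an illustrative implementation, not the paper's code; the names `arnoldi` and `refined_ritz` are chosen here. It uses the standard identity A V_m = V_{m+1} Hbar_m, so minimizing the residual over the Krylov subspace reduces to a small SVD of the shifted extended Hessenberg matrix.

```python
import numpy as np

def arnoldi(A, v0, m):
    """Arnoldi process with modified Gram-Schmidt: returns V (n x (m+1))
    with orthonormal columns and Hbar ((m+1) x m) such that
    A @ V[:, :m] = V @ Hbar."""
    n = A.shape[0]
    V = np.zeros((n, m + 1))
    Hbar = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            Hbar[i, j] = V[:, i] @ w
            w = w - Hbar[i, j] * V[:, i]
        Hbar[j + 1, j] = np.linalg.norm(w)   # assumes no breakdown
        V[:, j + 1] = w / Hbar[j + 1, j]
    return V, Hbar

def refined_ritz(A, v0, m):
    """For each Ritz value theta of H_m = Hbar[:m, :], the refined Ritz
    vector is V_m z, where z is the right singular vector belonging to the
    smallest singular value of (Hbar - theta * Ibar); the residual norm
    ||(A - theta I) V_m z|| equals that smallest singular value."""
    V, Hbar = arnoldi(A, v0, m)
    thetas = np.linalg.eigvals(Hbar[:m, :])
    Ibar = np.eye(m + 1, m)                  # (m+1) x m "identity"
    vecs, resids = [], []
    for theta in thetas:
        _, s, Wt = np.linalg.svd(Hbar - theta * Ibar)
        z = Wt[-1].conj()                    # minimizer of ||(Hbar - theta*Ibar) z||
        vecs.append(V[:, :m] @ z)
        resids.append(s[-1])
    return thetas, vecs, resids
```

Because each SVD involves only an (m+1) x m matrix, the refinement costs O(m^3) per Ritz value, which is negligible next to the Arnoldi process itself when m is small.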
For classical orthogonal projection methods for large matrix eigenproblems, it may be much more difficult for a Ritz vector to converge than for its corresponding Ritz value when the matrix in question is non-Hermitian. To this end, a class of new refined orthogonal projection methods has been proposed. It is proved that, in some sense, each refined method is a composite of two classical orthogonal projections, in which each refined approximate eigenvector is obtained by a new orthogonal projection of some Hermitian semipositive definite matrix onto the same subspace. A priori error bounds on the refined approximate eigenvector are established in terms of the sine of the acute angle between the normalized eigenvector and the subspace involved. It is shown that the sufficient conditions for convergence of the refined approximate eigenvector and of the Ritz value are the same, so that the refined methods may be much more efficient than the classical ones.
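One way to see where the Hermitian semipositive definite matrix enters (a standard observation, stated here for orientation rather than quoted from the paper): given an approximate eigenvalue $\tilde{\theta}$ and a projection subspace $\mathcal{W}$, the refined approximate eigenvector is

```latex
\[
  \hat{u} \;=\; \operatorname*{arg\,min}_{u \in \mathcal{W},\; \|u\|=1}
  \bigl\| (A - \tilde{\theta} I)\, u \bigr\| .
\]
```

Since $\|(A - \tilde{\theta} I)u\|^2 = u^{H} (A - \tilde{\theta} I)^{H} (A - \tilde{\theta} I)\, u$, the minimizer $\hat{u}$ is precisely the Rayleigh–Ritz approximation, over the same subspace $\mathcal{W}$, to the eigenvector of the Hermitian semipositive definite matrix $(A - \tilde{\theta} I)^{H}(A - \tilde{\theta} I)$ belonging to its smallest eigenvalue, which is the sense in which the refined method composes two orthogonal projections.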
A large unsymmetric linear system problem is transformed into the problem of computing the eigenvector of a large symmetric nonnegative definite matrix associated with the eigenvalue zero, i.e., the computation of the eigenvector of the cross-product matrix of an augmented matrix associated with the eigenvalue zero. The standard Lanczos method and an improved refined Lanczos method are proposed that compute approximate eigenvectors and return approximate solutions of the linear system. An implicitly restarted Lanczos algorithm and its refined version are developed. Theoretical analysis and numerical experiments show that the refined method is better than the standard one. If the large matrix has small eigenvalues, the two new algorithms are much faster than unpreconditioned restarted GMRES.
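The transformation itself is easy to demonstrate. The following is a toy dense-matrix illustration of the equivalence only, not the Lanczos algorithm of the paper: if Ax = b, then [x; -1] lies in the null space of the augmented matrix M = [A, b], and hence is an eigenvector of the symmetric nonnegative definite cross-product matrix C = M^T M for the eigenvalue zero.

```python
import numpy as np

np.random.seed(1)
n = 6
A = np.random.rand(n, n) + n * np.eye(n)   # well-conditioned test matrix
x_true = np.random.rand(n)
b = A @ x_true

M = np.hstack([A, b[:, None]])             # augmented matrix [A, b]
C = M.T @ M                                # symmetric nonnegative definite

# M @ [x_true; -1] = A x_true - b = 0, so [x_true; -1] spans the
# null space of M and is an eigenvector of C for the eigenvalue zero.
w, U = np.linalg.eigh(C)                   # eigenvalues in ascending order
v = U[:, 0]                                # eigenvector of the smallest eigenvalue
x_rec = -v[:n] / v[n]                      # rescale so the last component is -1
```

In the paper's setting one would of course never form C explicitly; a Lanczos process only needs products with M and M^T. The toy example merely shows why an eigenvector for the eigenvalue zero encodes the solution of the linear system.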
The approximate eigenvectors, or Ritz vectors, obtained by the block Arnoldi method may converge very slowly and may even fail to converge, even if the approximate eigenvalues do. In order to improve the quality of the Ritz vectors, a modified strategy is proposed in which the new approximate eigenvectors are certain combinations of the Ritz vectors and the wasted (m+1)th block basis vector, and their corresponding residual norms are minimized in a certain sense. They can be cheaply computed by solving a few small-dimensional minimization problems. The resulting modified m-step block Arnoldi method is better than the standard m-step one in theory and cheaper than the standard (m+1)-step one. Based on this strategy, a modified m-step iterative block Arnoldi algorithm is presented. Numerical experiments are reported to show that the modified m-step algorithm is often considerably more efficient than the standard (m+1)-step iterative one.
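The small minimization problems have the following generic shape (an illustrative routine under assumed notation, not the authors' implementation): given an approximate eigenvalue theta and a matrix Q whose columns collect the candidate vectors (a Ritz vector together with the otherwise wasted extra basis vector), the unit-norm combination with the smallest residual norm comes from an SVD with as many columns as there are candidates.

```python
import numpy as np

def best_combination(A, theta, Q):
    """Among unit-norm combinations u = Q c of the candidate vectors in
    the columns of Q, return the one minimizing ||(A - theta I) u||,
    together with that minimal residual norm."""
    Q, _ = np.linalg.qr(Q)                # orthonormalize so ||Q c|| = ||c||
    R = A @ Q - theta * Q                 # residual matrix (A - theta I) Q
    _, s, Wt = np.linalg.svd(R, full_matrices=False)
    c = Wt[-1].conj()                     # right singular vector of sigma_min
    return Q @ c, s[-1]                   # improved vector, residual norm
```

Since Q has only a handful of columns, the QR factorization and SVD here are tiny; the dominant cost is the single product A @ Q, which reuses matrix-vector products the iteration needs anyway.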
Funding: Project supported by the China State Major Key Projects for Basic Research, the National Natural Science Foundation of China (Grant No. 19571014), the Doctoral Program (97014113), and the Foundation for Excellent Young Scholars of the Ministry of Education.