
Accelerating large partial EVD/SVD calculations by filtered block Davidson methods

Abstract  Partial eigenvalue decomposition (PEVD) and partial singular value decomposition (PSVD) of large sparse matrices are of fundamental importance in a wide range of applications, including latent semantic indexing, spectral clustering, and kernel methods for machine learning. The problems become more challenging when a large number of eigenpairs or singular triplets need to be computed. We develop practical and efficient algorithms for these challenging problems. Our algorithms are based on a filter-accelerated block Davidson method. Two types of filters are utilized: one is Chebyshev polynomial filtering; the other is rational-function filtering by solving linear equations. The former exploits the fastest growth of the Chebyshev polynomial among polynomials of the same degree; the latter employs the traditional idea of shift-invert, for which we address the important issue of automatic choice of shifts and propose a practical method for solving the shifted linear equations inside the block Davidson method. Both filters can efficiently generate high-quality basis vectors to augment the projection subspace at each Davidson iteration step, which allows a restart scheme using an active projection subspace of small dimension. This makes our algorithms memory-economical and thus practical for large PEVD/PSVD calculations. We compare our algorithms with representative methods, including ARPACK, PROPACK, the randomized SVD method, and the limited-memory SVD method. Extensive numerical tests on representative datasets demonstrate that, in general, our methods have similar or faster convergence speed in terms of CPU time while requiring much lower memory than the other methods. The much lower memory requirement makes our methods more practical for large-scale PEVD/PSVD computations.
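To illustrate the first filter type the abstract describes: a degree-k Chebyshev polynomial T_k, applied through the three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x) to a matrix whose unwanted spectral interval [a, b] has been mapped to [-1, 1], damps eigen-components inside [a, b] while growing fastest (among degree-k polynomials) outside it. The following is a minimal NumPy sketch of this standard technique, not the authors' code; the function name, interval choice, and demo matrix are all illustrative assumptions.

```python
import numpy as np

def chebyshev_filter(A, X, degree, a, b):
    """Apply the degree-k Chebyshev polynomial T_k(L(A)) to the block X,
    where L(t) = (t - c)/e maps the unwanted interval [a, b] to [-1, 1].
    Eigen-components with eigenvalues inside [a, b] are damped (|T_k| <= 1
    there); components outside [a, b] are strongly amplified."""
    e = (b - a) / 2.0  # half-width of the damped interval
    c = (b + a) / 2.0  # center of the damped interval
    # T_0(L(A)) X = X  and  T_1(L(A)) X = L(A) X
    Y_prev = X
    Y = (A @ X - c * X) / e
    # Three-term recurrence: T_{k+1} = 2 L(A) T_k - T_{k-1}
    for _ in range(2, degree + 1):
        Y_next = 2.0 * (A @ Y - c * Y) / e - Y_prev
        Y_prev, Y = Y, Y_next
    return Y

# Tiny demo: damp the interval [0, 4] and amplify the eigenvalue 5.
rng = np.random.default_rng(0)
A = np.diag([0.5, 1.0, 2.0, 3.0, 5.0])
X = rng.standard_normal((5, 1))
Y = chebyshev_filter(A, X, degree=8, a=0.0, b=4.0)
Y /= np.linalg.norm(Y)
# The filtered block is now strongly aligned with the eigenvector for
# the exterior eigenvalue 5 (the 5th coordinate axis of this diagonal A).
print(abs(Y[4, 0]))
```

Inside a block Davidson iteration, such filtered vectors (rather than raw residuals) would be orthonormalized against the current basis and used to augment the projection subspace, which is what permits the small active subspace and low memory footprint the abstract claims.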
Source  《Science China Mathematics》 (中国科学:数学(英文版)), SCIE, CSCD, 2016, Issue 8, pp. 1635-1662 (28 pages)
Funding  Supported by the National Science Foundation of USA (Grant Nos. DMS-1228271 and DMS-1522587), the National Natural Science Foundation of China for Creative Research Groups (Grant No. 11321061), the National Basic Research Program of China (Grant No. 2011CB309703), and the National Center for Mathematics and Interdisciplinary Sciences, Chinese Academy of Sciences
Keywords  partial EVD/SVD, polynomial filter, rational filter, kernel, graph