Funding: This research was partially supported by the National 973 Project of China (No. 2013CB834205). Zhiqiang Xu was supported by the National Natural Science Foundation of China (Nos. 11171336, 11331012, 11021101) and the National Basic Research Program of China (973 Program, No. 2010CB832702).
Abstract: Inspired by an interesting idea of Cai and Zhang, we formulate and prove a convex k-sparse decomposition of vectors that is invariant with respect to the ℓ_1 norm. This result fits well into the discussion of compressed sensing problems under the restricted isometry property (RIP), but we believe it is also of independent interest. As an application, a simple derivation of the RIP recovery condition δ_k + θ_{k,k} < 1 is presented.
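For orientation, the decomposition announced in this abstract can be written out explicitly. The formulation below is a hedged reconstruction from the abstract's wording alone (the paper's exact hypotheses may differ): a vector v whose entries are not too concentrated admits a representation as a convex combination of k-sparse vectors, each carrying the same ℓ_1 norm as v.

```latex
% Reconstructed statement (assumptions inferred from the abstract, not the paper's exact theorem):
% if  \|v\|_\infty \le \|v\|_1 / k,  then
v = \sum_{i=1}^{N} \lambda_i u_i, \qquad
\lambda_i \ge 0, \quad \sum_{i=1}^{N} \lambda_i = 1, \qquad
\|u_i\|_0 \le k, \quad \|u_i\|_1 = \|v\|_1 .
```

The phrase "invariant with respect to the ℓ_1 norm" presumably refers to the constraint that every k-sparse piece u_i has exactly the same ℓ_1 norm as v itself.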
Funding: Supported by NSFC grants (Nos. 12001026, 12071019), the National Science Fund for Distinguished Young Scholars (No. 12025108), the Beijing Natural Science Foundation (No. Z180002), and NSFC grants (Nos. 12021001, 11688101).
Abstract: Graph sparsification approximates an arbitrary graph by a sparse graph, and is useful in many applications, such as the simplification of social networks, least squares problems, and the numerical solution of symmetric positive definite linear systems. In this paper, inspired by the well-known sparse signal recovery algorithm orthogonal matching pursuit (OMP), we introduce a deterministic, greedy edge selection algorithm, called the universal greedy approach (UGA), for the graph sparsification problem. For a general spectral sparsification problem, e.g., the positive subset selection problem from a set of m vectors in R^n, we propose a nonnegative UGA algorithm which needs O(mn² + n³/ϵ²) time to find a (1+ϵ/β)/(1−ϵ/β)-spectral sparsifier with positive coefficients and sparsity at most ⌈n/ϵ²⌉, where β is the ratio between the smallest and largest lengths of the vectors. The convergence of the nonnegative UGA algorithm is established. For the graph sparsification problem, another UGA algorithm is proposed which, under some mild assumptions, outputs a (1+O(ϵ))/(1−O(ϵ))-spectral sparsifier with ⌈n/ϵ²⌉ edges in O(m + n²/ϵ²) time from a graph with m edges and n vertices. This is a linear-time algorithm, in terms of the number of edges, of the kind the graph sparsification community has been looking for. The best previous result known to the authors is the existence of a deterministic algorithm that is almost linear, i.e., O(m^{1+o(1)}) with o(1) = O((log log m)^{2/3} / log^{1/3} m). Finally, extensive experimental results, including applications to graph clustering and least squares regression, show the effectiveness of the proposed approaches.
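The UGA described above is inspired by orthogonal matching pursuit. For readers unfamiliar with OMP, here is a minimal NumPy sketch of the classical OMP algorithm for sparse signal recovery; this illustrates only the well-known baseline, not the paper's UGA, and all names are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of A to
    approximate y, re-fitting the coefficients by least squares each step."""
    n = A.shape[1]
    support = []
    residual = y.copy()
    for _ in range(k):
        # select the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit coefficients on the chosen support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# tiny demo: recover a 3-sparse vector from y = A @ x
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((50, 20)))  # orthonormal columns
x = np.zeros(20)
x[[2, 7, 11]] = [3.0, -2.0, 1.5]
x_hat = omp(A, A @ x, 3)  # matches x up to numerical precision
```

The greedy selection step (largest correlation with the residual) followed by a full least-squares re-fit is the pattern the abstract's edge selection algorithm adapts to the spectral sparsification setting.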
Funding: The research of the second author was supported by NSFC grants (Nos. 91630203, 11688101), the Youth Innovation Promotion Association CAS, the Beijing Natural Science Foundation (No. Z180002), and the National Basic Research Program of China (973 Program, No. 2015CB856000).
Abstract: We consider the rank minimization problem from quadratic measurements, i.e., recovering a rank-r matrix X ∈ R^{n×r} from m scalar measurements y_i = a_i^⊤ XX^⊤ a_i, a_i ∈ R^n, i = 1, ..., m. Such a problem arises in a variety of applications, such as quadratic regression and quantum state tomography. We present a novel algorithm, termed the exponential-type gradient descent algorithm, to minimize the non-convex objective function f(U) = (1/(4m)) Σ_{i=1}^m (y_i − a_i^⊤ UU^⊤ a_i)². The algorithm starts with a careful initialization and then refines this initial guess by iteratively applying exponential-type gradient descent steps. In particular, we can obtain a good initial guess of X as long as the number of Gaussian random measurements is O(nr), and our iterative algorithm converges linearly to the true X (up to an orthogonal matrix) with m = O(nr log(cr)) Gaussian random measurements.
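The objective f(U) above is concrete enough to sketch numerically. The following is plain gradient descent on f, not the paper's exponential-type variant (whose update rule is not given in the abstract), and it is initialized near the ground truth as a stand-in for the paper's careful initialization; all names and parameters are illustrative.

```python
import numpy as np

def loss_and_grad(U, A, y):
    """f(U) = (1/(4m)) * sum_i (y_i - a_i^T U U^T a_i)^2 and its gradient,
    where the rows of A are the measurement vectors a_i."""
    Q = A @ U                          # shape (m, r): row i is a_i^T U
    resid = np.sum(Q * Q, axis=1) - y  # a_i^T U U^T a_i - y_i
    m = y.size
    loss = np.sum(resid ** 2) / (4 * m)
    grad = (A.T * resid) @ Q / m       # (1/m) * sum_i resid_i * a_i a_i^T U
    return loss, grad

# demo: recover a planted rank-1 X from quadratic measurements
rng = np.random.default_rng(1)
n, r, m = 4, 1, 50
X = rng.standard_normal((n, r))
A = rng.standard_normal((m, n))
y = np.sum((A @ X) ** 2, axis=1)           # y_i = a_i^T X X^T a_i
U = X + 0.1 * rng.standard_normal((n, r))  # init near truth (stand-in)
losses = []
for _ in range(500):
    loss, grad = loss_and_grad(U, A, y)
    losses.append(loss)
    U -= 0.002 * grad
```

Note that f is minimized only up to the orthogonal ambiguity mentioned in the abstract: f(UQ) = f(U) for any orthogonal Q, so one can at best recover X up to such a factor.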