Abstract: We consider a class of nonsmooth convex optimization problems in which the objective function is the composition of a strongly convex differentiable function with a linear mapping, regularized by the sum of both the ℓ1-norm and the ℓ2-norm of the optimization variables. This class of problems arises naturally in applications of sparse group Lasso, a popular technique for variable selection. An effective approach for solving such problems is the Proximal Gradient Method (PGM). In this paper we prove a local error bound around the optimal solution set for this problem and use it to establish the linear convergence of the PGM without assuming strong convexity of the overall objective function.
Funding: This work was partially supported by the National Natural Science Foundation of China (Nos. 61179033, DMS-1015346).
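To make the problem class concrete, the following is a minimal Python sketch of the PGM applied to a sparse group Lasso instance. It is not the paper's implementation: the quadratic loss f(y) = 0.5*||y - b||^2 (a stand-in strongly convex smooth term), the group structure, the regularization weights lam1 and lam2, and the helper names prox_sparse_group_lasso and pgm are all illustrative assumptions.

import numpy as np

def prox_sparse_group_lasso(v, groups, t1, t2):
    """Prox of t1*||.||_1 + t2*sum_g ||x_g||_2 evaluated at v.

    For this regularizer the prox has a closed form: componentwise
    soft-thresholding (l1 part) followed by blockwise shrinkage
    (group l2 part).
    """
    # l1 soft-thresholding
    z = np.sign(v) * np.maximum(np.abs(v) - t1, 0.0)
    # group soft-thresholding, one block at a time
    for g in groups:
        ng = np.linalg.norm(z[g])
        z[g] = 0.0 if ng == 0.0 else max(0.0, 1.0 - t2 / ng) * z[g]
    return z

def pgm(A, b, groups, lam1, lam2, iters=500):
    """Proximal gradient iterations x+ = prox(x - eta * grad f(Ax))."""
    n = A.shape[1]
    x = np.zeros(n)
    # constant step size 1/L, where L = ||A||_2^2 is the gradient's
    # Lipschitz constant for the quadratic loss 0.5*||Ax - b||^2
    eta = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        grad = A.T @ (A @ x - b)  # gradient of the smooth composite term
        x = prox_sparse_group_lasso(x - eta * grad,
                                    groups, eta * lam1, eta * lam2)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 20))
    x_true = np.zeros(20)
    x_true[:5] = rng.standard_normal(5)  # only the first group is active
    b = A @ x_true + 0.01 * rng.standard_normal(60)
    groups = [slice(0, 5), slice(5, 10), slice(10, 15), slice(15, 20)]
    x_hat = pgm(A, b, groups, lam1=0.1, lam2=0.1)
    print(np.round(x_hat, 3))

The point of the sketch is that the prox of the combined ℓ1 plus group-ℓ2 regularizer separates into soft-thresholding followed by block shrinkage, so each PGM iteration is cheap even though the overall objective is nonsmooth and, in general, not strongly convex in x.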