J Am Stat Assoc. 2011;106(495):1125-1138.

SparseNet: Coordinate Descent With Nonconvex Penalties.

Mazumder R(1), Friedman JH(2), Hastie T(3).

Author information

1. Ph.D. Student, Department of Statistics, Stanford University, Stanford, CA 94305.
2. Professor Emeritus, Department of Statistics, Stanford University, Stanford, CA 94305.
3. Professor, Departments of Statistics and Health Research and Policy, Stanford University, Stanford, CA 94305.

Abstract

We address the problem of sparse selection in linear models. A number of nonconvex penalties have been proposed in the literature for this purpose, along with a variety of convex-relaxation algorithms for finding good solutions. In this article we pursue a coordinate-descent approach for optimization, and study its convergence properties. We characterize the properties of penalties suitable for this approach, study their corresponding threshold functions, and describe a df-standardizing reparametrization that assists our pathwise algorithm. The MC+ penalty is ideally suited to this task, and we use it to demonstrate the performance of our algorithm. Certain technical derivations and experiments related to this article are included in the Supplementary Materials section.
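As a rough illustration of the approach the abstract describes (a sketch, not the authors' SparseNet implementation), the coordinate-wise update under the MC+ penalty reduces to a "firm" thresholding rule that interpolates between soft thresholding (as gamma goes to infinity, the LASSO limit) and hard thresholding (as gamma approaches 1). The function names below and the assumption of standardized predictors are ours:

```python
import numpy as np

def mcplus_threshold(z, lam, gamma):
    """Firm-thresholding operator induced by the MC+ penalty (requires gamma > 1).

    Returns 0 for |z| <= lam, a linearly rescaled soft-threshold value for
    lam < |z| <= gamma * lam, and z unchanged (no shrinkage) beyond that.
    """
    az = abs(z)
    if az <= lam:
        return 0.0
    if az <= gamma * lam:
        return np.sign(z) * (az - lam) / (1.0 - 1.0 / gamma)
    return float(z)

def coordinate_descent(X, y, lam, gamma, n_iter=100):
    """Cyclic coordinate descent for MC+-penalized least squares.

    Assumes columns of X are standardized: mean zero and x_j' x_j / n = 1,
    so each coordinate update is a single threshold evaluation.
    """
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta  # residual, kept up to date in place
    for _ in range(n_iter):
        for j in range(p):
            # univariate least-squares coefficient for coordinate j,
            # holding all other coordinates fixed
            zj = beta[j] + X[:, j] @ r / n
            bj = mcplus_threshold(zj, lam, gamma)
            r += X[:, j] * (beta[j] - bj)  # adjust residual for the change
            beta[j] = bj
    return beta
```

At fixed (lam, gamma) each update solves the one-dimensional penalized problem exactly, which is what makes penalties with well-behaved threshold functions attractive for this pathwise strategy.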

KEYWORDS:

Degrees of freedom; LASSO; Nonconvex optimization; Regularization surface; Sparse regression; Variable selection
