Comput Stat Data Anal. 2018 Aug;124:235-251. doi: 10.1016/j.csda.2018.03.006. Epub 2018 Mar 28.

A Forward and Backward Stagewise Algorithm for Nonconvex Loss Functions with Adaptive Lasso.

Author information

1. Department of Statistics, Nanjing University of Finance and Economics.
2. Department of Biostatistics, University of Iowa.
3. Department of Applied Mathematics, The Hong Kong Polytechnic University.
4. Department of Biostatistics, Yale School of Public Health.

Abstract

Penalization is a popular tool for multi- and high-dimensional data. Most existing computational algorithms have been developed for convex loss functions. Nonconvex loss functions can sometimes generate more robust results and have important applications. Motivated by the BLasso algorithm, this study develops the Forward and Backward Stagewise (Fabs) algorithm for nonconvex loss functions with the adaptive Lasso (aLasso) penalty. It is shown that each point along the Fabs paths is a δ-approximate solution to the aLasso problem and that the Fabs paths converge to the stationary points of the aLasso problem as δ goes to zero, provided the loss function has second-order derivatives bounded from above. This study exemplifies Fabs with an application to penalized smooth partial rank (SPR) estimation, for which an effective algorithm is still lacking. Extensive numerical studies demonstrate the benefit of penalized SPR estimation using Fabs, especially in high-dimensional settings. An application to the smoothed 0-1 loss in binary classification is presented to demonstrate the algorithm's capability to work with other differentiable nonconvex loss functions.
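
The following is a minimal, illustrative sketch (not the authors' implementation) of a forward-and-backward stagewise update with adaptive-Lasso-weighted step sizes, paired with a smoothed 0-1 loss of the kind mentioned for binary classification. The function name fabs_path, the sigmoid smoothing constant, the step size eps, the tolerance xi, and the unit weights are assumptions made purely for demonstration.

import numpy as np

def fabs_path(loss, beta0, weights, eps=0.01, xi=1e-6, max_steps=200):
    # Trace an approximate solution path with small forward/backward steps;
    # each coordinate move is scaled by its adaptive-Lasso weight (assumed form).
    beta = np.asarray(beta0, dtype=float).copy()
    p = beta.size
    path = [beta.copy()]
    for _ in range(max_steps):
        base = loss(beta)
        stepped = False
        # Backward step: shrink an active coefficient toward zero (which always
        # reduces the weighted penalty) if the loss rises by at most xi.
        for j in np.flatnonzero(beta):
            trial = beta.copy()
            step = min(eps / weights[j], abs(beta[j]))
            trial[j] -= np.sign(beta[j]) * step
            if loss(trial) - base <= xi:
                beta, stepped = trial, True
                break
        if not stepped:
            # Forward step: among all coordinates and signs, take the small
            # weight-scaled move that most reduces the loss.
            best_val, best_trial = base, None
            for j in range(p):
                for s in (1.0, -1.0):
                    trial = beta.copy()
                    trial[j] += s * eps / weights[j]
                    val = loss(trial)
                    if val < best_val:
                        best_val, best_trial = val, trial
            if best_trial is None:
                break  # no descent direction left; stop tracing the path
            beta = best_trial
        path.append(beta.copy())
    return np.array(path)

# Usage with a smoothed 0-1 loss for binary classification (labels in {-1, +1});
# the data, smoothing constant 0.5, and weights below are hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sign(X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=100))
loss = lambda b: np.mean(1.0 / (1.0 + np.exp(y * (X @ b) / 0.5)))  # sigmoid surrogate of 0-1 loss
weights = np.ones(5)  # adaptive-Lasso weights; all ones gives uniform shrinkage
path = fabs_path(loss, np.zeros(5), weights)
print(path[-1])

Each point stored in path corresponds to one δ-sized move, so the array itself serves as the approximate solution path described in the abstract.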

KEYWORDS:

Adaptive Lasso; Forward and backward stagewise; Nonconvex loss; Penalization

PMID:
30319163
PMCID:
PMC6181148
[Available on 2019-08-01]
DOI:
10.1016/j.csda.2018.03.006
