Paper Detail

Paper Title: Learning Feature Nonlinearities with Regularized Binned Regression
Paper Identifier: WE1.R3.4
Authors: Samet Oymak, University of California, Riverside, United States; Mehrdad Mahdavi, Pennsylvania State University, United States; Jiasi Chen, University of California, Riverside, United States
Session: Regression and Estimation
Location: Monge, Level 3
Session Time: Wednesday, 10 July, 09:50 - 11:10
Presentation Time: Wednesday, 10 July, 10:50 - 11:10
Abstract: For various applications, the relations between the dependent and independent variables are highly nonlinear. Consequently, for large-scale complex problems, neural networks and regression trees are commonly preferred over linear models such as Lasso. This work proposes learning the feature nonlinearities by binning feature values and finding the best fit in each quantile using non-convex regularized linear regression. The algorithm first captures the dependence between neighboring quantiles by enforcing smoothness via a piecewise-constant/linear approximation, and then selects a sparse subset of good features. We prove that the proposed algorithm is statistically and computationally efficient; in particular, it achieves a linear rate of convergence while requiring a near-minimal number of samples. Evaluations on real datasets demonstrate that the algorithm is competitive with the current state of the art and accurately learns feature nonlinearities.
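
As an illustration of the general idea in the abstract, the sketch below bins each feature into quantiles and fits a sparse linear model over the resulting piecewise-constant basis. This is not the paper's algorithm: it uses scikit-learn's KBinsDiscretizer for quantile binning and the convex Lasso as a stand-in for the paper's non-convex regularizer, and it omits the smoothness coupling between neighboring quantiles that the paper enforces. The synthetic data and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.linear_model import Lasso

# Synthetic data (assumed for illustration): y depends nonlinearly
# on only two of twenty features.
rng = np.random.default_rng(0)
n, d = 500, 20
X = rng.uniform(-1, 1, size=(n, d))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)

# Quantile-bin each feature and one-hot encode the bins, giving a
# piecewise-constant basis function per bin of each feature.
binner = KBinsDiscretizer(n_bins=10, encode="onehot-dense", strategy="quantile")
Z = binner.fit_transform(X)

# Sparse linear fit over the binned basis. Lasso is a convex proxy
# for the paper's non-convex regularizer; it zeros out coefficients
# of uninformative bins. No smoothness penalty across neighboring
# bins is applied here, unlike the proposed method.
model = Lasso(alpha=0.01).fit(Z, y)
print("nonzero coefficients:", np.count_nonzero(model.coef_), "of", Z.shape[1])
```

The per-bin coefficients of the fitted model trace out a piecewise-constant estimate of each feature's nonlinearity, which is the kind of structure the paper's smoothness and sparsity penalties are designed to recover more efficiently.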