Paper Detail

Paper Title Fitting ReLUs via SGD and Quantized SGD
Paper Identifier FR1.R1.1
Authors Seyed Mohammadreza Mousavi Kalan, Mahdi Soltanolkotabi, Amir Salman Avestimehr, University of Southern California, United States
Session Function Approximation and Learning
Location Le Théâtre (Parterre), Level -1
Session Time Friday, 12 July, 09:50 - 11:10
Presentation Time Friday, 12 July, 09:50 - 10:10
Abstract In this paper we focus on the problem of finding the optimal weights of the shallowest of neural networks, consisting of a single Rectified Linear Unit (ReLU). We focus on a planted model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to a planted weight vector. We first show that mini-batch stochastic gradient descent, when suitably initialized, converges at a geometric rate to the planted model with a number of samples that is optimal up to numerical constants. Next we focus on a parallel implementation where in each iteration the mini-batch gradient is calculated in a distributed manner across multiple processors and then broadcast to a master or all other processors. To reduce the communication cost in this setting we utilize a Quantized Stochastic Gradient Descent (QSGD) scheme where the partial gradients are quantized. Perhaps unexpectedly, we show that QSGD maintains the fast convergence of SGD to a globally optimal model while significantly reducing the communication cost. We further corroborate our theoretical findings via various numerical experiments, including distributed implementations over Amazon EC2.
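The sketch below illustrates the setting described in the abstract: a planted single-ReLU model with Gaussian inputs, mini-batch SGD on the squared loss, and a QSGD-style unbiased stochastic quantization of the per-worker partial gradients before averaging. It is a minimal illustration, not the paper's implementation; the step size, number of iterations, quantization levels, and the averaging-based initialization (which uses the fact that E[ReLU(x^T w*) x] = w*/2 for standard Gaussian x) are illustrative assumptions rather than the paper's prescribed constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def qsgd_quantize(v, s, rng):
    """QSGD-style unbiased stochastic quantization of v onto s levels per coordinate."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    ratio = np.abs(v) / norm            # each entry lies in [0, 1]
    lower = np.floor(ratio * s)         # lower quantization level
    prob_up = ratio * s - lower         # rounding up with this probability keeps the estimate unbiased
    levels = lower + (rng.random(v.shape) < prob_up)
    return norm * np.sign(v) * levels / s

# Planted model: i.i.d. Gaussian inputs, labels from a planted weight vector.
d, n = 50, 4000                          # dimension and sample size (illustrative)
w_star = rng.standard_normal(d)          # planted weights
X = rng.standard_normal((n, d))
y = relu(X @ w_star)

num_workers = 4                          # partial gradients computed across "workers"
batch_size = 200
step_size = 0.5                          # illustrative choice
s_levels = 4                             # quantization levels per coordinate

# Averaging-based initialization (assumption): E[y * x] = w*/2 for Gaussian inputs.
w = 2.0 * (X.T @ y) / n

for it in range(300):
    idx = rng.choice(n, batch_size, replace=False)
    shards = np.array_split(idx, num_workers)
    grads = []
    for shard in shards:
        Xb, yb = X[shard], y[shard]
        pred = relu(Xb @ w)
        active = (Xb @ w > 0).astype(float)              # ReLU derivative
        g = Xb.T @ ((pred - yb) * active) / len(shard)   # partial mini-batch gradient
        grads.append(qsgd_quantize(g, s_levels, rng))    # worker sends a quantized gradient
    w -= step_size * np.mean(grads, axis=0)              # master averages and updates

print("relative error:", np.linalg.norm(w - w_star) / np.linalg.norm(w_star))
```

In this toy run the relative error to the planted weights shrinks rapidly even though each worker communicates only a norm, signs, and low-precision levels per coordinate, which is the communication-saving behavior the abstract attributes to QSGD.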