Technical Program

Paper Detail

Paper Title Local Geometry of Cross Entropy Loss in Learning One-Hidden-Layer Neural Networks
Paper Identifier TH2.R3.1
Authors Haoyu Fu, The Ohio State University, United States; Yuejie Chi, Carnegie Mellon University, United States; Yingbin Liang, The Ohio State University, United States
Session Neural Networks and AI
Location Monge, Level 3
Session Time Thursday, 11 July, 11:40 - 13:00
Presentation Time Thursday, 11 July, 11:40 - 12:00
Abstract We study model recovery for data classification, where the training labels are generated from a one-hidden-layer neural network with sigmoid activations, and the goal is to recover the weights of the neural network. We consider two network models: the fully-connected network (FCN) and the non-overlapping convolutional neural network (CNN). We prove that with Gaussian inputs, the empirical risk based on cross entropy exhibits strong convexity and smoothness uniformly in a local neighborhood of the ground truth, as soon as the sample size is sufficiently large. Hence, if initialized in this neighborhood, gradient descent on the empirical cross-entropy risk is guaranteed to converge locally, so one-hidden-layer neural networks can be learned at near-optimal sample and computational complexity with respect to the network input dimension, without unrealistic assumptions such as requiring a fresh set of samples at each iteration.
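The setting described in the abstract can be sketched in a few lines: labels are drawn from a teacher network with sigmoid hidden units and Gaussian inputs, and gradient descent on the empirical cross-entropy loss is run from an initialization near the ground-truth weights. This is a minimal illustrative sketch of the FCN case; the dimensions, step size, iteration count, and the averaging output layer are assumptions for the example, not the paper's exact construction or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W, X):
    # Teacher/student output: f_W(x) = (1/K) * sum_k sigmoid(w_k . x)
    return sigmoid(X @ W.T).mean(axis=1)

def cross_entropy(W, X, y, eps=1e-12):
    p = np.clip(forward(W, X), eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def grad(W, X, y, eps=1e-12):
    # Gradient of the empirical cross-entropy risk w.r.t. W (shape K x d)
    n, K = X.shape[0], W.shape[0]
    p = np.clip(forward(W, X), eps, 1 - eps)
    dp = (p - y) / (p * (1 - p))           # dL/dp per sample, shape (n,)
    S = sigmoid(X @ W.T)                   # hidden activations, shape (n, K)
    coef = dp[:, None] * S * (1 - S) / K   # chain rule through each unit
    return coef.T @ X / n

d, K, n = 10, 3, 5000                      # illustrative sizes (assumed)
W_star = rng.standard_normal((K, d))       # ground-truth weights
X = rng.standard_normal((n, d))            # Gaussian inputs
y = (rng.random(n) < forward(W_star, X)).astype(float)  # labels from the teacher

W0 = W_star + 0.1 * rng.standard_normal((K, d))  # init near the ground truth
loss0 = cross_entropy(W0, X, y)

W = W0.copy()
for _ in range(200):
    W -= 0.5 * grad(W, X, y)               # plain gradient descent, reusing all samples
```

Note that the same samples are reused at every iteration, matching the paper's point that no fresh-sample assumption is needed; local strong convexity near W_star is what makes this benign.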