Technical Program

Paper Detail

Paper Title: An Information Theoretic Interpretation to Deep Neural Networks
Paper Identifier: TH2.R3.4
Authors: Shao-Lun Huang, Tsinghua-Berkeley Shenzhen Institute, China; Xiangxiang Xu, Tsinghua University, China; Lizhong Zheng, Gregory W. Wornell, Massachusetts Institute of Technology, United States
Session: Neural Networks and AI
Location: Monge, Level 3
Session Time: Thursday, 11 July, 11:40 - 13:00
Presentation Time: Thursday, 11 July, 12:40 - 13:00
Abstract: It is commonly believed that the hidden layers of deep neural networks (DNNs) attempt to extract informative features for learning tasks. In this paper, we formalize this intuition by showing that the features extracted by DNNs coincide with the solution of an optimization problem, which we call the "universal feature selection" problem, in a local analysis regime. We interpret the weight training in DNNs as the projection of feature functions between feature spaces, specified by the network structure. Our formulation has direct operational meaning in terms of the performance of inference tasks, and gives interpretations to the internal computation results of DNNs. Results of numerical experiments are provided to support the analysis.
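As a rough illustration of the kind of feature-selection problem the abstract refers to: in the local analysis regime, informative feature functions of two discrete variables can be characterized through the singular value decomposition of a normalized joint-distribution matrix (the Hirschfeld-Gebelein-Rényi maximal correlation problem). The sketch below is an assumption-laden toy example, not the authors' construction; the joint distribution `P` is made up for illustration.

```python
import numpy as np

# Hypothetical joint distribution over discrete X (4 values) and Y (3 values).
P = np.array([[0.10, 0.05, 0.05],
              [0.05, 0.15, 0.05],
              [0.05, 0.05, 0.15],
              [0.10, 0.10, 0.10]])

px = P.sum(axis=1)          # marginal distribution of X
py = P.sum(axis=0)          # marginal distribution of Y

# Normalized dependence matrix with entries P(x, y) / sqrt(P(x) P(y)).
B = P / np.sqrt(np.outer(px, py))

# SVD of B: the top singular pair corresponds to the constant functions
# (singular value 1); subsequent pairs give maximally correlated features.
U, s, Vt = np.linalg.svd(B)

# Candidate feature functions on X and Y from the second singular vectors.
f = U[:, 1] / np.sqrt(px)   # feature f(x)
g = Vt[1, :] / np.sqrt(py)  # feature g(y)

print(np.round(s, 4))       # leading singular value is 1
```

Under this formulation, `f` and `g` are zero-mean under the marginals, and the second singular value of `B` measures how much dependence the feature pair captures.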