Paper Detail

Paper Title Sample Complexity Bounds for Low-Separation-Rank Dictionary Learning
Paper Identifier TH4.R1.2
Authors Mohsen Ghassemi, Zahra Shakeri, Waheed U. Bajwa, Anand D. Sarwate, Rutgers, The State University of New Jersey, United States
Session Learning and Regression
Location Le Théâtre (Parterre), Level -1
Session Time Thursday, 11 July, 16:40 - 18:00
Presentation Time Thursday, 11 July, 17:00 - 17:20
Abstract This work addresses the problem of "structured" dictionary learning for computing sparse representations of tensor-structured data. It introduces a low-separation-rank dictionary learning (LSR-DL) model that better captures the structure of tensor data by generalizing the "separable" dictionary learning model. A dictionary with $p$ columns generated from the LSR-DL model is shown to be locally identifiable from noisy observations with recovery error at most $\rho$, provided that the number of training samples scales as (number of degrees of freedom in the dictionary) $\times\, p^2 \rho^{-2}$.
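To make the model concrete, the following is a minimal NumPy sketch (not the authors' code) of data generated from a two-way instance of an LSR-DL-style model, assuming the dictionary is a sum of a small number of Kronecker products of factor dictionaries, with the separable model as the rank-one special case. All dimensions, the sparsity level, and the noise level are illustrative assumptions.

```python
# Illustrative sketch of a two-way low-separation-rank dictionary model:
# D = sum_k A_k (kron) B_k, generalizing the separable case (sep_rank = 1).
# All sizes and parameters below are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)

m1, p1 = 8, 16      # row-mode factor dictionary size (assumed)
m2, p2 = 8, 16      # column-mode factor dictionary size (assumed)
sep_rank = 3        # separation rank of the structured dictionary
p = p1 * p2         # total number of dictionary columns
sparsity = 4        # nonzeros per sparse code (assumed)
n_samples = 500     # number of training samples (assumed)
noise_std = 0.01    # observation noise level (assumed)

# Structured dictionary with roughly sep_rank * (m1*p1 + m2*p2) free
# parameters, far fewer than (m1*m2) * (p1*p2) for an unstructured one.
D = np.zeros((m1 * m2, p))
for _ in range(sep_rank):
    A = rng.standard_normal((m1, p1))
    B = rng.standard_normal((m2, p2))
    D += np.kron(A, B)
D /= np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm columns

# Sparse codes and noisy (vectorized) tensor observations y = D x + w.
X = np.zeros((p, n_samples))
for j in range(n_samples):
    support = rng.choice(p, size=sparsity, replace=False)
    X[support, j] = rng.standard_normal(sparsity)
Y = D @ X + noise_std * rng.standard_normal((m1 * m2, n_samples))

print("observations:", Y.shape,
      "| structured dof:", sep_rank * (m1 * p1 + m2 * p2),
      "| unstructured dof:", m1 * m2 * p)
```

The degrees-of-freedom count printed at the end is the quantity that, per the abstract, multiplies $p^2 \rho^{-2}$ in the sample complexity bound, which is why the structured model can be learned from far fewer samples than an unstructured dictionary of the same size.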