Department of Mathematics,
University of California San Diego

****************************

Math 278B - Mathematics of Information, Data, and Signals Seminar

Yaniv Plan

University of British Columbia

A family of measurement matrices for generalized compressed sensing

Abstract:

We consider the problem of recovering a structured signal x that lies close to a subset of interest T in $\mathbb{R}^n$ from its random noisy linear measurements y = B A x + w, i.e., a generalization of the classic compressed sensing problem. Here, B is a fixed matrix and A has independent sub-gaussian rows. By varying B and the sub-gaussian distribution of A, this gives a family of measurement matrices which may have heavy tails, dependent rows and columns, and singular values with a large dynamic range. Typically, generalized compressed sensing assumes a random measurement matrix with nearly uniform singular values (with high probability) and asks: how many measurements are needed to recover x? In our setting, this question becomes: what properties of B allow good recovery? We show that the "effective rank" of B may be used as a surrogate for the number of measurements, and that if this exceeds the squared Gaussian complexity of T - T, then accurate recovery is guaranteed. We also derive the optimal dependence on the sub-gaussian norm of the rows of A; to our knowledge, this was not known previously, even in the case when B is the identity. We allow model mismatch and inexact optimization, with the aim of creating an easily accessible theory that specializes well to the case when T is the range of an expansive neural net.
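For orientation, a minimal LaTeX sketch of the setup described above; the definitions of effective rank and Gaussian complexity written here are the standard ones and are assumptions, not quoted from the abstract, and the talk's normalizations may differ:

% measurement model: x lies close to T, B fixed, A with independent sub-gaussian rows
\[
  y = B A x + w, \qquad T \subset \mathbb{R}^n .
\]
% assumed definitions: effective (stable) rank of B, Gaussian complexity of T - T
\[
  r(B) = \frac{\|B\|_F^2}{\|B\|^2}, \qquad
  w(T - T) = \mathbb{E} \sup_{u \in T - T} \langle g, u \rangle, \quad g \sim N(0, I_n).
\]
% informal shape of the guarantee stated in the abstract
\[
  r(B) \gtrsim w(T - T)^2 \ \Longrightarrow\ \text{accurate recovery of } x .
\]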

Host: Rayan Saab

July 22, 2021

11:30 AM

Zoom link: https://msu.zoom.us/j/96421373881 (passcode: first prime number $>$ 100)

****************************