Department of Mathematics,
University of California San Diego

****************************

Math 296 - Graduate Student Colloquium

Prof. Rayan Saab

UC San Diego

Stochastic algorithms for quantizing neural networks

Abstract:

Neural networks are highly non-linear functions, often parametrized by a staggering number of weights. Miniaturizing these networks and implementing them in hardware is a direction of research that is fueled by a practical need and, at the same time, connects to interesting mathematical problems. For example, by quantizing a neural network, i.e., replacing its weights with quantized (e.g., binary) counterparts, one can attain massive savings in cost, computation time, memory, and power consumption. Of course, one wishes to attain these savings while preserving the action of the function on domains of interest.

We discuss connections to problems in discrepancy theory, present data-driven and computationally efficient stochastic methods for quantizing the weights of already-trained neural networks, and prove that our methods have favorable error guarantees under a variety of assumptions.
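To give a flavor of the kind of stochastic quantization discussed in the abstract, the sketch below implements plain stochastic rounding, a standard unbiased scheme in which each weight is rounded up or down at random with probabilities chosen so that the quantized value equals the original weight in expectation. This is only an illustrative baseline, not the speaker's specific data-driven algorithm; the function names are hypothetical.

```python
import numpy as np

def stochastic_round(w, grid=1.0, rng=None):
    """Round each entry of w to a multiple of `grid`, randomly up or down,
    with probabilities chosen so the result is unbiased: E[q] = w."""
    rng = np.random.default_rng(rng)
    scaled = np.asarray(w, dtype=float) / grid
    lower = np.floor(scaled)
    p_up = scaled - lower                      # probability of rounding up
    q = lower + (rng.random(scaled.shape) < p_up)
    return q * grid

# Binary quantization to {-1, +1}: map weights in [-1, 1] to [0, 1],
# stochastically round to {0, 1}, then map back. Still unbiased.
w = np.clip(np.random.default_rng(0).normal(scale=0.5, size=5), -1, 1)
q = stochastic_round((w + 1) / 2, grid=1.0) * 2 - 1
```

Unbiasedness is what makes such schemes amenable to analysis, e.g., via concentration arguments for the accumulated quantization error over a layer; the data-driven methods in the talk go further by letting the rounding decisions depend on the data and on errors already committed.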

Host: Jon Novak

March 6, 2024

3:00 PM

HSS 4025

****************************