Department of Mathematics,
University of California San Diego

****************************

Math 288 - Probability & Statistics

Prof. Rahul Parhi

UC San Diego (ECE Department)

Characteristic Functionals and the Innovations Approach to Stochastic Processes With Applications to Random Neural Networks

Abstract:

Many stochastic processes (such as the full family of Lévy processes) can be linearly and deterministically transformed into a white noise process. Consequently, these processes can be viewed as the deterministic "mixing" of a white noise process. This formulation is the so-called "innovation model" of Bode, Shannon, and Kailath (ca. 1950-1970), where the white noise represents the stochastic part of the process, called its innovation. This allows for a conceptual decoupling between the correlation properties of a process (which are imposed by the whitening operator L) and its underlying randomness, which is determined by its innovation. This reduces the study of a stochastic process to the study of its underlying innovation. In this talk, we will introduce the innovations approach to studying stochastic processes and adopt the beautiful formalism of generalized stochastic processes of Gelfand (ca. 1955), where stochastic processes are viewed as random tempered distributions (more generally, random variables that take values in the dual of a nuclear space). This formulation uses the so-called characteristic functional (the infinite-dimensional analog of the characteristic function of a random variable) of a stochastic process in lieu of more traditional concepts such as random measures and Itô integrals. A by-product of this formulation is that the characteristic functional of any stochastic process that satisfies the innovation model can be readily derived, providing a complete description of its law. We will then discuss some of my recent work in which we derive the characteristic functional of random neural networks to study their properties. This setting will reveal the true power of the characteristic functional: any property of a stochastic process can be derived with short and simple proofs.
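For orientation, the central objects of the abstract can be sketched as follows (the notation here is a standard rendering of Gelfand's formalism, not necessarily the speaker's own):

```latex
% Characteristic functional of a generalized stochastic process s:
% for test functions \varphi in a nuclear space (e.g., the Schwartz
% space \mathcal{S}(\mathbb{R}^d)),
\widehat{\mathscr{P}}_s(\varphi)
  = \mathbb{E}\!\left[ e^{\,\mathrm{i}\langle s,\varphi\rangle} \right],
% the infinite-dimensional analog of the characteristic function
% \mathbb{E}[e^{\mathrm{i}\omega X}] of a scalar random variable X.
%
% The innovation model posits a whitening operator L such that
\mathrm{L}\, s = w,
% where w is a white noise (the innovation). Formally inverting L
% then transfers the law of w to the law of s:
\widehat{\mathscr{P}}_s(\varphi)
  = \widehat{\mathscr{P}}_w\!\left( \mathrm{L}^{-*}\varphi \right),
% where L^{-*} denotes the adjoint of a (suitable) right inverse
% of L acting on test functions.
```

By the Minlos–Bochner theorem, the characteristic functional determines the law of the process completely, which is why the abstract can speak of it as a "complete description."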
For example, we will show that, as the "width" of these random neural networks tends to infinity, these processes can converge in law not only to Gaussian processes but also to non-Gaussian processes, depending on the law of the parameters. Our asymptotic results provide a new take on several classical results that have appeared in the machine learning community (wide networks converge to Gaussian processes) as well as some new ones (wide networks can converge to non-Gaussian processes). This talk is based on joint work with Pakshal Bohra, Ayoub El Biari, Mehrsa Pourya, and Michael Unser from our recent preprint arXiv:2405.10229.

February 6, 2025

11:00 AM

APM 6402

Research Areas

Probability Theory

****************************