Department of Mathematics,
University of California San Diego
****************************
Math 208: Seminar in Algebraic Geometry
Dr. Jihao Liu
School of Mathematical Sciences, Peking University
On the termination of flips for varieties of general type
Abstract:
Termination of flips is a central question in birational geometry and the minimal model program. In this talk, I will discuss recent progress on the termination of flips for varieties $X$ of general type, that is, varieties whose canonical divisor $K_X$ is big. Our main result shows that many birational invariants, particularly the local volume (normalized volume), are bounded along any sequence of steps of the MMP for varieties of general type. As a consequence, we prove the termination of flips for fivefolds of general type. This is joint work with Jingjun Han, Lu Qi, and Ziquan Zhuang.
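Recall that $K_X$ is big exactly when its volume $\operatorname{vol}(K_X) = \limsup_{m \to \infty} h^0(X, \mathcal{O}_X(mK_X)) \, / \, (m^{\dim X}/(\dim X)!)$ is positive, so varieties of general type are precisely those with $\operatorname{vol}(K_X) > 0$.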
-
APM 7218
****************************
Department of Mathematics,
University of California San Diego
****************************
Math 243: Functional Analysis Seminar
Hans Wenzl
UCSD
Subfactors and tensor categories
Abstract:
This is an introductory talk on the interplay between the study of subfactors and tensor categories. Time permitting, we will sketch some recent results.
-
APM 6402
****************************
Department of Mathematics,
University of California San Diego
****************************
Math 288: Probability & Statistics
Benedikt Stufler
TU Vienna
Inhomogeneous scaling limits of random supertrees
Abstract:
We discuss recent results on Gibbs partitions and their application to the study of random supertrees, which exhibit novel inhomogeneous scaling limits.
-
APM 6402
****************************
Department of Mathematics,
University of California San Diego
****************************
Math 278B: Mathematics of Information, Data, and Signals
Erin George
UCSD
Benign overfitting in leaky ReLU networks with moderate input dimension
Abstract:
The problem of benign overfitting asks whether it is possible for a model to perfectly fit noisy training data and still generalize well. We study benign overfitting in two-layer leaky ReLU networks trained with the hinge loss on a binary classification task. We consider input data that can be decomposed as the sum of a common signal and a random noise component lying on mutually orthogonal subspaces. We characterize conditions on the signal-to-noise ratio (SNR) of the model parameters that give rise to benign versus non-benign (harmful) overfitting: in particular, if the SNR is high, then benign overfitting occurs; conversely, if the SNR is low, then harmful overfitting occurs. We attribute both benign and non-benign overfitting to an approximate margin maximization property and show that leaky ReLU networks trained on the hinge loss with gradient descent (GD) satisfy this property. In contrast to prior work, we do not require the training data to be nearly orthogonal. Notably, for input dimension $d$ and training sample size $n$, prior work requires $d=\Omega(n^2 \log n)$, whereas here we require only $d=\Omega(n)$.
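A minimal illustrative sketch of the setting described above (a toy example under stated assumptions, not the speaker's code): a two-layer leaky ReLU network with a fixed second layer (an assumption here), trained by gradient descent on the hinge loss, on synthetic data x_i = y_i * signal + noise_i with the noise supported on the orthogonal complement of the signal direction. All names and parameter values are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n, d, width, alpha = 50, 500, 64, 0.1   # sample size, input dimension, hidden width, leaky slope

# Signal lives on the first coordinate; noise lives on the orthogonal complement.
signal = np.zeros(d)
signal[0] = 1.0
y = rng.choice([-1.0, 1.0], size=n)                       # binary labels
noise = rng.normal(scale=1.0 / np.sqrt(d), size=(n, d))
noise[:, 0] = 0.0                                         # keep noise orthogonal to the signal
X = y[:, None] * signal + noise                           # x_i = y_i * signal + noise_i

W = rng.normal(scale=1.0 / np.sqrt(d), size=(width, d))   # trained first-layer weights
a = rng.choice([-1.0, 1.0], size=width) / np.sqrt(width)  # fixed second layer (assumption)

def forward(X, W):
    pre = X @ W.T                                         # (n, width) pre-activations
    return np.where(pre > 0, pre, alpha * pre) @ a        # leaky ReLU, then fixed output layer

lr = 0.1
for _ in range(2000):
    margins = y * forward(X, W)
    active = margins < 1.0                                # points violating the hinge margin
    if not active.any():                                  # all margins met: hinge loss is zero
        break
    pre = X @ W.T
    act_grad = np.where(pre > 0, 1.0, alpha)              # leaky ReLU derivative
    # Gradient of the mean hinge loss with respect to the first-layer weights W.
    coeff = -(y * active)[:, None] * act_grad * a[None, :]
    W -= lr * (coeff.T @ X) / n

train_err = np.mean(np.sign(forward(X, W)) != y)
print(f"training error after GD: {train_err:.2f}")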
-
APM 6402
****************************

