Department of Mathematics,
University of California San Diego
****************************
Special Colloquium
Virginia Williams
Berkeley/Stanford
Multiplying Matrices Faster
Abstract:
Matrix multiplication is used in a wide variety of applications throughout science and engineering. It is a bottleneck for many important algorithms: for standard linear algebra problems such as solving linear systems and inverting matrices, but also for many algorithms that, on the face of it, have nothing to do with matrices. Developing faster matrix multiplication algorithms is therefore of immense interest. In 1969, Strassen showed that the naive algorithm for multiplying matrices is not optimal, presenting an ingenious recursive algorithm. This spawned a long line of active research on the theory of matrix multiplication algorithms. In a seminal paper from 1987, Coppersmith and Winograd designed an algorithm that multiplies two n by n matrices using $O(n^{2.376})$ arithmetic operations. This remained the theoretically fastest approach to matrix multiplication for 24 years. We have recently designed an algorithm that multiplies n by n matrices using at most $O(n^{2.373})$ arithmetic operations, thus improving the Coppersmith-Winograd running time. The improvement is based on a recursive application of the original Coppersmith-Winograd construction, together with a general theorem that reduces the final search for the best algorithm to solving a nonlinear constraint program. The analysis is then carried out by solving this program numerically. In this talk, we will give intuition for the problem and highlight the key ideas needed to obtain the improvement.
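
As a concrete illustration of Strassen's recursive idea, here is a minimal Python sketch of the 2 by 2 block scheme, which uses seven recursive products instead of eight and so runs in $O(n^{\log_2 7}) \approx O(n^{2.81})$ arithmetic operations; the function name, the cutoff parameter, and the power-of-two size assumption are illustrative choices, not details from the talk.

# Minimal sketch of Strassen's recursion (assumes n is a power of two).
# Seven recursive multiplications per level instead of eight.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:                          # fall back to the naive product on small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Strassen's seven products
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11,       cutoff)
    M3 = strassen(A11,       B12 - B22, cutoff)
    M4 = strassen(A22,       B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22,       cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Recombine into the four blocks of the product
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

# Example: agreement with the naive product on a random 256 x 256 instance
A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
assert np.allclose(strassen(A, B), A @ B)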
Host: Sam Buss
February 11, 2013
2:00 PM
AP&M 6402
****************************