9:30 - 10:00 Coffee

- Morning Session Talks (10:00 - 12:00)
- 9:50 - 10:00: Bill Helton (UCSD)
*Welcome address: what is SCOD?*
- 10:00 - 10:50: Tom Bewley (UCSD)
*Incorporating Regular Lattices and Accounting for Approximate Function Evaluations in Derivative-Free Optimization*

Abstract: Systems characterized by expensive, nonconvex, noisy functions in moderate dimensions (n = 2 to 24) necessitate the development of maximally efficient derivative-free optimization algorithms. Starting with the well-known Surrogate Management Framework (SMF), our lab has developed a new, highly efficient derivative-free optimization algorithm, which we dub LAttice-Based Derivative-free Optimization via Global Surrogates (LABDOGS). This algorithm combines a highly efficient, globally convergent surrogate-based Search algorithm with an efficient Poll that incorporates a minimum number of new function evaluations chosen from nearest-neighbor points. All function evaluations are coordinated with highly uniform non-Cartesian lattices derived from n-dimensional sphere-packing theory. Representative numerical tests confirm significant improvements in the convergence of lattice-based strategies compared with otherwise identical codes coordinated using Cartesian grids.
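The non-Cartesian lattices mentioned above can be illustrated with the D_n "checkerboard" lattice (integer vectors with even coordinate sum), a standard construction from sphere-packing theory. This is only an illustrative sketch with my own function name; the actual lattices and quantizers used by LABDOGS may differ:

```python
def round_to_Dn(x):
    """Quantize a point x in R^n to the nearest point of the D_n lattice
    (integer vectors with even coordinate sum), using the standard decoder:
    round every coordinate, then fix the parity if needed."""
    # Round each coordinate to the nearest integer.
    f = [round(xi) for xi in x]
    if sum(f) % 2 == 0:
        return f
    # Parity is odd: re-round the coordinate with the largest rounding
    # error in the other direction to restore an even coordinate sum.
    errs = [xi - fi for xi, fi in zip(x, f)]
    k = max(range(len(x)), key=lambda i: abs(errs[i]))
    f[k] += 1 if errs[k] > 0 else -1
    return f
```

For n >= 3 this lattice packs spheres more densely than the Cartesian grid Z^n, which is the kind of uniformity property that motivates lattice-coordinated function evaluations.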

The second topic of our talk focuses on incorporating approximate function evaluations into a surrogate-based optimization scheme. Assuming that, in a statistical setting, the error of each function evaluation diminishes toward zero in proportion to the reciprocal of the square root of the sampling time, we have developed an algorithm that samples the function only as accurately as warranted. This algorithm, dubbed $\alpha$-DOGS, maintains the globally convergent behavior of the LABDOGS Search while focusing the bulk of the computation time on regions of parameter space where the existing approximate function evaluations indicate that the true function minimum might lie.
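The statistical premise above, that averaging over a longer sampling time shrinks the error of an estimate like the reciprocal square root of the number of samples, can be checked with a small Monte Carlo sketch. The objective `noisy_f` is a hypothetical stand-in, not anything from the talk:

```python
import random

def noisy_f(x, sigma=1.0):
    """A hypothetical expensive objective observed through Gaussian noise."""
    return (x - 1.0) ** 2 + random.gauss(0.0, sigma)

def estimate(x, n_samples):
    """Average n_samples noisy evaluations; the standard error of the
    mean decays like sigma / sqrt(n_samples)."""
    return sum(noisy_f(x) for _ in range(n_samples)) / n_samples

def spread(vals):
    """Empirical standard deviation of a list of estimates."""
    m = sum(vals) / len(vals)
    return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5

random.seed(0)
coarse = [estimate(1.0, 10) for _ in range(200)]    # short sampling time
fine = [estimate(1.0, 1000) for _ in range(200)]    # 100x longer sampling

# With 100x more samples, the spread shrinks by roughly sqrt(100) = 10x.
print(spread(coarse) / spread(fine))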

- 10:50 - 11:00: Coffee Break
- 11:00 - 11:30: Martin Harrison (UCSB)
*Minimal Sums of Squares in a free *-algebra*

Abstract: In this talk, I discuss the reduction of the number of squares needed to express a sum of squares in the free *-algebra R. I will give examples of sums that are irreducible in this sense, and prove bounds on the minimal number of terms needed to express an arbitrary sum of squares of a given degree in a given number of variables.

- 11:30 - 12:00: Shaowei Lin (UCB)
*Polynomial Relations among Principal Minors of a Matrix*

Abstract: Let P_n be the prime ideal of all polynomial relations among the principal minors of an n x n matrix. Related to the Principal Minor Assignment Problem formulated by Holtz and Schneider is the problem of finding a finite set of generators for P_n. While this elimination-type problem is in theory solvable by Groebner basis techniques, it is computationally expensive even for the case n = 4. For instance, hyperdeterminantal relations among the principal minors of a symmetric 4 x 4 matrix were discovered by Holtz and Sturmfels. In this talk, we will show how to find generators in the general 4 x 4 case by considering products of the matrix entries called cycles. These cycles also allow us to prove that the image of the principal minor map is closed for all n. We will describe a group action on the principal minors that leaves P_n invariant, and discuss the representation-theoretic consequences. (Joint work with Bernd Sturmfels.)
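As a concrete illustration of the principal minor map discussed above, the following sketch enumerates all 2^n - 1 principal minors of an n x n matrix, indexed by the nonempty subsets of row/column indices. The function names are my own, not from the talk:

```python
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion along the first row
    (fine for the small n considered here)."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def principal_minors(a):
    """The principal minor map: all 2^n - 1 principal minors of a,
    keyed by the index subset S defining the submatrix."""
    n = len(a)
    out = {}
    for k in range(1, n + 1):
        for s in combinations(range(n), k):
            out[s] = det([[a[i][j] for j in s] for i in s])
    return out

# For n = 4 the map lands in a 15-dimensional space.
identity = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
pm = principal_minors(identity)
```

The polynomial relations P_n in the abstract are exactly the polynomials that vanish on the image of this map as `a` ranges over all n x n matrices.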

- 12:00 - 1:30: Lunch Break

*Afternoon Session Talks (2:00 - 5:00)*

- 1:40 - 2:30: Gert Lanckriet (UCSD)
*The Sparse Eigenvalue Problem*
- 2:40 - 3:30: Emre Mengi (UCSD)
*Lipschitz-based optimization of singular values*

Abstract: Singular value optimization problems arise in various applications in control theory. For instance, the $H_{\infty}$ norm of the transfer function of a linear dynamical system, as well as distance problems such as the complex (or real) stability and controllability radii, have singular value optimization characterizations. These problems are non-convex and non-smooth. The existing commonly employed algorithms for these problems are derivative-free, but do not exploit the Lipschitz nature of singular values in a systematic manner. Here we solve these problems largely by means of a Lipschitz optimization algorithm due to Piyavskii and Shubert, which has not previously received attention in the context of eigenvalue or singular value optimization. The Piyavskii-Shubert-based algorithm outperforms the commonly employed algorithms for medium- to large-scale problems when only a few digits of accuracy are sought.
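The Piyavskii-Shubert scheme referred to above is easy to sketch in one dimension: maintain the piecewise-linear lower bound built from downward cones of slope L at the sampled points, and repeatedly evaluate the function at the minimizer of that bound. This 1-D sketch is my own illustration under an assumed known Lipschitz constant, not the authors' code:

```python
def piyavskii_shubert(f, a, b, L, n_iter=50):
    """Minimize a Lipschitz function f on [a, b] with constant L by
    evaluating f at the minimizer of the lower bound
    max_i f(x_i) - L * |x - x_i| at each iteration."""
    pts = [(a, f(a)), (b, f(b))]
    for _ in range(n_iter):
        pts.sort()
        best_x, best_lb = None, float("inf")
        # Between adjacent samples, the lower bound is minimized where
        # the two downward cones intersect.
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            x = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)
            lb = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
            if lb < best_lb:
                best_x, best_lb = x, lb
        pts.append((best_x, f(best_x)))
    return min(pts, key=lambda p: p[1])

x_star, f_star = piyavskii_shubert(lambda x: abs(x - 0.3), 0.0, 1.0, L=1.0)
```

The singular value applications in the talk concern much richer objectives, but the mechanism, a global lower bound that is refined only where the minimum might lie, is the same.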

- 3:30 - 4:00: Coffee Break
- 4:00 - 5:00: Paul Tseng (Departmental Colloquium, Univ. Washington)
*On SDP and ESDP Relaxation for Sensor Network Localization*

Abstract: Recently Wang, Zheng, Boyd, and Ye proposed a further convex relaxation of the SDP relaxation for the sensor network localization problem, which they call edge-based SDP (ESDP). The ESDP is easier to solve than the SDP and, in simulation, yields solutions about as accurate as those of the SDP relaxation. We show that, when the distance measurements are exact, we can determine which sensors are correctly positioned in the ESDP solution by checking whether their individual traces are zero. On the other hand, we show that, when the distance measurements are inexact, this check is unreliable for both ESDP and SDP solutions. We then propose a robust version of the ESDP relaxation for which a small individual trace is a reliable check of sensor position accuracy; moreover, the position error for such a sensor is on the order of the square root of its trace. Lastly, we propose a coordinate gradient descent method, using a log-barrier penalty, to solve the ESDP. This method is more efficient than interior-point methods for solving the SDP or ESDP and is implementable in a distributed manner. (This is joint work with Ting Kei Pong.)
