Department of Mathematics,
University of California San Diego

****************************

Dissertation Defense

Jeb Runnoe

UC San Diego

Second-Derivative SQP Methods for Large-Scale Nonconvex Optimization

Abstract:

Stabilized sequential quadratic programming (SQP) methods for nonlinearly constrained optimization solve a sequence of related quadratic programming (QP) subproblems formed from a two-norm penalized quadratic model of the Lagrangian function subject to shifted, linearized constraints. Although these methods have been shown to exhibit superlinear local convergence even when the constraint Jacobian is rank deficient at the solution, they generally have no global convergence theory. To address this, primal-dual SQP methods (pdSQP) employ a particular primal-dual augmented Lagrangian merit function and solve a subproblem consisting of the bound-constrained minimization of a quadratic model of the merit function. The model is constructed so that the resulting primal-dual subproblem is equivalent to the stabilized SQP subproblem. Together with a flexible line search, the merit function guarantees convergence from any starting point, while the connection with the stabilized subproblem allows pdSQP to retain the superlinear local convergence that is characteristic of stabilized SQP methods.
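For readers unfamiliar with these constructions, the following is a minimal sketch of the two objects the abstract refers to, written in the notation common to the stabilized SQP literature (e.g., Gill and Robinson); the notation and the precise forms are assumptions drawn from that literature, and the formulation in the dissertation may differ in detail. With $g_k = \nabla f(x_k)$, $c_k = c(x_k)$, constraint Jacobian $J_k$, Lagrangian Hessian $H_k$, multiplier estimate $y_k$, and penalty parameter $\mu_k > 0$, the stabilized SQP subproblem takes the form

\[
\begin{aligned}
\min_{p,\,y} \quad & g_k^T p + \tfrac{1}{2}\, p^T H_k\, p + \tfrac{\mu_k}{2}\, \|y\|^2 \\
\text{subject to} \quad & c_k + J_k\, p + \mu_k\,(y - y_k) = 0,
\end{aligned}
\]

and a primal-dual augmented Lagrangian of the type used as the merit function is

\[
M_{\mu}(x, y\,;\, y_e) \;=\; f(x) \;-\; c(x)^T y_e \;+\; \frac{1}{2\mu}\,\|c(x)\|^2 \;+\; \frac{1}{2\mu}\,\|c(x) + \mu\,(y - y_e)\|^2,
\]

whose stationarity with respect to $y$ recovers the first-order multiplier estimate $y = y_e - c(x)/\mu$.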

A new dynamic convexification framework is developed that applies to nonconvex QP subproblems in general standard form, in stabilized form, and in primal-dual bound-constrained form. Dynamic convexification involves three distinct stages: pre-convexification, concurrent convexification, and post-convexification. New techniques are derived and analyzed for the implicit modification of symmetric indefinite factorizations and for the imposition of temporary artificial constraints, both of which are suitable for pre-convexification. Concurrent convexification works synchronously with the active-set method solving the subproblem and computes the minimal modifications needed to ensure that the QP iterates remain uniformly bounded; a simple instance of the underlying idea is sketched below. Finally, post-convexification defines an implicit modification that ensures the solution of the subproblem yields a descent direction for the merit function.
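To make the concurrent stage concrete, the sketch below illustrates the basic idea in its simplest form: when the curvature of the QP model along an active-set search direction is insufficient, compute the smallest modification that restores it. The diagonal shift used here is only an illustrative stand-in; the function name, the shift rule, and the threshold lam are assumptions for this sketch, not the dissertation's actual modification, which is tied to the symmetric indefinite factorization and the active-set updates.

import numpy as np

def convexify_along_direction(H, p, lam=1.0e-6):
    # Illustrative stand-in for concurrent convexification (assumption,
    # not the dissertation's rule): find the smallest nonnegative shift
    # sigma such that p'(H + sigma*I)p >= lam * ||p||^2, i.e. sufficient
    # positive curvature along the search direction p.
    pHp = float(p @ H @ p)
    pp = float(p @ p)
    if pp == 0.0:
        return H, 0.0               # zero direction: nothing to correct
    sigma = max(0.0, (lam * pp - pHp) / pp)
    return H + sigma * np.eye(H.shape[0]), sigma

# Example: an indefinite Hessian and a direction of negative curvature.
H = np.array([[1.0,  0.0],
              [0.0, -2.0]])
p = np.array([0.0, 1.0])
H_hat, sigma = convexify_along_direction(H, p)
print(sigma)            # 2.000001: the minimal shift for this direction
print(p @ H_hat @ p)    # 1e-06 = lam * ||p||^2: curvature restored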

A new exact second-derivative primal-dual SQP method (dcpdSQP) is formulated for large-scale nonconvex optimization, and a convergence analysis is presented that establishes global convergence. Extensive numerical testing indicates that the performance of the proposed method is comparable to, or better than, that of conventional full convexification, while significantly reducing the number of factorizations required.

Advisor: Philip Gill

June 3, 2024

3:00 PM

Zoom: https://ucsd.zoom.us/j/97091313956

****************************