Incremental Proximal and Augmented Lagrangian Methods for Convex Optimization: A Survey

Abstract:

Incremental methods deal effectively with an optimization problem of great importance in machine learning, signal processing, and large-scale and distributed optimization: the minimization of the sum of a large number of convex functions. We survey these methods and propose incremental aggregated and nonaggregated versions of the proximal algorithm. Under cost function differentiability and strong convexity assumptions, we show linear convergence for a sufficiently small constant stepsize. This result also applies to distributed asynchronous variants of the method, involving bounded interprocessor communication delays.
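
To make the setting concrete, here is a minimal sketch (not taken from the talk) of an incremental proximal method for minimizing a sum of convex functions: at each step a single component is processed through its proximal operator with a constant stepsize. The quadratic components, the stepsize, and the cyclic processing order below are hypothetical choices for illustration only.

import numpy as np

# Minimal sketch (illustrative, not the talk's method): incremental proximal
# iteration for minimizing f(x) = sum_i f_i(x), with f_i(x) = 0.5*(a_i @ x - b_i)**2.
# For such a quadratic component, the proximal step
#     x_{k+1} = argmin_y { f_i(y) + (1/(2*alpha)) * ||y - x_k||^2 }
# has the closed form used below.

def prox_quadratic(x, a, b, alpha):
    # Proximal operator of f_i(y) = 0.5*(a @ y - b)^2 at x, stepsize alpha.
    residual = a @ x - b
    return x - (alpha * residual / (1.0 + alpha * (a @ a))) * a

def incremental_proximal(A, b, alpha=0.1, num_passes=50):
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(num_passes):
        for i in range(m):  # process one component per iteration, cyclically
            x = prox_quadratic(x, A[i], b[i], alpha)
    return x

# Hypothetical usage: a least-squares instance with 1000 components.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 20))
b = rng.standard_normal(1000)
print(incremental_proximal(A, b))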

We then consider dual versions of incremental proximal algorithms, which are incremental augmented Lagrangian methods for separable equality-constrained optimization problems. Unlike the standard augmented Lagrangian method, these methods admit decomposition in the minimization of the augmented Lagrangian and update the multipliers far more frequently. Our incremental aggregated augmented Lagrangian methods bear similarity to several known decomposition algorithms, including the alternating direction method of multipliers (ADMM) and more recent variations. We compare these methods in terms of their properties, and highlight their potential advantages and limitations.
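
For comparison, the following is a minimal sketch of the classical two-block ADMM, in scaled form, on a small separable equality-constrained problem. It illustrates the standard method mentioned above for contrast, not the incremental aggregated augmented Lagrangian variants of the talk; the quadratic objective, the matrices, and the penalty parameter rho are hypothetical.

import numpy as np

# Minimal sketch (illustrative): classical two-block ADMM, scaled form, for
#     minimize 0.5*||x1 - c1||^2 + 0.5*||x2 - c2||^2
#     subject to A1 @ x1 + A2 @ x2 = b.
# Each block minimization of the augmented Lagrangian has a closed form here.

def admm(A1, A2, b, c1, c2, rho=1.0, iters=200):
    n1, n2, m = A1.shape[1], A2.shape[1], b.shape[0]
    x1, x2, u = np.zeros(n1), np.zeros(n2), np.zeros(m)
    for _ in range(iters):
        # x1-update: minimize the augmented Lagrangian over x1 with x2, u fixed
        x1 = np.linalg.solve(np.eye(n1) + rho * A1.T @ A1,
                             c1 - rho * A1.T @ (A2 @ x2 - b + u))
        # x2-update: minimize over x2 with the new x1 and u fixed
        x2 = np.linalg.solve(np.eye(n2) + rho * A2.T @ A2,
                             c2 - rho * A2.T @ (A1 @ x1 - b + u))
        # multiplier update (u is the scaled multiplier lambda / rho)
        u = u + A1 @ x1 + A2 @ x2 - b
    return x1, x2, rho * u  # primal blocks and unscaled multiplier

# Hypothetical usage on a small random instance.
rng = np.random.default_rng(1)
A1, A2 = rng.standard_normal((3, 5)), rng.standard_normal((3, 4))
c1, c2, b = rng.standard_normal(5), rng.standard_normal(4), rng.standard_normal(3)
print(admm(A1, A2, b, c1, c2))

Note how each block minimization decomposes and the multiplier is updated after every pass over the two blocks; the incremental methods of the talk push this idea of frequent multiplier updating further.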

Date: Mar 30, 2016 at 16:30 h
Venue: Beauchef 851, Torre Norte, Piso 7, Sala de Seminarios John von Neumann, CMM.
Speaker: Prof. Dimitri Bertsekas
Affiliation: Massachusetts Institute of Technology, USA
Coordinator: Prof. Abderrahim Hantoute
