Talk 1:
Abstract: We study statistical estimators computed using iterative optimization methods that are not run until completion. Classical results on maximum likelihood estimators (MLEs) assert that a one-step estimator (OSE), in which a single Newton-Raphson iteration is performed from a starting point with certain properties, is asymptotically equivalent to the MLE. We further develop these early-stopping results by deriving properties of one-step estimators defined by a single iteration of scaled proximal methods. Our main results show the asymptotic equivalence of the likelihood-based estimator and various one-step estimators defined by scaled proximal methods. By interpreting OSEs as the last of a sequence of iterates, our results provide insight into how numerical tolerance should scale with sample size. Our setting contains scaled proximal gradient descent applied to certain composite models as a special case, making our results applicable to many problems of practical interest. Additionally, our results provide support for the utility of the scaled Moreau envelope as a statistical smoother by interpreting scaled proximal descent as a quasi-Newton method applied to the scaled Moreau envelope.
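For readers outside the area, the following sketch (in our own notation, not taken from the speakers' abstract) records the classical one-step construction and the scaled objects mentioned above. Given a preliminary $\sqrt{n}$-consistent estimator $\tilde\theta_n$ and log-likelihood $\ell_n$, one Newton-Raphson step yields the one-step estimator
\[
  \hat\theta_n^{\mathrm{OSE}} \;=\; \tilde\theta_n \;-\; \bigl(\nabla^2 \ell_n(\tilde\theta_n)\bigr)^{-1} \nabla \ell_n(\tilde\theta_n).
\]
Likewise, for a convex function $g$ and a positive definite scaling matrix $H$, one common convention for the scaled proximal operator and scaled Moreau envelope is
\[
  \operatorname{prox}_g^{H}(x) \;=\; \operatorname*{arg\,min}_{u}\Bigl\{ g(u) + \tfrac{1}{2}\lVert u - x\rVert_{H}^{2} \Bigr\},
  \qquad
  e_g^{H}(x) \;=\; \min_{u}\Bigl\{ g(u) + \tfrac{1}{2}\lVert u - x\rVert_{H}^{2} \Bigr\},
\]
with $\lVert v\rVert_{H}^{2} = v^{\top} H v$. Under this convention $\nabla e_g^{H}(x) = H\bigl(x - \operatorname{prox}_g^{H}(x)\bigr)$, which is the sense in which a scaled proximal step can be read as a (quasi-Newton-type) gradient step on the scaled Moreau envelope; the exact definitions used in the talk may differ.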
Talk 2:
Abstract: Splitting methods are a useful tool for finding zeros of the sum of monotone operators by handling each of the operators involved in the problem separately. There is a wide variety of splitting algorithms for tackling these problems when the number of operators is small, but as the number of monotone operators in the problem increases, one is generally restricted to the use of product-space reformulations. These techniques have the drawback that, when applied, the “dimension” of the algorithm grows larger than expected, which usually leads to worse performance of the method. Recently, different schemes have been proposed that manage to reduce this “dimension”, also called lifting, and even prove that this reduction is minimal according to certain rules. In this talk, we present the first splitting algorithm with minimal lifting that handles a special case of monotone inclusions in which some of the monotone operators are composed with linear operators.
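As background for the terminology above (our notation; the precise problem class treated in the talk may differ), the basic inclusion is to find $x$ in a Hilbert space $\mathcal{H}$ with
\[
  0 \;\in\; \sum_{i=1}^{n} A_i x,
\]
where each $A_i$ is maximally monotone. The standard product-space reformulation replaces this by a two-operator problem in $\mathcal{H}^{n}$: writing $\mathbf{x} = (x_1,\dots,x_n)$, $D = \{\mathbf{x} : x_1 = \dots = x_n\}$ and $\mathbf{A}\mathbf{x} = A_1 x_1 \times \dots \times A_n x_n$, one seeks $0 \in N_D(\mathbf{x}) + \mathbf{A}\mathbf{x}$. The $n$-fold enlargement of the underlying space is the “lifting” referred to above, and reduced- or minimal-lifting schemes aim to run the iteration in fewer than $n$ copies of $\mathcal{H}$. The composite case mentioned at the end of the abstract corresponds to inclusions of the form
\[
  0 \;\in\; \sum_{i=1}^{p} A_i x \;+\; \sum_{j=1}^{q} L_j^{*} B_j (L_j x),
\]
with the $B_j$ maximally monotone and the $L_j$ bounded linear operators.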
See the attached PDF.
Venue: John Von Neumann Seminar Room, CMM, Beauchef 851, North Tower, 7th Floor.
Speaker: Julio Deride & David Torregrosa-Belén
Affiliation: Universidad Técnica Federico Santa María, Chile & Universidad de Alicante, Spain
Coordinator: Emilio Vilches
Posted on Nov 21, 2022 in Optimization and Equilibrium, Seminars



