Non-Euclidean Differentially Private Stochastic Convex Optimization

Abstract: Differentially private (DP) stochastic convex optimization (SCO) is a fundamental problem: the goal is to approximately minimize the population risk with respect to a convex loss function, given a dataset of i.i.d. samples from a distribution, while satisfying differential privacy with respect to the dataset. Most existing work on private convex optimization focuses on the Euclidean (i.e., $\ell_2$) setting, where the loss is assumed to be Lipschitz (and possibly smooth) with respect to the $\ell_2$ norm over a constraint set of bounded $\ell_2$ diameter. Algorithms based on noisy stochastic gradient descent (SGD) are known to attain the optimal excess risk in this setting.
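For intuition, here is a minimal sketch of noisy projected SGD in the Euclidean setting described above: Gaussian noise calibrated to the gradient's $\ell_2$ sensitivity (bounded by the Lipschitz constant) is added at each step, followed by projection onto the constraint set. All names (`dp_sgd`, `noise_multiplier`, etc.) and the calibration details are illustrative assumptions, not the exact algorithm analyzed in the paper.

```python
import numpy as np

def project_l2_ball(w, radius):
    """Project w onto the l2 ball of the given radius."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def dp_sgd(grad, data, dim, radius, lipschitz, noise_multiplier, steps, lr, seed=0):
    """Noisy projected SGD sketch: clip per-example gradients to the Lipschitz
    bound, add Gaussian noise with matching scale, project back onto the set."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    avg = np.zeros(dim)
    for _ in range(steps):
        x = data[rng.integers(len(data))]              # draw one sample
        g = grad(w, x)                                  # assumed ||g||_2 <= lipschitz
        g = g * min(1.0, lipschitz / max(np.linalg.norm(g), 1e-12))  # clip for safety
        noise = rng.normal(0.0, noise_multiplier * lipschitz, size=dim)
        w = project_l2_ball(w - lr * (g + noise), radius)
        avg += w / steps                                # return the average iterate
    return avg

# Hypothetical usage: mean estimation with squared loss, grad(w, x) = w - x.
data = np.random.default_rng(1).normal(size=(1000, 5))
w_hat = dp_sgd(lambda w, x: w - x, data, dim=5, radius=10.0,
               lipschitz=15.0, noise_multiplier=0.5, steps=2000, lr=0.05)
```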

In this talk, I will give a brief overview of differential privacy and its applications to stochastic convex optimization, together with new upper and lower bounds on the population risk for DP-SCO in non-Euclidean settings, in particular where the reference norm is the $\ell_p$ norm, $1 \leq p \leq \infty$. Our algorithms include DP versions of variance-reduced stochastic Frank-Wolfe and mirror descent methods, together with a novel privacy mechanism that generalizes the Gaussian mechanism. Our work draws on concepts from the geometry of normed spaces, such as regularity, uniform convexity, and uniform smoothness.
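In non-Euclidean geometry, the Euclidean update is replaced by steps adapted to the $\ell_p$ norm. As a rough illustration only (not the paper's variance-reduced methods or its new mechanism), the sketch below runs noisy stochastic mirror descent with the mirror map $\psi(w) = \tfrac{1}{2}\|w\|_p^2$ for $1 < p \leq 2$, taking gradient steps in the dual ($\ell_q$) space; all function names and parameters are hypothetical.

```python
import numpy as np

def grad_half_lq_sq(theta, q):
    """Gradient of psi*(theta) = 0.5 * ||theta||_q^2, the convex conjugate of
    psi(w) = 0.5 * ||w||_p^2 with 1/p + 1/q = 1; maps dual points to primal."""
    nrm = np.linalg.norm(theta, ord=q)
    if nrm == 0.0:
        return np.zeros_like(theta)
    return nrm ** (2 - q) * np.sign(theta) * np.abs(theta) ** (q - 1)

def noisy_mirror_descent(grad, data, dim, p, lr, noise_scale, steps, seed=0):
    """Stochastic mirror descent w.r.t. psi(w) = 0.5 * ||w||_p^2 (1 < p <= 2),
    with Gaussian noise added to each stochastic gradient for privacy."""
    q = p / (p - 1)                      # dual exponent
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)                # dual (mirror) iterate
    iterates = []
    for _ in range(steps):
        w = grad_half_lq_sq(theta, q)    # map back to primal: w = grad psi*(theta)
        iterates.append(w)
        x = data[rng.integers(len(data))]
        g = grad(w, x) + rng.normal(0.0, noise_scale, size=dim)
        theta = theta - lr * g           # gradient step in the dual space
    return np.mean(iterates, axis=0)     # return the average iterate

# Hypothetical usage: the same mean-estimation loss in l_{1.5} geometry.
data = np.random.default_rng(1).normal(size=(1000, 5))
w_hat = noisy_mirror_descent(lambda w, x: w - x, data, dim=5, p=1.5,
                             lr=0.05, noise_scale=0.5, steps=2000)
```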

Based on joint work with Raef Bassily and Anupama Nandi, available at arXiv:2103.01278.

Date: May 5, 2021, at 14:30 h
Venue: Online.
Speaker: Cristóbal Guzmán
Affiliation: University of Twente, The Netherlands.
Coordinator: José Verschae
More info at: Event website

Posted on May 3, 2021 in ACGO, Seminars