Abstract: The image of a rotated cat still represents a cat: while this simple rule seems obvious to a human, it is not obvious to neural networks, which separately “learn” each new rotation of the same image. The same applies to different groups of symmetries acting on images, graphs, texts, and other types of data. Implementing “equivariant” neural networks that respect these symmetries reduces the number of learned parameters and helps improve generalization outside the training set. On the other hand, in networks that “identify too much”, that is, where we impose too many symmetries, the error begins to increase, because the imposed symmetries no longer respect the data. In joint work with S. Trivedi (NeurIPS 2023), we quantify this tradeoff, which allows us to define the optimal amount of symmetry in learning models. I will give an introduction to classical learning-theory bounds and to our extension of these ideas to the study of “partial/approximate equivariance”. In passing, I’ll describe some possible directions for working with partial symmetries in specific tasks.
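To illustrate the symmetry idea the abstract starts from, here is a minimal sketch (not the speaker's construction) of enforcing invariance to the group C4 of 90-degree rotations by averaging a model's outputs over the group; the wrapper class, the toy classifier, and the image shape are illustrative assumptions.

```python
import torch
import torch.nn as nn


class C4InvariantWrapper(nn.Module):
    """Wraps any image model f and returns the group average
    (1/4) * sum_{k=0..3} f(rot90^k(x)), which is invariant to 90-degree rotations."""

    def __init__(self, base_model: nn.Module):
        super().__init__()
        self.base_model = base_model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, height, width); rotate in the spatial dims.
        outputs = [self.base_model(torch.rot90(x, k, dims=(2, 3))) for k in range(4)]
        return torch.stack(outputs, dim=0).mean(dim=0)


if __name__ == "__main__":
    # Toy classifier that is NOT rotation-invariant on its own.
    f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    model = C4InvariantWrapper(f)
    x = torch.randn(1, 3, 32, 32)
    # The averaged output is (up to floating point) unchanged when the input is rotated.
    print(torch.allclose(model(x), model(torch.rot90(x, 1, dims=(2, 3))), atol=1e-5))
```

In this full-symmetrization setting the wrapped model treats all four rotations of an image identically; the tradeoff discussed in the talk concerns what happens when the data only partially or approximately respects such a symmetry.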
Venue: Sala de Seminario John Von Neumann, CMM, Beauchef 851, Torre Norte, Piso 7.
Speaker: Mircea Petrache
Affiliation: PUC
Coordinator: María Eugenia Martínez
Posted on Nov 20, 2023 in Differential Equations, Seminars