Seminars

Seminars are listed in reverse chronological order by date; scroll down the list to find an activity of interest. Seminars are normally given in English; otherwise they are marked as Spanish only.


Quantitative multiple recurrence for two and three transformations.

Event Date: Apr 10, 2017 in Dynamical Systems, Seminars

Abstract: In this talk I will provide some counterexamples for quantitative multiple recurrence problems for systems with more than one transformation. For instance, I will show that there exists an ergodic system $(X,\mathcal{X},\mu,T_1,T_2)$ with two commuting transformations such that for every $\ell < 4$ there exists $A\in \mathcal{X}$ such that \[ \mu(A\cap T_1^n A\cap T_2^n A) < \mu(A)^{\ell} \] for every $n \in \mathbb{N}$. The construction of such a system is based on the study of “big” subsets of...
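
For context, a standard point of comparison that is not part of the abstract above: for a single transformation, Khintchine's recurrence theorem rules out such counterexamples at exponent $2$, since for every measure preserving system $(X,\mathcal{X},\mu,T)$, every $A\in\mathcal{X}$ and every $\varepsilon>0$ the set \[ \{\, n \in \mathbb{N} : \mu(A\cap T^{-n}A) > \mu(A)^{2} - \varepsilon \,\} \] is syndetic. The result quoted above shows that no analogous bound with exponent $\ell < 4$ can hold for two commuting transformations.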

Recurrences for generating polynomials

Event Date: Apr 07, 2017 in Discrete Mathematics, Seminars

Abstract: In this talk we consider sequences of polynomials that satisfy differential–difference recurrences. Our interest is motivated by the fact that polynomials satisfying such recurrences frequently appear as generating polynomials of integer-valued random variables that are of interest in discrete mathematics. It is, therefore, of interest to understand the properties of such polynomials and their probabilistic consequences. We will be primarily interested in the limiting distribution of the corresponding random variables....
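
As a concrete and purely illustrative instance of such a recurrence (my choice of example, not necessarily one treated in the talk), the Eulerian polynomials satisfy the differential–difference recurrence $A_{n+1}(t) = (1 + nt)\,A_n(t) + t(1-t)\,A_n'(t)$ with $A_0(t)=1$, and $A_n(t)/n!$ is the generating polynomial of the number of descents of a uniform random permutation of $\{1,\dots,n\}$. A short Python sketch that iterates the recurrence:

# Illustrative example only: Eulerian polynomials via the
# differential-difference recurrence
#   A_{n+1}(t) = (1 + n t) A_n(t) + t (1 - t) A_n'(t),   A_0(t) = 1.
# A_n(t)/n! is the probability generating polynomial of the number of
# descents of a uniform random permutation of {1, ..., n}.
import sympy as sp

t = sp.symbols('t')

def eulerian_polynomials(N):
    """Return [A_0, ..., A_N] computed from the recurrence above."""
    polys = [sp.Integer(1)]
    for n in range(N):
        A = polys[-1]
        polys.append(sp.expand((1 + n * t) * A + t * (1 - t) * sp.diff(A, t)))
    return polys

if __name__ == "__main__":
    for n, A in enumerate(eulerian_polynomials(6)):
        mean = sp.diff(A, t).subs(t, 1) / A.subs(t, 1)   # = (n - 1)/2 for n >= 1
        print(f"A_{n}(t) = {A},   mean of the random variable = {mean}")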

Stability of Hamiltonian systems which are close to integrable: introduction to KAM and Nekhoroshev theory

Event Date: Mar 29, 2017 in Optimization and Equilibrium, Seminars

Abstract: We give a panorama of the classical theories of stability for Hamiltonian systems close to integrable, which are of two kinds: stability in measure over infinite time (KAM theory), and effective stability over finite but very long times (Nekhoroshev theory).
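
For orientation, a standard schematic formulation of the two statements (not taken verbatim from the talk): for a nearly integrable analytic Hamiltonian $H(I,\theta) = h(I) + \varepsilon f(I,\theta)$ in action-angle variables, KAM theory asserts that for small $\varepsilon$ a large-measure set of invariant tori survives, so that most initial conditions remain stable for all time; Nekhoroshev theory asserts that, if $h$ is steep (for instance convex), there are exponents $a, b > 0$ and constants $C_1, C_2, C_3 > 0$ such that every solution satisfies \[ \|I(t) - I(0)\| \le C_1\,\varepsilon^{b} \quad \text{for all} \quad |t| \le C_2 \exp\!\left(C_3\,\varepsilon^{-a}\right). \]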

Limit distributions related to the Euler discretization error of Brownian motion about random times

Event Date: Mar 28, 2017 in Núcleo Modelos Estocásticos de Sistemas Complejos y Desordenados, Seminars

Abstract: In this talk we study the simulation of barrier-hitting events and extreme events of one-dimensional Brownian motion. We call a “barrier-hitting event” an event where the Brownian motion hits a deterministic “barrier” function for the first time, and an “extreme event” an event where the Brownian motion attains a minimum on a given compact or unbounded closed time interval. To sample these events we consider the Euler discretization approach of Brownian motion; that is, simulate the...
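
A minimal Python sketch (illustration only) of the Euler discretization being discussed: simulate the path on a uniform grid from Gaussian increments, then read off the discretized barrier-hitting time and the discretized minimum. The constant barrier level is an assumption made for this example; the talk considers deterministic barrier functions in general.

# Euler discretization of Brownian motion on [0, T] with step h, used to
# approximate (i) the first time the path reaches a barrier and (ii) the
# location and value of the minimum on [0, T].
import numpy as np

def euler_brownian_path(T, h, rng):
    """Discretized Brownian path W_0, W_h, W_{2h}, ... on [0, T]."""
    n = int(T / h)
    increments = rng.normal(loc=0.0, scale=np.sqrt(h), size=n)
    return np.concatenate(([0.0], np.cumsum(increments)))

def first_hitting_index(path, barrier):
    """First grid index at which the discretized path reaches the barrier."""
    above = np.flatnonzero(path >= barrier)
    return above[0] if above.size else None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, h, barrier = 1.0, 1e-4, 0.5
    path = euler_brownian_path(T, h, rng)
    k = first_hitting_index(path, barrier)
    print("discretized hitting time:", None if k is None else k * h)
    print("discretized minimum:", path.min(), "attained near t =", path.argmin() * h)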

DASH: Deep Learning for the Automated Spectral Classification of Supernovae

Event Date: Jan 27, 2017 in Other Areas, Seminars

Abstract: We have reached a new era of ‘big data’ in astronomy, with surveys now recording an unprecedented number of spectra. In particular, new telescopes such as LSST will soon increase the spectral catalogue by a few orders of magnitude. Moreover, the Australian sector of the Dark Energy Survey (DES) is currently in the process of spectroscopically measuring several thousand supernovae. To meet this new demand, novel approaches that are able to automate and speed up the classification process of these spectra are...

Yaglom limits can depend on the initial state

Event Date: Jan 16, 2017 in Seminars, Stochastic Modeling

Abstract: To quote the economist John Maynard Keynes: “The long run is a misleading guide to current affairs. In the long run we are all dead.” It makes more sense to study the state of an evanescent system given that it has not yet expired. For a substochastic Markov chain with kernel $K$ on a state space $S$ with killing, this amounts to the study of the Yaglom limit; that is, the limiting probability that the state at time $n$ is $y$ given that the chain has not been absorbed, i.e. $\lim_{n\to\infty} K^n(x,y)/K^n(x,S)$. We give an example...
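
A small numerical sketch in Python of the object in question (the substochastic kernel below is an arbitrary example chosen here, not one from the talk): compute $K^n(x,\cdot)/K^n(x,S)$ for large $n$ and compare different starting states $x$.

# Numerically estimate the Yaglom limit  lim_n K^n(x, y) / K^n(x, S)
# for a substochastic kernel K (rows sum to less than 1; the missing
# mass is the killing probability).  Example kernel chosen arbitrarily.
import numpy as np

K = np.array([[0.5, 0.2, 0.1],
              [0.1, 0.6, 0.1],
              [0.2, 0.1, 0.5]])

def conditioned_law(K, x, n):
    """Law of the state at time n, started from x, conditioned on survival."""
    Kn = np.linalg.matrix_power(K, n)
    return Kn[x] / Kn[x].sum()

if __name__ == "__main__":
    # For this small irreducible example the conditioned laws agree in the
    # limit (a quasi-stationary distribution); the talk constructs examples
    # in which the limit genuinely depends on the starting state x.
    for x in range(K.shape[0]):
        print(f"x = {x}:", np.round(conditioned_law(K, x, 200), 6))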

Provably efficient high dimensional feature extraction

Event Date: Dec 28, 2016 in Discrete Mathematics, Optimization and Equilibrium, Seminars

Abstract: The goal of inference is to extract information from data. A basic building block in high dimensional inference is feature extraction, that is, to compute functionals of given data that represent it in a way that highlights some underlying structure. For example, Principal Component Analysis is an algorithm that finds a basis to represent data that highlights the property of data being close to a low-dimensional subspace. A fundamental challenge in high dimensional inference is the design of algorithms that are provably efficient...
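
A minimal Python sketch of the PCA example mentioned in the abstract (synthetic data, illustration only): compute the top principal directions via the singular value decomposition and use them as the extracted features.

# PCA as feature extraction: project centered data onto the top-k right
# singular vectors, i.e. onto the k-dimensional subspace the data is
# closest to.
import numpy as np

def pca_features(X, k):
    """Return (projected data, top-k principal directions)."""
    Xc = X - X.mean(axis=0)                        # center the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                            # top-k principal directions
    return Xc @ components.T, components

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 500 points in R^50 lying close to a 3-dimensional subspace, plus noise
    X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 50))
    X += 0.01 * rng.normal(size=X.shape)
    features, components = pca_features(X, k=3)
    print(features.shape, components.shape)        # (500, 3) (3, 50)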

The group of reversible Turing machines and the torsion problem for $\mathrm{Aut}(A^{\mathbb{Z}})$ and related topological full groups

Event Date: Dec 19, 2016 in Dynamical Systems, Seminars

Abstract: We introduce the group $RTM(G,n,k)$ composed of abstract Turing machines which use the group $G$ as a tape, use an alphabet of $n$ symbols and $k$ states, and act as bijections on the set of configurations. These objects can be represented both as cellular automata and in terms of continuous functions and cocycles. The study of this group structure yields interesting results concerning computability properties of some well-studied groups such as $\mathrm{Aut}(A^{\mathbb{Z}})$ and the topological full group of the two-dimensional full shift....
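
A toy Python sketch of the basic object (with simplifications made here: a finite cyclic tape $\mathbb{Z}/N\mathbb{Z}$ in place of a general group $G$, an alphabet of $n=2$ symbols and $k=1$ state): a machine that flips the scanned symbol and moves the head one step acts as a bijection on the finite set of configurations, the defining property in the abstract.

# A "flip the scanned symbol, then move right" machine on a cyclic tape.
# The check below only verifies that this machine is a bijection on the
# set of configurations.
from itertools import product

N = 4                    # tape positions, indexed by Z/NZ (toy stand-in for G)
ALPHABET = (0, 1)        # n = 2 symbols; the single machine state is implicit

def step(config):
    """One move: flip the symbol under the head, then move the head right."""
    tape, head = config
    new_tape = tuple(1 - s if i == head else s for i, s in enumerate(tape))
    return new_tape, (head + 1) % N

def acts_as_bijection():
    configs = [(tape, head)
               for tape in product(ALPHABET, repeat=N)
               for head in range(N)]
    return len({step(c) for c in configs}) == len(configs)

if __name__ == "__main__":
    print("bijection on configurations:", acts_as_bijection())   # True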

Iterative regularization via a dual diagonal descent method

Event Date: Dec 14, 2016 in Optimization and Equilibrium, Seminars

Abstract: In the context of linear inverse problems, we propose and study a general iterative regularization method that makes it possible to consider classes of regularizers and data-fit terms. The algorithm we propose is based on a primal-dual diagonal descent method, designed to solve hierarchical optimization problems. Our analysis establishes convergence as well as stability results in the presence of errors in the data. In this noisy case, the number of iterations is shown to act as a regularization parameter, which makes our algorithm an iterative...
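
The primal-dual diagonal descent method of the talk is not reproduced here; as a stand-in, the following Python sketch uses the classical Landweber iteration (plain gradient steps on the least-squares data fit) only to illustrate the phenomenon described above: with noisy data, the number of iterations itself plays the role of the regularization parameter.

# Landweber iteration for y = A x + noise: gradient steps on ||A x - y||^2
# starting from x = 0.  Stopping early regularizes; iterating for too long
# tends to fit the noise.
import numpy as np

def landweber(A, y, n_iters, step=None):
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # safe step size (spectral norm)
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x - step * A.T @ (A @ x - y)          # gradient step on the data fit
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 50)) / np.sqrt(50)   # a random, typically ill-conditioned operator
    x_true = rng.normal(size=50)
    y = A @ x_true + 0.05 * rng.normal(size=50)
    for n in (10, 100, 1000, 10000):
        err = np.linalg.norm(landweber(A, y, n) - x_true)
        print(f"iterations = {n:5d}   reconstruction error = {err:.3f}")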