Deep Learning Seminar 2018


Deep Learning Seminars 2018
Politecnico di Milano
23 February 2018

Luigi Malagò

Variational AutoEncoder: An Introduction and Recent Perspectives

Abstract: Variational AutoEncoders are generative models consisting of two neural networks. The first is an encoder, which maps each input to the parameters of a probability density function over the latent space; the second, in cascade, is a decoder, which maps latent variables to probability density functions over the observation space. Variational AutoEncoders are usually trained with variational inference approaches, in particular by maximizing a lower bound on the log-likelihood of the model, since training the model by directly optimizing the log-likelihood is not computationally efficient. Current research in this field covers several directions, among them the use of richer models over the latent variables and the definition of sharper bounds for the loss function. In this presentation we focus our attention on the characterization of the geometrical properties of the statistical model for the latent variables learned during training. In our work we are interested in exploiting the intrinsic properties of the latent model to evaluate the impact of the hyper-parameters of a Variational AutoEncoder on the learned geometry, and at the same time to gain insights for the design of more robust and efficient training procedures.
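
The abstract describes the standard encoder/decoder pair and the lower-bound (ELBO) training objective only in prose. As a minimal sketch of that setup, assuming a Gaussian density over the latent space and a Bernoulli density over the observation space (the seminar does not specify an architecture, so all layer sizes and names below are illustrative assumptions), a PyTorch version might look like this:

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        # Encoder: maps an input x to the parameters (mean, log-variance)
        # of a Gaussian probability density over the latent space.
        self.enc = nn.Linear(x_dim, h_dim)
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        # Decoder, in cascade: maps a latent variable z to the parameters
        # of a Bernoulli density over the observation space.
        self.dec = nn.Linear(z_dim, h_dim)
        self.dec_out = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps so the
        # lower bound remains differentiable w.r.t. the encoder parameters.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        logits = self.dec_out(F.relu(self.dec(z)))
        return logits, mu, logvar

def negative_elbo(x, logits, mu, logvar):
    # Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I)).
    # Minimizing this (i.e. maximizing the lower bound on the
    # log-likelihood) is the tractable surrogate for direct
    # log-likelihood optimization mentioned in the abstract.
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

The Gaussian encoder and closed-form KL term are one common design choice; the richer latent models and sharper bounds mentioned in the abstract replace exactly these two ingredients.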