Deep Learning Seminar 2018


Latest revision as of 19:19, 16 April 2018

Seminar Aims & Organization

Deep Learning is becoming the dominant approach for building cognitive systems that can recognize patterns in data (e.g., images, text, and sounds) or perform end-to-end learning of complex behaviors.

In this first edition of the Deep Learning Seminars, held at Politecnico di Milano on the 23rd of February 2018, three invited speakers will present their research on new trends in learning on non-Euclidean spaces.


Speaker: Luigi Malagò, Machine Learning and Optimization Group at the Romanian Institute of Science and Technology (RIST)

Title: Variational AutoEncoder: An Introduction and Recent Perspectives

Abstract: Variational AutoEncoders are generative models consisting of two neural networks. The first, an encoder, maps inputs to the parameters of a probability density function over the latent space; the second, a decoder in cascade, maps latent variables to probability density functions over the observation space. Variational AutoEncoders are usually trained with variational inference approaches, in particular by maximizing a lower bound on the log-likelihood of the model, since training the model by optimizing the log-likelihood directly is not computationally efficient. Current research in this field covers several directions, among them the use of richer models over the latent variables and the definition of sharper bounds for the loss function. In this presentation we focus on characterizing the geometrical properties of the statistical model for the latent variables learned during training. In our work we are interested in exploiting the intrinsic properties of the latent model to evaluate the impact of the hyper-parameters of a Variational AutoEncoder on the learned geometry, and at the same time to gain insights for the design of more robust and efficient training procedures.
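To make the training objective described above concrete, here is a minimal NumPy sketch (not from the talk) of the lower bound on the log-likelihood for a Gaussian encoder and a unit-variance Gaussian decoder. The linear maps `W_mu`, `W_logvar`, and `W_dec` stand in for the two networks and are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_mu, W_logvar):
    # Map inputs to the parameters (mean, log-variance) of a Gaussian
    # density over the latent space.
    return x @ W_mu, x @ W_logvar

def decoder(z, W_dec):
    # Map latent variables to the mean of a unit-variance Gaussian
    # density over the observation space.
    return z @ W_dec

def elbo(x, W_mu, W_logvar, W_dec):
    mu, logvar = encoder(x, W_mu, W_logvar)
    # Reparameterization: z = mu + sigma * eps, with eps ~ N(0, I),
    # so the bound can be estimated by sampling.
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps
    x_hat = decoder(z, W_dec)
    # Reconstruction term: log-likelihood of x under N(x_hat, I), up to a constant.
    rec = -0.5 * np.sum((x - x_hat) ** 2, axis=1)
    # KL divergence between N(mu, sigma^2) and the standard normal prior (closed form).
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=1)
    return float(np.mean(rec - kl))
```

Maximizing this quantity over the weights is what stands in for direct (intractable) maximum-likelihood training.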


Speaker: Jonathan Masci, NNAISENSE

Title: Deep Learning on Graphs and Manifolds

Abstract: Deep Learning approaches (e.g., Convolutional Neural Networks and Recurrent Neural Networks) have achieved unprecedented performance on a broad range of problems from a variety of fields (e.g., Computer Vision and Speech Recognition). Despite these results, research has so far focused mainly on data defined on Euclidean domains (i.e., 1D or 2D grids). Nonetheless, in many fields one may have to deal with data defined on non-Euclidean domains (i.e., graphs and manifolds). The adoption of Deep Learning in these fields lagged behind until very recently, primarily because the non-Euclidean nature of the data makes the definition of basic operations (such as convolution) rather elusive. An overview of methods for Deep Learning on graphs and manifolds will be given, with a focus on 3D shape analysis and classification.
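As an illustration of how convolution can nonetheless be defined on a graph, the sketch below implements one well-known proposal, the symmetrically normalized propagation rule of Graph Convolutional Networks (Kipf & Welling). It is a minimal example under simple assumptions (dense adjacency matrix, single layer with ReLU), not a method presented in the talk.

```python
import numpy as np

def graph_conv(A, X, W):
    # One graph-convolution layer: aggregate neighbor features through a
    # symmetrically normalized adjacency (with self-loops), then apply a
    # learned linear transform and a ReLU nonlinearity.
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
```

Unlike a grid convolution, this operation has no fixed neighborhood ordering; it is equivariant to relabeling the nodes, which is exactly the property one wants on a non-Euclidean domain.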


Speaker: Francesco Visin, DeepMind

Title: Graph Networks

Abstract: Graph Networks are an emerging paradigm for dealing with graph-structured data, arising in tasks such as social network analysis or 3D mesh classification. Quite a few models have recently been proposed in the literature for training and inference on graphs, but no common framework has been proposed to provide a general perspective on the topic. In this talk a general model encompassing several Graph Network variants is presented and discussed.
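One way such a general model can be organized (following the Graph Network block formulation associated with DeepMind) is as three successive updates, over edges, nodes, and a global attribute. The sketch below is an assumption-laden illustration, not the talk's model: it fixes sum aggregation, and the update functions `phi_e`, `phi_v`, `phi_u` are hypothetical callables supplied by the user.

```python
import numpy as np

def gn_block(V, E, senders, receivers, u, phi_e, phi_v, phi_u):
    """One Graph Network block over node features V, edge features E,
    edge endpoint indices (senders, receivers), and global attribute u."""
    n_edges, n_nodes = len(E), len(V)
    # 1. Edge update: each edge sees its own attribute, both endpoints,
    #    and the global attribute.
    E_new = phi_e(np.concatenate(
        [E, V[senders], V[receivers],
         np.broadcast_to(u, (n_edges, len(u)))], axis=1))
    # 2. Node update: sum-aggregate the updated incoming edges per node,
    #    then combine with the node's own attribute and the global.
    agg = np.zeros((n_nodes, E_new.shape[1]))
    np.add.at(agg, receivers, E_new)          # scatter-add by receiver index
    V_new = phi_v(np.concatenate(
        [agg, V, np.broadcast_to(u, (n_nodes, len(u)))], axis=1))
    # 3. Global update: from aggregated edges, aggregated nodes, and u.
    u_new = phi_u(np.concatenate([E_new.sum(0), V_new.sum(0), u]))
    return V_new, E_new, u_new
```

Special cases of this block (dropping the global, changing the aggregator, or restricting what each update function sees) recover many of the individual models proposed in the literature.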