Meta Learning Seminar
Recent Advancements on Natural Gradient: Meta-Learned Gradient Preconditioning
Speaker: Luigi Malagò, Romanian Institute of Science and Technology - RIST
Abstract: The natural gradient, first introduced by Amari (1998), makes it possible to train neural networks by explicitly taking into account the non-Euclidean geometry of the (conditional) statistical models associated with neural networks. Computing the natural gradient requires inverting the Fisher matrix, which limits its adoption for training large networks unless approximations are introduced to reduce the computational cost. Meta-learned gradient preconditioning is an approach in meta-learning in which the gradient of a neural network is preconditioned based on the task. For instance, the preconditioning can be produced by another network that processes the available information about the task to be learned, e.g., a small set of images associated with the labels of the classification task. In general, the natural gradient, as well as second-order optimization methods, can be seen as specific instances of gradient preconditioning, exploiting either information about the geometry of the space (e.g., the natural gradient) or about the function to be optimized (e.g., Newton's method). In this presentation we discuss the use of the natural gradient in meta-learning and propose a combined framework for meta-learning natural gradient preconditioning.
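As a side note to the abstract, a minimal numerical sketch of natural-gradient preconditioning, assuming an explicit, small Fisher matrix that can be inverted directly; the function and variable names are illustrative only and are not part of the seminar material:

import numpy as np

def natural_gradient_step(theta, grad, fisher, lr=0.1, damping=1e-3):
    # Natural gradient: precondition the Euclidean gradient with the inverse Fisher matrix,
    # theta_new = theta - lr * F^{-1} grad.
    # A small damping term keeps the inversion numerically stable.
    F = fisher + damping * np.eye(fisher.shape[0])
    nat_grad = np.linalg.solve(F, grad)  # solve F x = grad rather than forming F^{-1}
    return theta - lr * nat_grad

With fisher = np.eye(len(theta)) the step reduces to ordinary gradient descent, which illustrates that the natural gradient is one particular choice of preconditioner among those discussed in the abstract.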
Date and time: 10/01/2020 at 14:30
Room: Seminar Room, Department of Electronics, Information and Bioengineering (DEIB)