### Harsha Kokel

*Model-Agnostic Meta-Learning* for Fast Adaptation of Deep Networks

### Chelsea Finn, Pieter Abbeel, Sergey Levine, ICML 2017.

Meta-learning, a.k.a. the "learning to learn" problem, is the field of study where researchers try to learn the parts of a model that, in a standard machine learning setting, are decided by researchers/humans/users. To elaborate, consider a standard gradient-based machine learning problem. Given training data and test data, researchers first decide which loss function to optimize and, based on existing literature or their expertise, figure out various meta-information of the model. In the figure below, for a standard gradient-based machine learning model, meta-information like the network structure, the initialization parameters, the update method, etc. is all decided manually.

Meta-learning research aims to learn a model that can help decide such meta-information (all of it or a subset) for any new task.

One use-case of **meta-learning** is in a field called few-shot learning. In **few-shot learning**, a machine learning algorithm is supposed to learn a model for a task from a few supervised examples. Meta-learning can help in few-shot learning by providing better initialization parameters. **Few-shot learning is the problem of learning a model from a few examples; meta-learning is the problem of learning a model that can easily adapt to a new task from a few examples.**

This is also the premise of the Model-Agnostic Meta-Learning (MAML) paper by Finn et al., 2017.

**Transfer learning** is a research problem in machine learning (ML) that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem^{1}. For deep neural network models, one of the popular approaches to transfer learning is using a **pre-trained model**. A pre-trained model essentially transfers the knowledge of network parameters between different tasks. This is essentially equivalent to providing initialization parameters for the new task. **Transfer learning via a pre-trained model, as well as meta-learning, uses the network parameters of one model as initialization parameters for another model. The difference is in the optimization of the network parameters. While pre-trained models are optimized for some predefined task, meta-learning models are optimized so that they can adapt to new tasks quickly.**

The key idea of the Model-Agnostic Meta-Learning (MAML) algorithm is to **optimize a model that can adapt to a new task** quickly. Consider four (pretty similar) tasks $\mathcal{T}_1, \ldots, \mathcal{T}_4$ with optimal parameters $\theta_1^*, \ldots, \theta_4^*$. Say, for $\mathcal{T}_4$ we have only $K$ supervised examples, but we have a large number of supervised examples for the rest of the tasks, i.e. $\mathcal{T}_1$, $\mathcal{T}_2$, and $\mathcal{T}_3$.

A **transfer learning** approach would train three different models (with parameters $\theta_1$, $\theta_2$, and $\theta_3$), try all three as pre-trained models for $\mathcal{T}_4$, compare the performance, and pick the one that works best, i.e. comes closest to $\theta_4^*$.

**MAML**, on the other hand, uses tasks $\mathcal{T}_1$, $\mathcal{T}_2$, and $\mathcal{T}_3$ for meta-training and treats them the same as task $\mathcal{T}_4$, i.e. it only uses $K$ examples from each task. MAML learns a single model with parameter $\theta$ in meta-training such that, for each task $\mathcal{T}_i$, a gradient step from $\theta$ using that task's $K$ examples reaches a $\theta_i'$. The meta-training objective is to bring each $\theta_i'$ close to the corresponding $\theta_i^*$. So the next update of the parameter $\theta$ is a gradient step in a direction calculated as a linear combination of the gradient steps for tasks $\mathcal{T}_1$ to $\mathcal{T}_3$. This is represented in the figure below.

Meta-learning (bold line: **—**) performs a search in parameter space for a $\theta$ such that a gradient step (gray line: →) for any of the training tasks lands close to that task's optimal parameters $\theta_i^*$. The parameter $\theta$ is then used as the initialization value and fine-tuned for a specific task; this is called **learning** or **adaptation** (broken line: - - -).

During meta-training, MAML adapts the parameter $\theta$ to $\theta_i'$ for each training task $\mathcal{T}_i$ to compute the update. In meta-testing, MAML adapts the parameter $\theta$ for the test task $\mathcal{T}_4$: we obtain $\theta_4'$ by taking a gradient step using the $K$ examples.

The parameter $\theta_i'$ is computed for any task $\mathcal{T}_i$ using the following fine-tuning/learning/adaptation equation:

$$\theta_i' = \theta - \alpha \nabla_\theta \mathcal{L}_{\mathcal{T}_i}(f_\theta)$$
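A minimal numpy sketch of this adaptation step for a linear-regression task (the data, the step size `alpha`, and the `adapt` helper are illustrative, not from the paper):

```python
import numpy as np

def adapt(theta, X, y, alpha=0.01):
    """One adaptation step: theta_i' = theta - alpha * grad of the task's
    mean-squared-error loss, evaluated at theta."""
    grad = 2 * X.T @ (X @ theta - y) / len(y)  # gradient of MSE w.r.t. theta
    return theta - alpha * grad

# K supervised examples from one hypothetical task
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))            # K = 5 examples, 3 features
true_w = np.array([1.0, -2.0, 0.5])    # the task's (unknown) optimal parameters
y = X @ true_w
theta = np.zeros(3)                    # meta-learned initialization
theta_prime = adapt(theta, X, y)       # task-specific adapted parameters
```

One such step moves `theta` toward the task's optimum; the task loss at `theta_prime` is lower than at `theta`.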

Meta-learning aims to reduce the distance between $\theta_i'$ and $\theta_i^*$. Since $\theta_i^*$ is unknown, it instead tries to minimize the loss $\mathcal{L}_{\mathcal{T}_i}(f_{\theta_i'})$ for all the tasks. So the meta-objective is:

$$\min_\theta \sum_{\mathcal{T}_i \sim p(\mathcal{T})} \mathcal{L}_{\mathcal{T}_i}(f_{\theta_i'})$$

Note that we restrict our model to minimize the objective over tasks drawn from a distribution $p(\mathcal{T})$.

Meta-optimization is hence done with the following update equation:

$$\theta \leftarrow \theta - \beta \nabla_\theta \sum_{\mathcal{T}_i \sim p(\mathcal{T})} \mathcal{L}_{\mathcal{T}_i}(f_{\theta_i'})$$

Notice that the update equation above depends on the gradient of the loss $\mathcal{L}_{\mathcal{T}_i}(f_{\theta_i'})$, but $\theta_i'$ itself depends on the gradient of the loss $\mathcal{L}_{\mathcal{T}_i}(f_\theta)$. So evidently MAML involves second-order gradients.
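To see where the second-order term comes from, a worked scalar example may help (the task, its quadratic loss, and the step sizes are invented for illustration). For a task with loss $L(t) = (t - c)^2$, one inner step gives $\theta' = \theta - 2\alpha(\theta - c)$, and differentiating the meta-loss *through* that step contributes the factor $d\theta'/d\theta = 1 - 2\alpha$, which involves the second derivative of the task loss:

```python
# Illustrative scalar task: loss L(t) = (t - c)^2, one inner step of size alpha.
alpha, c, theta = 0.1, 3.0, 0.0

theta_prime = theta - alpha * 2 * (theta - c)    # inner adaptation step

# Meta-gradient by the chain rule: the ordinary gradient at theta_prime times
# d(theta_prime)/d(theta) = 1 - 2*alpha, a term coming from the second
# derivative of the inner-loop loss.
meta_grad = 2 * (theta_prime - c) * (1 - 2 * alpha)
```

A finite-difference check of the meta-loss $(\theta' - c)^2$ around $\theta$ confirms this value; a first-order method would miss the $(1 - 2\alpha)$ factor.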

The full MAML algorithm is quite easy to follow from the above three equations.

Instead of doing the search of $\theta$ over all the training tasks, as in the example above, MAML samples tasks from the distribution $p(\mathcal{T})$. In theory, this might be a distribution over tasks based on the available sample size of each task, or a distribution based on similarity to the test task. In practice, the authors randomly sample a label set from an image corpus and then sample a few examples for training and a few for testing. To update the parameters in line 8 of the algorithm, the code computes the second-order gradient using TensorFlow optimizers.
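A toy sketch of the whole meta-training loop in the spirit of the algorithm (scalar quadratic tasks and all hyperparameters are illustrative; the paper's actual implementation is TensorFlow code on image and RL tasks):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.1, 0.05                      # inner / outer step sizes
task_optima = np.array([1.0, 2.0, 3.0])      # optimal parameter c_i of each task

theta = 10.0                                 # meta-parameter, far from all optima
for _ in range(2000):
    batch = rng.choice(task_optima, size=2)  # sample tasks T_i ~ p(T)
    meta_grad = 0.0
    for c in batch:
        theta_i = theta - alpha * 2 * (theta - c)         # adaptation step
        meta_grad += 2 * (theta_i - c) * (1 - 2 * alpha)  # grad through the step
    theta -= beta * meta_grad                # meta-update (line 8 analogue)
```

For these identical-curvature quadratic tasks, the meta-optimum is the mean of the task optima, so `theta` drifts from 10 toward 2.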

The figure below explains the MAML update equation used in practice. The first arrow for each task is the gradient step from the fine-tuning equation, and the second arrow is from the meta-optimization equation.

#### MAML vs Pretrained

The above image highlights the difference between MAML and pre-trained models for MAML-RL on the 2D navigation task. While the MAML model can adapt to the new task quickly, the pre-trained models take longer.

### First-order MAML

Since MAML involves second-order derivatives, it can be computationally expensive. The authors propose a first-order approximation for such scenarios, obtained by omitting the second-order derivatives.

Since,

$$\nabla_\theta \mathcal{L}_{\mathcal{T}_i}(f_{\theta_i'}) = \left(I - \alpha \nabla^2_\theta \mathcal{L}_{\mathcal{T}_i}(f_\theta)\right) \nabla_{\theta_i'} \mathcal{L}_{\mathcal{T}_i}(f_{\theta_i'}),

$$in **first-order MAML**, the authors drop the second-derivative term and use

$$\nabla_\theta \mathcal{L}_{\mathcal{T}_i}(f_{\theta_i'}) \approx \nabla_{\theta_i'} \mathcal{L}_{\mathcal{T}_i}(f_{\theta_i'})$$
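On a scalar quadratic toy task (again illustrative, not from the paper), the approximation amounts to evaluating the gradient at $\theta_i'$ while treating $d\theta_i'/d\theta$ as the identity:

```python
alpha, c, theta = 0.1, 3.0, 0.0                # scalar task with loss (t - c)^2

theta_prime = theta - alpha * 2 * (theta - c)  # inner adaptation step

# Exact MAML meta-gradient: chain rule through the inner step
exact = 2 * (theta_prime - c) * (1 - 2 * alpha)

# First-order MAML: gradient at theta_prime, with d(theta_prime)/d(theta)
# treated as the identity, dropping the (1 - 2*alpha) factor
first_order = 2 * (theta_prime - c)
```

Here the first-order gradient points in the same direction as the exact one and differs only by the dropped $(1 - 2\alpha)$ factor, while costing no second derivatives.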

### Reptile

Reptile further simplifies the gradient computation of MAML by proposing the following algorithm: initialize $\phi$; then repeatedly sample a task $\mathcal{T}$, compute $\tilde{\phi} = U^k_{\mathcal{T}}(\phi)$ by running $k$ steps of SGD on $\mathcal{T}$ starting from $\phi$, and update $\phi \leftarrow \phi + \epsilon\,(\tilde{\phi} - \phi)$.

Notice that the initial parameter $\theta$ used in MAML is equivalent to $\phi$ in the Reptile algorithm.

Instead of computing the $\theta_i'$ for each task with one gradient step, Reptile computes the parameter $\tilde{\phi}$ by running stochastic gradient descent for $k$ steps. Then, instead of computing the gradient w.r.t. the task loss to update the initial parameter (as done in line 8 of MAML), Reptile recommends simply shifting the initial parameter in the direction of $\tilde{\phi}$, using $(\phi - \tilde{\phi})$ as the gradient. The figure below explains the Reptile update process.
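A toy sketch of the Reptile update on the same kind of scalar quadratic tasks (tasks and hyperparameters are illustrative, not from the Reptile paper):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, eps, k = 0.1, 0.2, 5                  # inner lr, outer step, inner SGD steps
task_optima = np.array([1.0, 2.0, 3.0])      # optimal parameter c of each task

phi = 10.0                                   # initial parameter (phi in Reptile)
for _ in range(1000):
    c = rng.choice(task_optima)              # sample one task
    theta = phi
    for _ in range(k):                       # k steps of SGD on loss (t - c)^2
        theta -= alpha * 2 * (theta - c)
    phi += eps * (theta - phi)               # treat (phi - theta) as the gradient
```

No second derivatives, and not even a meta-gradient at the adapted point: `phi` is simply pulled toward the adapted parameters of each sampled task, ending up near the point from which every task's optimum is a few SGD steps away.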

### References

- Definition of transfer learning from Wikipedia
- ICML 2019 meta-learning tutorial
- Prof. Hung-yi Lee's slides on meta-learning
- Alex Nichol, Joshua Achiam, John Schulman, *On First-Order Meta-Learning Algorithms*, 2018 (the Reptile paper); see also the Reptile blog post
- Paper repro: Deep Metalearning using "MAML" and "Reptile"