Meta learning is an exciting research trend in machine learning that enables a model to understand its own learning process. Unlike other ML paradigms, meta learning allows a model to learn quickly from only a small amount of data.
Hands-On Meta Learning with Python starts by explaining the fundamentals of meta learning and helps you understand the concept of learning to learn. You will delve into various one-shot learning algorithms, such as Siamese, prototypical, relation, and memory-augmented networks, by implementing them in TensorFlow and Keras. As you make your way through the book, you will dive into state-of-the-art meta learning algorithms such as MAML, Reptile, and CAML. You will then explore how to learn quickly with Meta-SGD and how to perform unsupervised learning using meta learning with CACTUs. In the concluding chapters, you will work through recent trends in meta learning such as adversarial meta learning, task-agnostic meta learning, and meta imitation learning.
By the end of this book, you will be familiar with state-of-the-art meta learning algorithms and will be able to enable human-like cognition in your machine learning models.
Check out my Deep Reinforcement Learning Repo here.
Check out the curated list of meta learning papers, code, books, blogs, videos, datasets, and other resources here.
- 1.1. What is Meta Learning?
- 1.2. Meta Learning and Few-Shot Learning
- 1.3. Types of Meta Learning
- 1.4. Learning to Learn Gradient Descent by Gradient Descent
- 1.5. Optimization As a Model for Few-Shot Learning
- 2.1. What are Siamese Networks?
- 2.2. Architecture of Siamese Networks
- 2.3. Applications of Siamese Networks
- 2.4. Face Recognition Using Siamese Networks
- 2.5. Audio Recognition Using Siamese Networks
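The core idea behind the Siamese networks covered above can be sketched in a few lines: run the *same* embedding function on both inputs and compare the results with a distance. The snippet below is a minimal illustration in NumPy, using a fixed linear projection as a stand-in for the learned twin network (in the book the embedding is a trained TensorFlow/Keras model):

```python
import numpy as np

def siamese_score(embed, a, b):
    """Siamese-style comparison: embed both inputs with the SAME function
    and measure their distance (smaller = more similar)."""
    return np.linalg.norm(embed(a) - embed(b))

# Stand-in embedding: a fixed linear projection (a real twin network is learned).
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
embed = lambda x: W @ x

same = siamese_score(embed, np.array([1.0, 1.0]), np.array([1.0, 1.0]))
diff = siamese_score(embed, np.array([1.0, 1.0]), np.array([-1.0, 1.0]))
print(same < diff)  # identical inputs are closer in embedding space -> True
```

Because both branches share weights, the network learns a single notion of similarity rather than two separate representations.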
- 3.1. Prototypical Network
- 3.2. Algorithm of Prototypical Network
- 3.3. Omniglot Character Set Classification Using a Prototypical Network
- 3.4. Gaussian Prototypical Network
- 3.5. Gaussian Prototypical Network Algorithm
- 3.6. Semi-Prototypical Network
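The prototypical network idea from this chapter reduces to two steps: average each class's support embeddings into a prototype, then classify a query by its nearest prototype. Here is a minimal NumPy sketch, using toy 2-D vectors as stand-ins for learned embeddings:

```python
import numpy as np

def prototypes(support, labels):
    """Class prototype c_k = mean of the support embeddings of class k."""
    classes = sorted(set(labels))
    return {k: np.mean([e for e, y in zip(support, labels) if y == k], axis=0)
            for k in classes}

def classify(query, protos):
    """Assign the query to the class with the nearest (Euclidean) prototype."""
    return min(protos, key=lambda k: np.linalg.norm(query - protos[k]))

# Toy 2-D "embeddings": two classes clustered apart.
support = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 1.1], [1.1, 1.0]])
labels = [0, 0, 1, 1]
protos = prototypes(support, labels)
print(classify(np.array([0.05, 0.05]), protos))  # -> 0
print(classify(np.array([1.0, 1.0]), protos))    # -> 1
```

In the full algorithm, the embedding function is a trained network and the distance is computed in its learned feature space; the Gaussian variant additionally predicts a per-prototype confidence (covariance).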
- 4.1. Relation Networks
- 4.2. Relation Networks in One-Shot Learning
- 4.3. Relation Networks in Few-Shot Learning
- 4.4. Relation Networks in Zero-Shot Learning
- 4.5. Building Relation Networks Using TensorFlow
- 4.6. Matching Networks
- 4.7. Embedding Functions
- 4.8. Architecture of Matching Networks
- 4.9. Matching Networks in TensorFlow
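The attention mechanism at the heart of the matching networks sections above can be sketched compactly: a query's label distribution is a softmax-weighted combination of the support labels, with weights given by similarity between embeddings. The NumPy sketch below assumes cosine similarity over fixed toy embeddings (the book versions use learned embedding functions):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def matching_predict(query, support, one_hot_labels):
    """Matching-network attention: a(x, x_i) = softmax over cosine(x, x_i);
    prediction y_hat = sum_i a(x, x_i) * y_i."""
    attn = softmax(np.array([cosine(query, s) for s in support]))
    return attn @ one_hot_labels

support = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
labels = np.array([[1, 0], [1, 0], [0, 1]], dtype=float)
probs = matching_predict(np.array([1.0, 0.05]), support, labels)
print(probs.argmax())  # -> 0 (the query matches the first two support points)
```

Because the prediction is a weighted vote over the support set, new classes can be handled at test time without retraining, simply by swapping in a new support set.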
- 5.1. Neural Turing Machine
- 5.2. Reading and Writing in NTM
- 5.3. Addressing Mechanisms
- 5.4. Copy Task using NTM
- 5.5. Memory Augmented Neural Networks
- 5.6. Reading and Writing in MANN
- 5.7. Building MANN in TensorFlow
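The content-based addressing used for reading and writing in the NTM sections above boils down to a softmax over similarity scores between a key vector and each memory row. A minimal NumPy sketch, with a tiny hand-built memory matrix standing in for learned memory contents:

```python
import numpy as np

def content_addressing(memory, key, beta=5.0):
    """NTM-style content addressing: weights over memory rows are a softmax
    of the key-strength (beta) scaled cosine similarity to each row."""
    sims = np.array([
        np.dot(key, row) / (np.linalg.norm(key) * np.linalg.norm(row) + 1e-8)
        for row in memory
    ])
    e = np.exp(beta * (sims - sims.max()))  # numerically stable softmax
    return e / e.sum()

# Toy 3x3 memory with one-hot rows.
memory = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
w = content_addressing(memory, np.array([0.9, 0.1, 0.0]))
print(w.argmax())  # -> 0: attention focuses on the most similar row
```

The weights sum to 1, so a read is simply `w @ memory`, a soft blend of rows; the key strength `beta` sharpens or flattens the focus.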
- 6.1. Model Agnostic Meta Learning
- 6.2. MAML Algorithm
- 6.3. MAML in Supervised Learning
- 6.4. MAML in Reinforcement Learning
- 6.5. Building MAML from Scratch
- 6.6. Adversarial Meta Learning
- 6.7. Building ADML from Scratch
- 6.8. CAML
- 6.9. CAML Algorithm
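The two-level optimization that MAML builds on (inner loop: adapt to each task; outer loop: update the meta-parameter so that adaptation works well) can be sketched on a toy scalar regression problem. The snippet below uses the first-order approximation (FOMAML), which drops the second-order terms, and hand-derived gradients instead of TensorFlow autodiff; the task family and all names here are illustrative:

```python
import numpy as np

def loss_grad(w, x, y):
    """Gradient of the MSE loss L(w) = mean((w*x - y)^2) for a scalar linear model."""
    return 2.0 * np.mean(x * (w * x - y))

def fomaml_step(w, tasks, alpha=0.01, beta=0.05):
    """One first-order MAML meta-update: one inner gradient step per task,
    then apply the averaged post-adaptation gradients to the meta-parameter."""
    meta_grad = 0.0
    for x, y in tasks:
        w_adapted = w - alpha * loss_grad(w, x, y)   # inner loop: task adaptation
        meta_grad += loss_grad(w_adapted, x, y)      # outer loop: meta-gradient
    return w - beta * meta_grad / len(tasks)

# Toy task family: y = slope * x, with slopes clustered around 2.0.
rng = np.random.default_rng(0)
tasks = []
for slope in [1.8, 1.9, 2.1, 2.2]:
    x = rng.uniform(-1, 1, 10)
    tasks.append((x, slope * x))

w = 0.0
for _ in range(500):
    w = fomaml_step(w, tasks)
print(w)  # the meta-parameter settles near the mean task slope (~2.0)
```

The meta-parameter converges to a point from which one gradient step reaches any individual task quickly, which is exactly the "good initialization" that MAML optimizes for; the full algorithm differentiates through the inner step rather than approximating it.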
- 7.1. Meta-SGD
- 7.2. Meta-SGD in Supervised Learning
- 7.3. Meta-SGD in Reinforcement Learning
- 7.4. Building Meta-SGD from Scratch
- 7.5. Reptile
- 7.6. Reptile Algorithm
- 7.7. Sine Wave Regression Using Reptile
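Reptile, covered in 7.5-7.7, replaces MAML's meta-gradient with something much simpler: adapt to a sampled task with a few SGD steps, then nudge the meta-parameter toward the adapted weights. A toy NumPy sketch on scalar linear regression (task family and hyperparameters here are illustrative, not from the book):

```python
import numpy as np

def sgd_adapt(w, x, y, alpha=0.02, steps=5):
    """A few inner SGD steps on the MSE loss for the scalar model y = w*x."""
    for _ in range(steps):
        w = w - alpha * 2.0 * np.mean(x * (w * x - y))
    return w

def reptile_step(w, task, alpha=0.02, epsilon=0.1):
    """One Reptile update: adapt on the task, then move the meta-parameter
    a fraction epsilon toward the adapted weights: w <- w + eps * (w' - w)."""
    x, y = task
    w_adapted = sgd_adapt(w, x, y, alpha)
    return w + epsilon * (w_adapted - w)

rng = np.random.default_rng(1)
w = 0.0
for _ in range(2000):
    slope = rng.uniform(1.5, 2.5)        # sample a task: y = slope * x
    x = rng.uniform(-1, 1, 10)
    w = reptile_step(w, (x, slope * x))
print(w)  # settles near the mean task slope (~2.0)
```

Note the contrast with MAML: no gradients of gradients are needed, which is why Reptile is popular when second-order computation is too expensive.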
- 8.1. Gradient Agreement
- 8.2. Weight Calculation
- 8.3. Gradient Agreement Algorithm
- 8.4. Building Gradient Agreement with MAML from Scratch
- 9.1. Task Agnostic Meta Learning
- 9.2. TAML Algorithm
- 9.3. Meta Imitation Learning
- 9.4. MIL Algorithm
- 9.5. CACTUs
- 9.6. Task Generation using CACTUs
- 9.7. Learning to Learn in the Concept Space