💪🏻 AutoML recap
AutoML recap is a collection of ten issues in which we covered the evolution of the Automated Machine Learning (AutoML) space, its most relevant concepts, technologies, and research papers.
This complete mini-series will help you familiarize yourself with the topic and reinforce your knowledge of key parts of AutoML such as NAS, meta-learning, and hyperparameter optimization.
Subscribe, read, and save this collection, as you might want to come back to it later.
💡 Understanding AutoML and its Different Disciplines
The fascination with AutoML is rooted in the idea of using machine learning to create better machine learning models. To some, the term AutoML seems a bit broad, intersecting with several areas of ML, and the many “loose definitions” of AutoML in circulation can cause confusion. It is important to realize that AutoML is not one method but a large collection of techniques spanning several schools of machine learning. In general, most AutoML methods fall into one of the following categories:
Hyperparameter Optimization (HPO): Methods that find the best combination of hyperparameters for a machine learning architecture.
Neural Architecture Search (NAS): Methods that focus on finding the best machine learning architecture for a given problem.
Meta-Learning: Methods that focus on automated learning based on prior experience with other tasks.
In Edge#61 (available to read without a subscription), we provide an overview of the original AutoML paper and show how Amazon’s AutoGluon brings deep learning to AutoML.
Hyperparameter Optimization (HPO)
Think about the process of learning to play the guitar. With every new melody, we are constantly tuning different aspects of the guitar to make it sound better.
In ML, the aspects you can tune are known as hyperparameters.
Conceptually, a hyperparameter is a parameter that helps control the learning process. Hyperparameters differ from other parameters in that they can’t be directly inferred from the training process: they are set before the algorithm is trained and govern the entire training process. Edge#1 covers the basic concept as well as the Lottery Ticket Hypothesis and Weights and Biases – one of the top platforms in the market for hyperparameter optimization of ML models.
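To make this concrete, here is a minimal sketch (assuming scikit-learn and one of its bundled toy datasets): the hyperparameters below are fixed before fit() is called and shape how training behaves, while the model’s internal parameters are learned from the data.

```python
# Hyperparameters are chosen before training; parameters are learned during training.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(
    n_estimators=200,   # hyperparameter: how many trees to grow
    max_depth=5,        # hyperparameter: how deep each tree may get
    random_state=0,
)
model.fit(X_train, y_train)                  # the model's parameters are learned here
print("test accuracy:", model.score(X_test, y_test))
```

Changing n_estimators or max_depth and re-running the script is, in effect, manual hyperparameter tuning – the process that the HPO methods below automate.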
In Edge#63, we discuss two of the most widely adopted HPO methods: random search and grid search, both of which fall under the umbrella of black-box optimization. We also explore H2O AutoML and look into how DeepMind and Waymo use AutoML to train self-driving cars.
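As a rough sketch of how these two methods look in practice (using scikit-learn’s built-in GridSearchCV and RandomizedSearchCV on a toy dataset, not any specific tool covered in the issue):

```python
# Grid search exhaustively tries every combination in a fixed grid;
# random search samples a fixed budget of combinations from distributions.
from scipy.stats import randint
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [3, 5, 10]},
    cv=3,
).fit(X, y)

rand = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": randint(50, 300), "max_depth": randint(3, 12)},
    n_iter=9,
    cv=3,
    random_state=0,
).fit(X, y)

print("grid search best:", grid.best_params_)
print("random search best:", rand.best_params_)
```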
Edge#65 is about Bayesian HPO, which chooses hyperparameters using a probabilistic model and concentrates the search on the areas with the most promising validation scores. In this issue, we also discuss how Amazon uses AutoML for the entire lifecycle of ML models, and explore Azure AutoML, one of the simplest and most robust AutoML platforms in the market.
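Below is a minimal, from-scratch sketch of that idea (not the implementation discussed in the issue): a Gaussian process surrogate, via scikit-learn, models validation score as a function of a single hypothetical hyperparameter – the learning rate – and an expected-improvement rule picks the next trial.

```python
# Bayesian HPO sketch: fit a probabilistic surrogate to past trials,
# then sample next where the expected improvement over the best score is highest.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def validation_score(lr):
    # Stand-in for "train a model with this hyperparameter and return its score".
    return -(np.log10(lr) + 2.0) ** 2 + np.random.normal(scale=0.05)

log_lr_grid = np.linspace(-5, 0, 200).reshape(-1, 1)   # search space: 1e-5 .. 1
X = list(np.random.uniform(-5, 0, size=3))             # a few random initial trials
y = [validation_score(10 ** x) for x in X]

gp = GaussianProcessRegressor(normalize_y=True)
for _ in range(10):
    gp.fit(np.array(X).reshape(-1, 1), y)
    mu, sigma = gp.predict(log_lr_grid, return_std=True)
    best = max(y)
    z = (mu - best) / (sigma + 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = log_lr_grid[np.argmax(ei), 0]
    X.append(x_next)
    y.append(validation_score(10 ** x_next))

print("best learning rate found:", 10 ** X[int(np.argmax(y))])
```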
Neural Architecture Search (NAS)
NAS is a machine learning technique for automating the creation of neural networks. Edge#4 offers an introduction to the concept and a review of the original NAS paper, while Edge#67 goes deeper, dissecting NAS in the context of AutoML. While the vast majority of AutoML implementations are based on hyperparameter optimization (HPO) methods over existing architectures, NAS focuses on the discovery of new architectures. In Edge#67, we also discuss Project Petridish, a new type of NAS algorithm, and Microsoft’s Archai, an open-source NAS framework.
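The core loop is easy to sketch. The toy example below (an illustration only, not any of the systems discussed in the issue) uses plain random search as the search strategy and scikit-learn’s MLPClassifier as a stand-in evaluator.

```python
# NAS in miniature: sample candidate architectures from a search space,
# train/evaluate each one, and keep the best.
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Toy search space: number of hidden layers and units per layer.
search_space = {"n_layers": [1, 2, 3], "width": [16, 32, 64, 128]}

best_arch, best_score = None, -1.0
for _ in range(10):                          # evaluate 10 random candidates
    arch = (random.choice(search_space["n_layers"]),
            random.choice(search_space["width"]))
    model = MLPClassifier(hidden_layer_sizes=(arch[1],) * arch[0],
                          max_iter=300, random_state=0)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_arch, best_score = arch, score

print(f"best architecture: {best_arch[0]} layers x {best_arch[1]} units "
      f"(accuracy={best_score:.3f})")
```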
Edge#69 develops the topic and introduces search strategies in NAS; we also explore Google’s Evolved Transformer, a killer combination of transformers and NAS, and discuss Microsoft’s Neural Network Intelligence – the most impressive AutoML framework you have never heard of. Finally, Edge#71 covers Differentiable Architecture Search (DARTS), an emerging search strategy within the NAS space. We also learn how Facebook-Berkeley-Nets (FBNet) uses NAS to produce efficient CNNs, and dive into Google’s AdaNet – a lightweight AutoML framework for TensorFlow.
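To give a flavor of DARTS’s central trick – the continuous relaxation of the search space – here is a minimal sketch assuming PyTorch; the operation set and dimensions are illustrative, not those of the original paper.

```python
# DARTS idea in miniature: instead of picking one operation per edge, keep a
# softmax-weighted mixture of candidate operations and learn the mixture weights
# (architecture parameters) by gradient descent alongside the network weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Candidate operations on one edge of the cell (a tiny illustrative set).
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Identity(),
        ])
        # One architecture parameter per candidate op (the "alpha" in DARTS).
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        # Continuous relaxation: the edge output is a weighted sum of all candidates.
        return sum(w * op(x) for w, op in zip(weights, self.ops))

edge = MixedOp(channels=8)
out = edge(torch.randn(1, 8, 16, 16))
# After the search, the edge is discretized by keeping the op with the largest alpha.
print(out.shape, F.softmax(edge.alpha, dim=0))
```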
Meta-Learning
Meta-learning is a paradigm that has received a lot of attention in recent years. It centers on a fascinating idea: learning to learn. It typically refers to the ability of a model to improve its learning of sophisticated tasks by reusing knowledge learned in previous tasks. Edge#11 is a full dive into the meta-learning universe, with an overview of the Model-Agnostic Meta-Learning (MAML) for Fast Adaptation of Deep Networks paper by the Berkeley AI Research Lab. The MAML paper is considered one of the most influential papers in the history of meta-learning.
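A minimal sketch of the MAML update, assuming PyTorch and a hypothetical family of 1-D linear-regression tasks, looks roughly like this: the inner loop adapts a copy of the parameters to each task, and the outer loop updates the shared initialization so that this adaptation works well on held-out data.

```python
# MAML in miniature: inner-loop adaptation per task, outer-loop update of the
# shared initialization through second-order gradients.
import torch

def sample_task(n=10):
    # Hypothetical task family: y = a * x + b with a random slope/intercept per task.
    a, b = torch.randn(1), torch.randn(1)
    x = torch.randn(2 * n, 1)
    y = a * x + b
    return x[:n], y[:n], x[n:], y[n:]          # support set, then query set

# Shared initialization of a tiny linear model: y = x @ w + bias.
w = torch.zeros(1, 1, requires_grad=True)
bias = torch.zeros(1, requires_grad=True)
meta_opt = torch.optim.SGD([w, bias], lr=1e-3)
inner_lr = 0.01

for step in range(1000):
    meta_loss = 0.0
    for _ in range(4):                          # a small batch of tasks
        x_s, y_s, x_q, y_q = sample_task()
        # Inner loop: one adaptation step, keeping the graph for second-order grads.
        loss = ((x_s @ w + bias - y_s) ** 2).mean()
        gw, gb = torch.autograd.grad(loss, (w, bias), create_graph=True)
        w_adapted, b_adapted = w - inner_lr * gw, bias - inner_lr * gb
        # Outer objective: how well the adapted parameters do on held-out data.
        meta_loss = meta_loss + ((x_q @ w_adapted + b_adapted - y_q) ** 2).mean()
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```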
The final issue of this mini-series about AutoML, Edge#73, discusses the controversy around meta-learning as a form of AutoML, introduces OpenAI’s Reptile algorithm for efficient meta-learning, and covers the Auto-Keras framework, one of the simplest and most widely used AutoML libraries in the data science space.
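For contrast with the MAML sketch above, here is a minimal sketch of the Reptile idea under the same toy assumptions (PyTorch, a hypothetical family of linear-regression tasks): no second-order gradients are needed; the initialization is simply nudged toward the weights obtained after a few SGD steps on each task.

```python
# Reptile in miniature: adapt on a task with plain SGD, then move the shared
# initialization a fraction of the way toward the adapted weights.
import torch

def sample_task(n=10):
    # Hypothetical task family: y = a * x + b with a random slope/intercept per task.
    a, b = torch.randn(1), torch.randn(1)
    x = torch.randn(n, 1)
    return x, a * x + b

w, bias = torch.zeros(1, 1), torch.zeros(1)
inner_lr, meta_lr = 0.02, 0.1

for step in range(1000):
    x_s, y_s = sample_task()
    # Adapt a copy of the current initialization with a few ordinary SGD steps.
    w_t = w.clone().requires_grad_(True)
    b_t = bias.clone().requires_grad_(True)
    for _ in range(5):
        loss = ((x_s @ w_t + b_t - y_s) ** 2).mean()
        gw, gb = torch.autograd.grad(loss, (w_t, b_t))
        w_t = (w_t - inner_lr * gw).detach().requires_grad_(True)
        b_t = (b_t - inner_lr * gb).detach().requires_grad_(True)
    # Reptile meta-update: move the initialization toward the adapted weights.
    w = w + meta_lr * (w_t.detach() - w)
    bias = bias + meta_lr * (b_t.detach() - bias)

print("learned initialization:", w.item(), bias.item())
```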
We hope you enjoyed this recap. Feel free to share it with those who can benefit from reading it.
By reading TheSequence Edge regularly, you become smarter about ML and AI. Trusted by major AI labs and universities around the world.