🐍🦎 Edge#73: Meta-Learning and AutoML, OpenAI’s Reptile Model, and the Auto-Keras Framework

Mar 23, 2021
✨ You’re on the free list for our news digest TheSequence Scope. We sent you TheSequence Edge so you can keep up with the topics we cover. For the full experience, become a Premium subscriber: it gives you a wider view and the important edge of knowing the most relevant developments in AI and ML.


Participate in today’s quiz.


💡 ML Concept of the Day: Meta-Learning and AutoML

In the final issue of our series about AutoML, we would like to discuss the perspective of meta-learning as a form of AutoML. This position is a bit controversial because, conceptually, meta-learning encompasses a bigger universe of techniques focused on “learning to learn.” However, many meta-learning methods end up generating new machine learning models for a given task, which fits the definition of AutoML quite precisely.

Conceptually, meta-learning typically refers to the ability of a model to improve the learning of sophisticated tasks by reusing knowledge learned in previous tasks. That level of knowledge acquisition and reusability can be foundational in many AutoML methods. While there is a very broad set of meta-learning methods, we can identify three main forms that are relevant to AutoML:

  1. Meta-Learning Methods Based on Model Evaluations

  2. Meta-Learning Methods Based on Task Properties

  3. Meta-Learning Methods Based on Previous Models

->let's unfold it
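To make the “learning to learn” idea a bit more concrete, here is a minimal, hypothetical Python sketch of the first form, meta-learning based on model evaluations: configurations that scored well on previous tasks are used to warm-start the model search on a new task. The configurations, scores, and helper function below are illustrative assumptions, not part of any specific framework.

```python
# Hypothetical sketch: warm-starting model search from prior task evaluations.
# `prior_evaluations` maps hyperparameter configs to scores observed on past tasks.

def warm_start_candidates(prior_evaluations, top_k=3):
    """Return the configurations that worked best on previous tasks."""
    ranked = sorted(prior_evaluations.items(), key=lambda kv: kv[1], reverse=True)
    return [config for config, _ in ranked[:top_k]]

# Scores gathered on earlier tasks (illustrative numbers).
prior_evaluations = {
    ("lr=0.01", "layers=2"): 0.91,
    ("lr=0.001", "layers=4"): 0.88,
    ("lr=0.1", "layers=3"): 0.84,
}

# On a new task, evaluate the warm-start candidates first instead of
# searching the full configuration space from scratch.
for config in warm_start_candidates(prior_evaluations):
    print("evaluate on new task:", config)
```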


🔎 ML Research You Should Know: OpenAI’s Reptile is One of the Most Efficient Meta-Learning Methods Ever Created 

In their paper “On First-Order Meta-Learning Algorithms,” researchers from OpenAI propose a super clever meta-learning algorithm called Reptile that can quickly learn new tasks drawn from a given distribution.

When I first read about Reptile, I was surprised that this method worked at all. At first glance, Reptile seems like the type of meta-learning method that could only work on problems where zero-shot learning is possible, as it basically relies on performing SGD on a mixture of tasks. However… ->continue reading to stay on top of the most relevant AI&ML papers
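To give a flavor of why “SGD on a mixture of tasks” can be enough, here is a minimal NumPy sketch of Reptile’s outer loop: run a few SGD steps on a sampled task, then move the meta-parameters a fraction of the way toward the task-adapted parameters. The toy task family and loss below are illustrative assumptions, not OpenAI’s code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Toy task family: fit y = a*x with a slope a drawn per task (illustrative)."""
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=32)
    return x, a * x

def sgd_on_task(theta, x, y, inner_steps=5, inner_lr=0.05):
    """A few plain SGD steps on one task's squared-error loss."""
    for _ in range(inner_steps):
        grad = 2.0 * np.mean((theta * x - y) * x)   # d/dtheta of the MSE
        theta = theta - inner_lr * grad
    return theta

theta = 0.0          # meta-parameters (a single scalar in this toy example)
meta_lr = 0.1        # the epsilon step size of the meta-update

for _ in range(1000):
    x, y = sample_task()
    adapted = sgd_on_task(theta, x, y)
    # Reptile meta-update: move theta toward the task-adapted parameters.
    theta = theta + meta_lr * (adapted - theta)
```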


🤖 ML Technology to Follow: Auto-Keras is an AutoML Framework Every Data Scientist Should Know

Auto-Keras is one of the simplest and most widely-used AutoML libraries in the data science space. 

Unlike other AutoML frameworks on the market, Auto-Keras focuses on deep learning tasks. Its architecture is based on five fundamental components ->learn more
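To show how little code a typical Auto-Keras run requires, here is a minimal sketch of an image-classification search, assuming the autokeras and TensorFlow packages are installed; the trial and epoch counts are illustrative, not recommendations.

```python
import autokeras as ak
from tensorflow.keras.datasets import mnist

# Load a standard benchmark dataset.
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# ImageClassifier searches over deep learning architectures automatically;
# max_trials bounds how many candidate models are tried.
clf = ak.ImageClassifier(max_trials=3, overwrite=True)
clf.fit(x_train, y_train, epochs=5)

# Evaluate the best model found and export it as a regular Keras model.
print(clf.evaluate(x_test, y_test))
model = clf.export_model()
```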


🧠 The Quiz

Now, to our regular quiz. We reward the winners every ten quizzes. The questions are the following:

  1. Which of the following statements best describes OpenAI’s Reptile meta-learning algorithm?

  2. What is the main innovation of Auto-Keras compared to other NAS or AutoML stacks?

Check your knowledge

That was fun! 👏 Thank you. 


TheSequence is a summary of groundbreaking ML research papers, engaging explanations of ML concepts, and explorations of new ML frameworks and platforms, followed by 80,000+ specialists from top AI labs and companies around the world.

Join them
