🤓 Self-Supervised Learning Recap
As requested by our readers, before diving into a new series dedicated to MLOps, we put together a recap of the Self-Supervised Learning (SSL) series. As the proverb says: repetition is the mother of learning ;) For students, professors, and researchers with .edu in their email, we offer 30% off (it’s only $35/year). The offer ends on November 10.
Let’s start with a useful intro to the category as a whole:
💡 What is SSL?
In recent years, SSL has gone from being an obscure research area to powering mission-critical solutions at places like Facebook. The social media giant has steadily become one of the top research labs in SSL, pushing the boundaries of this new type of model. In a recent blog post, Facebook AI Research (FAIR) described SSL as “the dark matter of artificial intelligence (AI).” The analogy reflects FAIR’s belief that SSL can help unlock the next frontiers of AI. In general, SSL tries to address some of the limitations of traditional supervised learning methods, which depend on large amounts of labeled data that are unobtainable for many tasks. Additionally, supervised methods need to be retrained from the ground up for every task and struggle to reuse knowledge from one task to another. SSL looks to address those challenges by emulating some of the common-sense capabilities of the human brain that help us learn new tasks with very little supervision.
Like many other ideas in AI, SSL draws inspiration from the learning process in babies. Even in their first days of life, babies start developing a representation of the world simply by observing it. These representations are solidified later, when they get to take actions on physical objects and evaluate the corresponding reactions, but, in the beginning, observation is key. Similarly, SSL trains models on vast amounts of largely unlabeled data and expects them to develop representations and master different tasks related to the target dataset. SSL combines ideas from energy-based models, contrastive learning, and many more. Currently, SSL has seen interesting successes in language, computer vision, and speech analysis. Dive into our series to learn more.
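To make the contrastive idea concrete, here is a minimal, hypothetical sketch of an InfoNCE-style (SimCLR-like) objective in PyTorch: two augmented views of the same unlabeled image are pulled together in embedding space while the other images in the batch are pushed apart, so no labels are needed. The function and variable names below are illustrative assumptions, not the API of any specific library covered in the series.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """Illustrative contrastive (InfoNCE / NT-Xent style) loss.

    z1, z2: (batch, dim) embeddings of two random augmentations of the
    same unlabeled images. The "positive" for each example is its other
    view; every other example in the batch acts as a negative.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2N, dim)
    sim = z @ z.t() / temperature             # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))         # an example is never its own positive
    n = z1.size(0)
    # View i in z1 pairs with view i in z2, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage: in practice, z1 and z2 come from an encoder + projection head.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z1, z2)
```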
The recap requires a Premium subscription. Forward this email to those who might benefit from reading it; you can also give it as a gift.
🌌 Edge#123: Self-supervised learning and why it was called “the dark matter of artificial intelligence (AI)”; +VISSL, a framework for SSL in computer vision.
⚡️ Edge#125: SSL as an energy-based method; Wav2vec, an SSL method for speech recognition; +Lightly, a Python library for SSL on images.
🐼 Edge#127: SSL as contrastive learning; view selection for contrastive learning; +SimCLR, an open-source framework for contrastive learning.
🐹 Edge#129: SSL as non-contrastive learning; DeepMind’s BYOL, which makes non-contrastive SSL real; +Facebook’s Polygames, a framework to train deep learning agents through self-play.
✍🏽 Edge#131: SSL for Language; XLM-R, one of the most powerful SSL cross-lingual models ever built; +Facebook’s fastText, a library for representation learning in language tasks.
🗣 Edge#133: SSL for Speech; AVID, an SSL model for audio-visual tasks; +s3prl, an open-source framework for SSL speech models.
👓 Edge#135: SSL for Computer Vision; SEER, one of the most powerful SSL models for computer vision ever built; +the Hugging Face library for computer vision.
Next week we will start the MLOps series. Stay tuned!