🤓😎 Emerging ML Methods Recap
Recap of a topic that we’ve covered in previous issues of TheSequence Edge. These collections will help you navigate specific topics and fill in the gaps if you missed something.
💡 Emerging Machine Learning Methods
Traditional machine learning theory teaches us that the universe is divided into two forms of learning: supervised and unsupervised. Reality is a bit more complicated, and there are dozens of learning paradigms that sit between those two extremes. While supervised methods dominate the modern machine learning ecosystem, they have already hit a wall and cannot be adopted in many scenarios, as they depend on highly accurate labeled datasets that are rarely available. On the other end of the spectrum, the promise of unsupervised machine learning methods seems a bit distant. So the challenge machine learning researchers face is how to create methods that leverage some of the benefits of supervision without requiring expensive training datasets.
→ Become Premium to read the following Edges about new learning paradigms that are becoming increasingly relevant
Edge#14 (read without subscription): Semi-Supervised Learning
Among the methods created in recent years, semi-supervised learning has seen the widest adoption in practical applications.
The goal of semi-supervised learning is to enable the training of ML models by using a small labeled dataset and a large volume of unlabeled data. Semi-supervised learning tries to mitigate the dependency on expensive labeled datasets by learning from unlabeled data. How is that possible?
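One common answer is self-training: a model trained on the small labeled set assigns pseudo-labels to the unlabeled points it is most confident about, then retrains on the enlarged set. The sketch below illustrates the idea with a toy 1-nearest-neighbour "model" on 1-D data; the data, the distance-based confidence proxy, and the threshold are all illustrative assumptions, not from any specific paper.

```python
def self_train(labeled, unlabeled, threshold, rounds=5):
    """Self-training sketch: labeled is a list of (x, label) pairs,
    unlabeled a list of x values (1-D floats for simplicity)."""
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        newly_labeled, remaining = [], []
        for x in pool:
            # "Predict" with the nearest labeled point; distance acts
            # as a crude confidence proxy.
            nx, ny = min(labeled, key=lambda p: abs(p[0] - x))
            if abs(nx - x) <= threshold:
                newly_labeled.append((x, ny))  # confident: pseudo-label it
            else:
                remaining.append(x)            # not confident: keep waiting
        if not newly_labeled:
            break
        labeled.extend(newly_labeled)          # retrain on the enlarged set
        pool = remaining
    return labeled

seeds = [(0.0, "a"), (10.0, "b")]              # tiny labeled dataset
grown = self_train(seeds, [1.0, 2.0, 9.0, 8.0, 5.0], threshold=1.5)
```

Note how labels propagate outward over the rounds: 1.0 is pseudo-labeled first, which then makes 2.0 confident in the next round, while the ambiguous midpoint 5.0 is never labeled.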
In Edge#14, we covered the concept of semi-supervised learning in more detail and discussed a paper that proposes a data augmentation method to advance semi-supervised learning.
Edge#26: Self-Supervised Learning
Another subdiscipline that has recently gained a lot of momentum is known as self-supervised learning. One way to think about self-supervised learning is as autonomous supervised learning. As a form of representation learning, self-supervised learning is able to build knowledge without requiring large amounts of labeled data. This capability tries to address one of the main limitations of modern deep learning: the dependency on large labeled datasets. Even though self-supervised learning is still in its very early stages, it already counts some high-profile champions within the deep learning community, such as AI legends Yann LeCun and Yoshua Bengio, who refer to self-supervised learning as essential to achieving human-level intelligence.
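The defining trick of self-supervised learning is that the "labels" are carved out of the raw data itself. A classic pretext task is next-token prediction: every position in an unlabeled sentence yields a free (context, target) training pair with no human annotation. The corpus and context-window size below are purely illustrative.

```python
def next_token_pairs(tokens, context=2):
    """Turn an unlabeled token sequence into supervised-looking
    (context, target) pairs -- the supervision comes from the data."""
    pairs = []
    for i in range(context, len(tokens)):
        pairs.append((tuple(tokens[i - context:i]), tokens[i]))
    return pairs

corpus = "the cat sat on the mat".split()
pairs = next_token_pairs(corpus)
# e.g. the first pair is (("the", "cat"), "sat") -- no human labeling involved
```

A model trained on such pairs learns representations of the input that can later be fine-tuned on a small labeled dataset for a downstream task.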
In Edge#26, we explained the concept of self-supervised learning, overviewed the self-supervised method for image classification proposed by Facebook, and explored Google’s SimCLR framework for advancing self-supervised learning.
Edge#27: Contrastive Learning
Contrastive learning can be considered a self-supervised learning method. Part of the inspiration behind contrastive learning comes from humans’ ability to learn new concepts by drawing associations with other high-level concepts. For instance, imagine that we are studying prehistoric animals and come across a species that we haven’t seen before. Despite never having seen it, we can draw certain conclusions about our new friend, such as whether it is a bird or a dinosaur, or whether it can swim or fly. We can do that by applying known high-level concepts to our new target. That’s the essence of contrastive learning.
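Mechanically, contrastive methods learn by comparison: an anchor representation is pulled toward a "positive" (e.g. another augmented view of the same example) and pushed away from "negatives" (views of other examples). Below is a minimal sketch of an InfoNCE-style objective of the kind SimCLR-like methods optimize; the toy vectors and temperature value are illustrative assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss: low when the anchor is more
    similar to its positive than to every negative."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / temperature) for s in sims]
    return -math.log(exps[0] / sum(exps))

anchor    = [1.0, 0.0]
positive  = [0.9, 0.1]               # another "view" of the same example
negatives = [[0.0, 1.0], [-1.0, 0.0]]  # views of different examples
loss = info_nce(anchor, positive, negatives)
```

The loss is near zero here because the positive is far closer to the anchor than either negative; swapping the positive with a negative makes the loss jump, which is exactly the gradient signal that shapes the learned representations.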
In Edge#27, we presented the concept of contrastive learning and explored Google’s research on view selection for contrastive learning.
By reading TheSequence Edge regularly, you become smarter about ML and AI. Trusted by major AI labs and universities around the world.