👨🏼‍🎓👩🏽‍🎓 The Standard for Scalable Deep Learning Models
Weekly news digest curated by industry insiders
Large deep learning models seem to be the norm these days. While deep neural networks with trillions of parameters are very attractive, they are nothing short of a nightmare to train. In most training techniques, the computational cost scales linearly with the number of parameters, resulting in impractical costs for most scenarios. In recent years, mixture of experts (MoE) has emerged as a powerful alternative. Conceptually, MoE operates by partitioning a task into subtasks, routing each input to specialized expert subnetworks, and aggregating their outputs. When applied to deep learning models, MoE has proven to scale sublinearly with respect to the number of parameters, making it the only viable option for scaling deep learning models to trillions of parameters.
The value proposition of MoE has sparked the creation of new frameworks supporting this technique. Facebook AI Research (FAIR) recently released MoE support in fairseq for language models. Similarly, researchers from the famous Beijing Academy of Artificial Intelligence (BAAI) open-sourced FastMoE, an implementation of MoE in PyTorch. A few days ago, Microsoft Research jumped into the MoE space with the release of Tutel, an open-source library that uses MoE to enable the implementation of super large deep neural networks. One of the best things about Tutel is that Microsoft didn't stop at the open-source release: it also deeply optimized the framework for the GPUs available on the Azure platform, streamlining the adoption of this MoE implementation. Little by little, MoE is becoming the gold standard for large deep learning models.
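The sublinear scaling described above comes from sparse routing: a gate assigns each input to only one (or a few) expert subnetworks, so compute per input stays roughly constant no matter how many experts, and therefore parameters, the model holds. Here is a toy NumPy sketch of top-1 routing; the class and parameter names are illustrative assumptions, not the API of Tutel, fairseq, or FastMoE.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMoE:
    """Toy top-1 mixture-of-experts layer (illustrative sketch only)."""

    def __init__(self, d_model, n_experts):
        # One weight matrix per expert; total parameters grow with n_experts.
        self.experts = [rng.standard_normal((d_model, d_model)) * 0.02
                        for _ in range(n_experts)]
        # The gate scores every expert for every input.
        self.gate = rng.standard_normal((d_model, n_experts)) * 0.02

    def forward(self, x):
        # x: (batch, d_model). Each input is routed to exactly one expert,
        # so per-input compute does not grow as experts are added.
        scores = x @ self.gate            # (batch, n_experts)
        chosen = scores.argmax(axis=1)    # top-1 routing decision
        out = np.empty_like(x)
        for e, weights in enumerate(self.experts):
            mask = chosen == e
            if mask.any():
                # Only the tokens routed here touch this expert's weights.
                out[mask] = x[mask] @ weights
        return out, chosen

moe = TinyMoE(d_model=8, n_experts=4)
x = rng.standard_normal((16, 8))
y, routing = moe.forward(x)
print(y.shape, routing.shape)
```

Production MoE systems add refinements this sketch omits (learned load-balancing losses, top-2 routing, expert parallelism across GPUs), but the core idea is the same: parameters scale with the expert count while activated compute does not.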
🍂🍁 TheSequence Scope is our Sunday free digest. To receive high-quality educational content about the most relevant concepts, research papers and developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🍂🍁
🗓 Next week in TheSequence Edge:
Edge#145: we discuss model observability and how it differs from model monitoring; we explore MLTrace, a reference architecture for observability in ML pipelines; we overview Arize AI, a platform that lays the foundation for ML observability.
Edge#146: we deep dive into the Arize AI ML observability platform.
Now, let’s review the most important developments in the AI industry this week.
🔎 ML Research
Deep Learning Demystified
The team from Walmart Labs published a remarkable blog post explaining the mathematical and computer science foundations of deep learning →read more on Walmart Global Tech blog
Predictive Text Selection and Federated Learning
Google Research published a blog post detailing how they used federated learning to improve the Smart Text Selection feature in Android →read more on Google Research blog
Safety Envelopes in Robotic Interactions
Carnegie Mellon University published a paper detailing a probabilistic technique for inferring surfaces that guarantee the safety of robots while interacting with objects in an environment →read more on Carnegie Mellon University blog
🤖 Cool AI Tech Releases
Microsoft Research open-sourced Tutel, a high-performance mixture of experts (MoE) library to train massively large deep learning models →read more on Microsoft Research blog
NVIDIA released a demo showcasing its GauGAN2 model that can generate images from textual input →read more on NVIDIA blog
💸 Money in AI