TheSequence

👨🏼‍🎓👩🏽‍🎓 The Standard for Scalable Deep Learning Models

Weekly news digest curated by the industry insiders

Nov 28, 2021
📝 Editorial 

Large deep learning models seem to be the norm these days. While deep neural networks with trillions of parameters are very attractive, they are nothing short of a nightmare to train. In most training techniques, the computational cost scales linearly with the number of parameters, resulting in impractical costs for most scenarios. In recent years, mixture of experts (MoE) has emerged as a powerful alternative. Conceptually, MoE partitions a task into subtasks, routes each input to a small set of specialized "expert" subnetworks, and aggregates their outputs. When applied to deep learning models, MoE has proven to scale sublinearly with respect to the number of parameters, making it one of the only viable options for scaling deep learning models to trillions of parameters.
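To make the routing idea concrete, here is a toy NumPy sketch of a mixture-of-experts layer with top-k gating. This is an illustrative assumption, not the API of Tutel, fairseq, or FastMoE; the function and variable names (`moe_forward`, `gate_w`, etc.) are hypothetical. The key point it demonstrates is that only k of the n experts run per input, which is why compute grows sublinearly with total parameter count.

```python
import numpy as np

def moe_forward(x, experts, gate_w, k=2):
    """Toy mixture-of-experts layer with top-k gating (illustrative only).

    Each "expert" is just a weight matrix. The gate scores every expert
    for every input row and runs only the k highest-scoring experts,
    combining their outputs with softmax weights. Running k << n experts
    per input is the source of MoE's sublinear compute scaling.
    """
    scores = x @ gate_w                            # (batch, n_experts) routing scores
    top_k = np.argsort(scores, axis=1)[:, -k:]     # indices of the k best experts per row
    out = np.zeros((x.shape[0], experts[0].shape[1]))
    for i in range(x.shape[0]):
        picked = scores[i, top_k[i]]
        weights = np.exp(picked) / np.exp(picked).sum()  # softmax over the k picked experts
        for w, e in zip(weights, top_k[i]):
            out[i] += w * (x[i] @ experts[e])      # weighted sum of the selected experts
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                        # 4 inputs, hidden size 8
experts = [rng.normal(size=(8, 8)) for _ in range(16)]  # 16 experts' weight matrices
gate_w = rng.normal(size=(8, 16))                  # gating projection
y = moe_forward(x, experts, gate_w, k=2)           # only 2 of 16 experts run per input
```

Doubling the number of experts here doubles the parameter count, but each input still touches only two experts, so per-input compute barely changes.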

The value proposition of MoE has sparked the creation of new frameworks supporting this technique. Facebook AI Research (FAIR) recently added MoE support for language models to fairseq. Similarly, researchers from the famous Beijing Academy of Artificial Intelligence (BAAI) open-sourced FastMoE, an implementation of MoE in PyTorch. A few days ago, Microsoft Research jumped into the MoE space with the release of Tutel, an open-source library that uses MoE to enable super large deep neural networks. Notably, Microsoft didn't stop at the open-source release: it also deeply optimized the framework for the GPUs available on the Azure platform, streamlining the adoption of this MoE implementation. Little by little, MoE is becoming the gold standard for large deep learning models.



🍂🍁 TheSequence Scope is our Sunday free digest. To receive high-quality educational content about the most relevant concepts, research papers and developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🍂🍁

🗓 Next week in TheSequence Edge:

Edge#145: we discuss model observability and how it differs from model monitoring; we explore MLTrace, a reference architecture for observability in ML pipelines; we overview Arize AI, a platform that provides the foundation for ML observability.

Edge#146: we deep dive into the Arize AI ML observability platform.


Now, let’s review the most important developments in the AI industry this week.

🔎 ML Research

Deep Learning Demystified 

The team from Walmart Labs published a remarkable blog post explaining the mathematical and computer science foundations of deep learning →read more on Walmart Global Tech blog

Predictive Text Selection and Federated Learning  

Google Research published a blog post detailing how they used federated learning to improve the Smart Text Selection feature in Android →read more on Google Research blog

Safety Envelopes in Robotic Interactions 

Carnegie Mellon University published a paper detailing a probabilistic technique for inferring surfaces that guarantee the safety of robots while interacting with objects in an environment →read more on Carnegie Mellon University blog


🤖 Cool AI Tech Releases

Tutel 

Microsoft Research open-sourced Tutel, a high-performance mixture of experts (MoE) library to train massively large deep learning models →read more on Microsoft Research blog

GauGAN2 

NVIDIA released a demo showcasing its GauGAN2 model that can generate images from textual input →read more on NVIDIA blog


💸 Money in AI

For ML&AI:

  • Open-source neural search company Jina.ai raised a $30 million Series A funding round led by Canaan Partners. Hiring in Berlin/Germany, Beijing and Shenzhen/China.


  • AI-powered transcription company Verbit raised $250 million in a Series E round led by Third Point Ventures. Hiring in Tel Aviv/Israel, New York/US, Kyiv/Ukraine.

  • Intelligent computing platform for digital R&D Rescale raised $105 million in an expanded Series C funding round. Hiring globally.

  • AI-powered e-commerce fulfillment startup Deliverr raised $250 million in a Series E funding round led by Tiger Global. Hiring mostly remote.

  • Medical AI and visualization platform LifeVoxel raised $5 million in a seed round. Hiring in the US and Canada.

IPO

  • China’s AI giant SenseTime received regulatory approval for Hong Kong IPO.

© 2022 Jesus Rodriguez, Ksenia Semenova