✅ ML Training at Scale Recap
Last week we finished our mini-series about ML training at scale, one of our most popular series so far. Here is a full recap to help you catch up on the topics we covered. As the proverb (and many ML people) says: repetition is the mother of learning ;)
💡The challenges of ML training at scale
Training is one of the most important aspects of the lifecycle of ML models. In an ecosystem dominated by supervised learning techniques, having proper training architectures is paramount for building robust ML systems. Training is relatively simple to master at a small scale, but its complexity grows rapidly with the size and complexity of a neural network. Over the last few years, the ML community has made significant advances in both the research and implementation of large-scale ML training methods. We dedicated the last few weeks of TheSequence Edges to exploring the latest ML training methods and architectures powering some of the largest ML models in production.
→ in Edge#181 (read it without a subscription), we discuss the complexity of ML training architectures; explain SEED RL, an architecture for massively scaling the training of reinforcement learning agents; overview Horovod, an open-source framework created by Uber to streamline the parallelization of deep learning training workflows.
→ in Edge#183, we explore data vs. model parallelism in distributed training (a toy data-parallel sketch follows this list); discuss how AI training scales; overview Microsoft DeepSpeed, a training framework powering some of the largest neural networks in the world.
→ in Edge#185, we overview centralized vs. decentralized distributed training architectures; explain GPipe, an architecture for training large-scale neural networks; explore TorchElastic, a distributed training framework for PyTorch.
→ in Edge#187, we overview the different types of data parallelism; explain TF-Replicator, DeepMind’s framework for distributed ML training; explore FairScale, a PyTorch-based library for scaling the training of neural networks.
→ in Edge#189, we discuss pipeline parallelism; explore PipeDream, an important Microsoft Research initiative to scale deep learning architectures; overview BigDL, Intel’s open-source library for distributed deep learning on Spark.
→ in Edge#191, concluding the distributed ML training series, we discuss the fundamental enabler of distributed training: the Message Passing Interface (MPI), illustrated in the short allreduce sketch below; overview Google’s paper about General and Scalable Parallelization for ML Computation Graphs; share the most relevant technology stacks for enabling distributed training in TensorFlow applications.
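To make the data-parallelism idea from Edge#183 and Edge#187 a bit more concrete, here is a minimal sketch, not code from any of the frameworks covered in the series, but a plain NumPy simulation with a toy linear-regression model and synthetic data of our own choosing. Each simulated worker computes a gradient on its own shard of the batch, the gradients are averaged (the job an allreduce does in real systems), and every replica applies the same update.

```python
import numpy as np

# Toy data-parallel SGD: one shared weight vector, several simulated workers.
# Each "worker" holds a shard of the batch, computes its local gradient,
# and the gradients are averaged -- the role an allreduce plays in
# frameworks such as Horovod or PyTorch DistributedDataParallel.

rng = np.random.default_rng(0)
n_samples, n_features, n_workers = 512, 8, 4

# Synthetic linear-regression problem: y = X @ true_w + noise.
true_w = rng.normal(size=n_features)
X = rng.normal(size=(n_samples, n_features))
y = X @ true_w + 0.01 * rng.normal(size=n_samples)

# Every replica starts from the same parameters.
w = np.zeros(n_features)
lr = 0.1

# Static sharding: each worker always sees the same slice of the data.
shards = np.array_split(np.arange(n_samples), n_workers)

for step in range(100):
    local_grads = []
    for shard in shards:  # in a real system this loop runs in parallel across devices
        Xs, ys = X[shard], y[shard]
        err = Xs @ w - ys
        local_grads.append(2 * Xs.T @ err / len(shard))  # local MSE gradient

    grad = np.mean(local_grads, axis=0)  # "allreduce": average across workers
    w -= lr * grad                       # identical update on every replica

print("max abs error vs. true weights:", np.abs(w - true_w).max())
```

Because the shards are equally sized, the averaged gradient is exactly the full-batch gradient, which is why data parallelism leaves the optimization math unchanged and only changes where the compute happens.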
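And since Edge#191 points to MPI as the primitive underneath most of these systems, here is an equally small sketch of the allreduce collective itself. The use of mpi4py and the file name are our own assumptions for illustration; none of the Edges above prescribe this particular library. Each rank contributes a local value and every rank receives the sum, which is exactly how data-parallel workers (like the simulated ones above) combine their gradients. Run it with something like `mpirun -n 4 python allreduce_demo.py`.

```python
# allreduce_demo.py -- illustrative sketch; assumes mpi4py and an MPI runtime are installed.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Pretend each rank computed a gradient on its own data shard.
local_grad = np.full(4, fill_value=float(rank))

# Sum the contributions from all ranks, then average locally.
summed = np.empty_like(local_grad)
comm.Allreduce(local_grad, summed, op=MPI.SUM)
avg_grad = summed / size

print(f"rank {rank}: averaged gradient = {avg_grad}")
```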
Next week we are going back to deep learning theory. Our next mini-series will cover graph neural networks (GNNs). Super interesting! Remember: by reading TheSequence Edges regularly, you become smarter about ML and AI 🤓