In the TheSequence Guest Post series, our partners explain the ML and AI challenges they help solve. In this article, Alon Gubkin, CTO of Aporia, discusses what proper ML infrastructure looks like and offers a guide on how to build it using open-source tools.
How to Build an ML Platform from Scratch
As your data science team grows and you start deploying models to production, the need for proper ML infrastructure, with a standard way to design, train, and deploy models, becomes crucial.
In this guide, we will build a basic ML Platform using open-source tools like Cookiecutter, DVC, MLFlow, FastAPI, Pulumi, and more. We'll also see how to monitor for model drift using Aporia. The final code is available on GitHub.
Keep in mind that this type of project can be huge, often taking a lot of time and resources. Our toy ML Platform therefore won't have tons of features, just the basics, but it should teach you the core principles of building your own ML platform.
Our toy ML Platform will use DVC for data versioning, MLFlow for experiments management, FastAPI for model serving, and Aporia for model monitoring.
We're going to build all of this on top of AWS, but in theory, you could also use Azure, Google Cloud, or any other cloud provider.
It's important to note that when building your own machine learning platform, you should NOT take these tools for granted. You should evaluate alternatives, as they may be more appropriate for your specific use case and business needs.
Model Template
The first component in our machine learning platform is the model template, which we'll build using Cookiecutter for templating and Poetry for package management.
The idea is that when a data scientist starts working on a new project, they will clone our model template (which contains a standard folder structure, Python linting, etc.), develop their model, and easily deploy it when it's ready for production.
The ML models template will contain basic training and serving code.
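To make this concrete, here is a sketch of the Poetry configuration such a template might ship with. The package name, version, and dependency list are illustrative, not the article's actual template:

```toml
# pyproject.toml in the model template (names and versions illustrative)
[tool.poetry]
name = "my-model"
version = "0.1.0"
description = "Model created from the model template"
authors = ["Data Science Team"]

[tool.poetry.dependencies]
python = "^3.10"
fastapi = "*"
mlflow = "*"

[tool.poetry.group.dev.dependencies]
pytest = "*"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```

A data scientist would then run `cookiecutter` against the template repository to generate a new project and `poetry install` to set up its environment.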
Data & Experiment Tracking
The training code in the model template will use the MLFlow client to track experiments.
Those experiments will be sent to the MLFlow server that we'll run on top of Kubernetes (EKS).
The model artifact itself will be saved in an S3 Bucket (the Artifact Storage), and metadata about experiments will be saved in a PostgreSQL database.
We'll also track versions of the dataset using DVC, with an S3 bucket as the remote storage.
Model Serving
For model serving, we'll build a FastAPI server responsible for preprocessing, making predictions, etc.
These model servers are going to run on Kubernetes, and we'll expose them to the internet using Traefik.
Infrastructure as Code
All our infrastructure is going to be deployed using Pulumi, an Infrastructure-as-Code tool similar to Terraform.
If you aren't familiar with the concept, you can read more about it before continuing. Here are some major advantages of this approach:
Versioned: Your infrastructure is versioned, so if you introduce a bug, you can easily revert to a previous version.
Code Reviewed: Each change to the infrastructure can be code reviewed, making you less prone to mistakes.
Sharable: You can easily share infrastructure components by sending the component's source code.
With Pulumi, you can choose to write your infrastructure in a real programming language, such as TypeScript, Python, C#, and more.
Even though the natural choice for an ML platform would be Python, I chose TypeScript because, at the time of writing this post, Pulumi's TypeScript support is more feature-complete.
Repositories & CI/CD
We're going to have two GitHub repositories:
mlplatform-infra: the Pulumi code for the ML Platform's shared, non-model-specific infrastructure (Kubernetes, MLFlow, S3 buckets, etc.).
model-template: the model template code that data scientists can clone, including basic training code, a FastAPI server, etc.
For CI/CD, we're going to use GitHub Actions.
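As a sketch, a workflow in the model-template repository could run linting and tests on every push. The file path, branch name, action versions, and commands below are illustrative, not the article's actual workflow:

```yaml
# .github/workflows/ci.yml (illustrative)
name: CI
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - run: pip install poetry && poetry install
      - run: poetry run pytest
```

A similar workflow in mlplatform-infra would run `pulumi preview` on pull requests and `pulumi up` on merges to main.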
Model Monitoring
We'll now set up a data drift monitor using Aporia. You can try Aporia using the free community edition on Aporia cloud, or install it on Kubernetes using Pulumi.
Start by creating a free account. Once in the platform, click the "Add Model" button on the Models Management dashboard.
Follow the instructions to integrate your model. Then you'll be able to define monitors for Model Drift, Performance Degradation, and more.
Get started!
If you prefer to follow a 2-hour live coding session, check out this YouTube video I made for the MLOps.community. Or you can read the complete how-to guide here.
Have fun!