🟢⚪️ Edge#202: How to Ship ML-powered Apps with Baseten
Building a performant model is just the start. What comes next?
On Thursdays, we do deep dives into one of the freshest research papers or technology frameworks that is worth your attention. Our goal is to keep you up to date with new developments in AI and introduce you to platforms that tackle ML challenges.
💥 Deep Dive: How to Ship ML-powered Apps with Baseten
It's easier than ever to build machine learning (ML) models. With libraries like TensorFlow, Keras, Scikit-learn, and PyTorch, almost anyone with some basic coding skills can put together a model in a matter of days or weeks. But building a performant model is just the start. The challenge is delivering that model as a production-ready solution. This involves an entirely different set of skills and tasks, from standing up infrastructure to designing business-facing UI and everything in between.
In this deep dive, we will see how Baseten, an end-to-end platform for delivering ML models as production-ready solutions, deals with these challenges.
Model development vs. model deployment
The first era of artificial intelligence (AI) engineers didn't have ML libraries, no-code platforms, or AutoML tools. They had to write their own algorithms from scratch. Deployment was similarly complex, requiring custom hardware and software.
More recently, model development has become a far easier task. But model deployment has become more complex, with the need for highly robust, scalable solutions that can handle millions of predictions per day.
The difference between these two areas is crucial to understand. Model development is typically well-understood by data scientists, but models need to be packaged and integrated as a software component, which demands an entirely different skillset. As a result, even simple models can take over six months and multiple engineering teams to deliver.
Broadly speaking, there are five stages to delivering a machine learning solution:
Prepare data
Train model
Deploy model
Set up backend
Productize model
Once data is prepared, training a model is an iterative, scientific process that's right in the wheelhouse of data scientists.
Deploying a model is where things start to get complicated. This is the domain of software engineers, who need to wrap the model in a robust, scalable solution that can handle many predictions. Deployment includes containerizing the model in tools like Docker and standing up Kubernetes to manage those containers. Ideally, you’re also ensuring that your deployment pipeline has version control, dependency management, and debuggability to set your team up for success as it scales.
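To make this concrete, here is a minimal sketch of the first step: wrapping a trained model behind an HTTP endpoint before it gets containerized and handed to Kubernetes. The FastAPI app, `model.joblib` file, and feature schema are illustrative assumptions, not a prescribed setup; a production service would add validation, batching, logging, and authentication.

```python
# Minimal sketch: serve a pickled scikit-learn model behind an HTTP endpoint.
# File name and request schema are illustrative placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumes a previously trained, serialized model

class PredictRequest(BaseModel):
    features: list[float]  # one flat feature vector per request

@app.post("/predict")
def predict(req: PredictRequest):
    # scikit-learn expects a 2D array of shape (n_samples, n_features)
    prediction = model.predict([req.features])
    return {"prediction": prediction.tolist()}
```

This service is what then gets containerized and scaled; the serving code itself is the easy part, while keeping it versioned, reproducible, and debuggable across environments is where the engineering effort goes.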
Along with deployment, some backend work is required to handle pre- and post-processing at the time of inference. And to integrate the model into your broader stack, additional backend services are needed to call the model and push its predictions to other data stores and tools.
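A hedged sketch of what that glue code often looks like is below. The scaler, label names, and downstream `store` object are hypothetical placeholders standing in for whatever preprocessing, label mapping, and data store a real pipeline uses.

```python
# Illustrative pre/post-processing around a single prediction.
# The scaler, labels, and `store` target are hypothetical placeholders.
import numpy as np

LABELS = ["negative", "neutral", "positive"]

def preprocess(raw_features, scaler):
    # apply the same scaling used during training
    return scaler.transform(np.asarray(raw_features).reshape(1, -1))

def postprocess(class_index):
    # map the model's numeric output back to a business-friendly label
    return {"label": LABELS[int(class_index)]}

def predict_and_push(raw_features, model, scaler, store):
    x = preprocess(raw_features, scaler)
    result = postprocess(model.predict(x)[0])
    store.save(result)  # push to a data store / CRM / dashboard (hypothetical API)
    return result
```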
Finally, if your model is intended for business users, you’ll need to build a frontend for teams to access and act on predictions, which typically means hand-building views in HTML and CSS.
The role of the data scientist
In most organizations, data scientists are responsible for data preparation and model building. After that, the lines start to blur. In some cases, data scientists hand off the model to an engineering team for deployment. In others, data scientists work with engineers throughout the entire process.
Handling all aspects of an ML solution is a huge undertaking that requires many different skills. As a result, it's important to have a clear understanding of the role of the data scientist in the model delivery process.
How Baseten solves these challenges
What we like about Baseten is that it is built specifically for data scientists, so you don't need to know much about MLOps, backend, or frontend development to use it. Think of Baseten as an end-to-end platform for delivering machine learning models as production-ready solutions. Data scientists can deploy a model behind an API with a few lines of Python code, right from their Jupyter notebook, with no new frameworks or toolkits to learn.
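As a rough sketch of that notebook workflow, deployment boils down to training as usual and then handing the model object to the client. The exact client calls below are illustrative of the flow described above rather than a definitive reference; consult Baseten's documentation for the current API.

```python
# Sketch of notebook-based deployment. Client call names are illustrative;
# check Baseten's docs for the exact, current interface.
import baseten
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier().fit(X, y)  # train as you normally would

baseten.login("YOUR_API_KEY")                         # authenticate once per session
baseten.deploy(model, model_name="Iris classifier")   # model is now served behind an API
```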
If you need more than an API, Baseten also makes it easy to integrate with other services and data stores. A simple drag-and-drop interface lets you design full-stack, interactive views for business users. And when it's time to ship your application, you can share a live link to your web app in a few clicks.
Baseten comes with many models out of the box for a wide variety of tasks, for example:
sentiment analysis
image classification
object detection
And because Baseten is library-agnostic, you can deploy your models from TensorFlow, Scikit-learn, PyTorch, or your custom framework of choice.
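Whichever library the model comes from, once it sits behind an API any service can call it over HTTP. The endpoint URL, auth header, and payload shape below are hypothetical placeholders meant only to show the pattern.

```python
# Calling a deployed model over HTTP. URL, header, and payload are
# hypothetical placeholders, not Baseten's exact request format.
import requests

resp = requests.post(
    "https://app.baseten.co/models/MODEL_ID/predict",   # illustrative endpoint
    headers={"Authorization": "Api-Key YOUR_API_KEY"},
    json={"inputs": [[5.1, 3.5, 1.4, 0.2]]},
)
resp.raise_for_status()
print(resp.json())
```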
Worklets and Blocks to Create Applications
Application logic in Baseten is built with Worklets: visual representations of code and model execution, each backed by an API endpoint. You can think of a Worklet as a directed acyclic graph (DAG). This visual approach lets data scientists focus on their application's business logic and flow, like "Classify an image" or "Detect objects in an image," without needing to worry about infrastructure and database instances.
Worklets are composed of blocks, where each block acts as a work unit, helping keep code organized and easily understandable. For example, data scientists can add actions like “Invoke Model” or “Send a Slack Message” using Baseten’s pre-built blocks. For more specific tasks, you can write custom Python code. Teams can also test and debug code, track run logs, and connect and query from data stores – all from one central environment.
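For intuition, a custom Python block in a "Classify an image" worklet might look roughly like the sketch below. The block signature and the `context` helpers are assumptions made for illustration, not Baseten's actual block interface; the same steps can be assembled from the pre-built "Invoke Model" and "Send a Slack Message" blocks without writing code.

```python
# Hypothetical custom code block inside a "Classify an image" worklet.
# The function signature and `context` helpers are illustrative assumptions.
def classify_and_notify(inputs, context):
    image = inputs["image"]                                         # output of the previous block
    prediction = context.invoke_model("image-classifier", image)    # "Invoke Model" step
    if prediction["confidence"] < 0.5:
        context.send_slack_message(                                 # "Send a Slack Message" step
            channel="#ml-alerts",
            text=f"Low-confidence prediction: {prediction['label']}",
        )
    return prediction
```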
Empowering all sorts of teams with ML
Data science is no longer the domain of a few PhDs in the basement. With the rise of tools like Baseten, organizations can more easily deploy and realize the impact of machine learning models across departments. After all, business use cases for AI are remarkably varied.
Marketing teams, for instance, are facing steep competition in the digital landscape, limited budgets, and short timeframes to show results. They need all the help they can get to reach and engage their audiences. Machine learning can be used to detect user behavior patterns, predict which products or services a customer is most likely to buy, and build dynamic marketing lists for more effective campaigns.
Sales teams are another department that can benefit from ML. With a never-ending stream of leads, it's brutally hard for sales reps to follow up with every single one. Machine learning can be used to automatically prioritize and score leads, so that reps can focus their time on the most promising prospects.
Meanwhile, financial teams are under constant pressure to detect and prevent fraud. ML can help with real-time fraud detection, protecting organizations from costly losses.
Customer support teams are also turning to ML to build chatbots that can handle routine inquiries and free up time for agents to focus on more complex issues.
Conclusion
These are just a few examples of how machine learning can empower teams across an organization. Building a model is one thing; you also need a reliable platform to deploy models for these and other tasks without having to worry about infrastructure or MLOps.
Ultimately, if you're looking for a platform to help you deploy machine learning models, Baseten is worth considering. It's easy to use, fast to get going, library-agnostic, and provides a complete solution for shipping full-stack applications.