On Thursdays, we do deep dives into one of the freshest research papers or technology frameworks that is worth your attention. Our goal is to keep you up to date with new developments in AI to complement the concepts we debate in other editions of our newsletter.
💥 Deep Dive: From Feature Stores to Feature Platforms
Feature stores have emerged as a central piece of the MLOps stack and, in 2021, became a consolidated category. MLOps platforms have started to incorporate feature storage and lifecycle management capabilities as first-class citizens.
Feature stores originated with Uber's Michelangelo, the platform that allowed Uber to go from zero ML models in production to thousands (check Edge#77 about How Feature Stores Were Started). With Michelangelo in place, Uber was using ML in every aspect of its business: pricing, demand forecasting, ETA prediction, matching, etc. Michelangelo focused on solving the end-to-end ML workflow: transforming raw data into features, serving those features to models, deploying the models, making predictions, and monitoring those predictions.
What are Feature Stores and What Problem Do They Solve?
The Michelangelo team at Uber recognized that, of the entire end-to-end ML platform, the most difficult part of putting ML applications into production was managing and transforming the data.
“Finding good features is often the hardest part of machine learning and we have found that building and managing data pipelines is typically one of the most costly pieces of a complete machine learning solution,” says Mike Del Balso, ex-Uber Michelangelo, now CEO of Tecton.
Building production-grade data pipelines was often the main bottleneck in getting models to production. In addition, providing better data was the single most effective way of improving the model’s performance:
Incorporating streaming and real-time data to make decisions based on the freshest data available
Eliminating sources of bias, such as train-serve skew and data leakage (see the sketch after this list)
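To make the leakage point concrete, here is a minimal, self-contained pandas sketch (the driver-rating data is invented for illustration) of the point-in-time join that feature stores perform automatically: each training row only sees feature values that were available at prediction time.

```python
import pandas as pd

# Hypothetical raw feature values, each with the time it became available.
features = pd.DataFrame({
    "driver_id": [1, 1, 1],
    "event_time": pd.to_datetime(["2021-01-01", "2021-01-05", "2021-01-09"]),
    "avg_rating": [4.2, 4.5, 3.9],
})

# Training labels, each with the time the prediction would have been made.
labels = pd.DataFrame({
    "driver_id": [1, 1],
    "prediction_time": pd.to_datetime(["2021-01-03", "2021-01-07"]),
    "label": [0, 1],
})

# Point-in-time ("as-of") join: for each label, take the latest feature value
# known BEFORE prediction_time. A naive join on driver_id alone would leak
# the 2021-01-09 rating into the 2021-01-07 training row (data leakage).
training_df = pd.merge_asof(
    labels.sort_values("prediction_time"),
    features.sort_values("event_time"),
    left_on="prediction_time",
    right_on="event_time",
    by="driver_id",
)
print(training_df[["driver_id", "prediction_time", "avg_rating", "label"]])
```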
So why are data pipelines the main bottleneck to getting models into production? Without a dedicated solution, teams first create features in local notebooks; once those features are ready for production, teams must build bespoke pipelines by piecing together disparate tools: pulling data from a warehouse, processing it with a compute engine and/or a stream processor, scheduling jobs with an orchestrator, storing results in low-latency stores, and implementing serving infrastructure.
In addition, teams often compromise by forgoing real-time data. Managing real-time data is too difficult without the right tools, so only the most sophisticated teams build the infrastructure required to feed real-time data into machine learning models.
To solve these challenges, the Michelangelo team built the industry's first feature store. Internal feature stores have since emerged at every other major ML player: Facebook, Google, LinkedIn, Netflix, Twitter, etc.
A feature store aims to be the central backbone of an ML application. It gives data scientists and data engineers a way to define features using SQL or Python, and it automatically sets up the compute to transform data from the data warehouse into features, keeps those features fresh as new data comes in, and makes them available for training models or for serving during online inference.
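To illustrate what such a declarative definition looks like, here is a minimal sketch using Feast, the open-source feature store covered in Edge#78. The exact API varies across Feast versions, and the entity, source path, and feature names are invented for the example:

```python
from datetime import timedelta
from feast import Entity, FeatureView, Field, FileSource
from feast.types import Float32, Int64

# The business object features are keyed on (hypothetical driver example).
driver = Entity(name="driver", join_keys=["driver_id"])

# Raw batch data with a timestamp column for point-in-time correctness.
trips_source = FileSource(
    path="data/driver_stats.parquet",  # assumed path for the example
    timestamp_field="event_timestamp",
)

# A feature view: a named, versioned group of features over that source.
driver_stats = FeatureView(
    name="driver_hourly_stats",
    entities=[driver],
    ttl=timedelta(days=1),  # how long values stay valid for online serving
    schema=[
        Field(name="trips_today", dtype=Int64),
        Field(name="avg_rating", dtype=Float32),
    ],
    source=trips_source,
)
```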
The Evolution of Feature Stores
Veterans of the Uber Michelangelo team went on to found Tecton to make production-grade ML accessible to every organization. Tecton unveiled its feature store in 2020, and large players soon introduced their own products in the category: AWS SageMaker added a feature store in December 2020, and Databricks and Google launched theirs in 2021. Snowflake and Azure partnered with Tecton and Feast (the most popular open-source feature store – see Edge#78) to bring feature stores to their own customers.
The questions now are: can one feature store serve them all? What are the important differences in capabilities between offerings? And what will the evolution of feature stores bring?
Understanding the Market
Every feature store needs to provide storing, sharing, and re-using of features; this is the lowest common denominator of feature store capabilities. But storing, sharing, and re-using features solves only part of the problem: teams still have to build the feature pipelines that generate the feature values, and this is often the main bottleneck experienced by data teams.
Feature stores have an opportunity to do more – to solve the end-to-end feature lifecycle by also automating the data pipelines that transform raw data into features.
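Staying with the Feast sketch above, the "keeping features fresh" half of the lifecycle is exactly what a platform would automate. In open-source Feast it is an explicit materialization step you schedule yourself (the repo path is an assumption):

```python
from datetime import datetime
from feast import FeatureStore

# Load the feature repository defined earlier (path is an assumption).
store = FeatureStore(repo_path=".")

# Compute any new feature values from the batch source and load them into
# the online store. A feature platform runs and orchestrates this pipeline
# for you; with a plain feature store you schedule it yourself, e.g. from
# a cron job or an orchestrator such as Airflow.
store.materialize_incremental(end_date=datetime.utcnow())
```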
Tecton, the Feature Platform for ML
Tecton, built by veterans of Uber Michelangelo, goes beyond the capabilities of a regular feature store, positioning itself as a complete feature platform.
With a feature platform, users define features using simple SQL or Python.
Tecton automatically transforms raw data into features by running and orchestrating data pipelines. It allows for large-scale retrieval for training, and low-latency retrieval for online serving.
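Tecton's own API differs, but these two retrieval paths look roughly like this in terms of the Feast sketch above (feature and entity names continue that example and are purely illustrative):

```python
import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Offline path: large-scale, point-in-time-correct retrieval for training.
entity_df = pd.DataFrame({
    "driver_id": [1001, 1002],
    "event_timestamp": pd.to_datetime(["2021-01-03", "2021-01-07"]),
})
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:trips_today",
              "driver_hourly_stats:avg_rating"],
).to_df()

# Online path: low-latency lookup of the freshest values at inference time.
online_features = store.get_online_features(
    features=["driver_hourly_stats:avg_rating"],
    entity_rows=[{"driver_id": 1001}],
).to_dict()
```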
In addition, Tecton goes beyond batch data and supports the transformation of streaming and real-time data, allowing teams to use the freshest data available to make predictions.
With a full feature platform, data engineers don’t need to re-build data pipelines, allowing teams to put ML applications into production in a matter of days.
Conclusion
While the AI/ML industry has put a heavy emphasis on model training, optimization and serving, it’s important to remember that high-quality data is the single most important factor in increasing model accuracy. The best model in the world can’t make a prediction without the right data. Getting high-quality data to our models is hard and requires complicated data engineering work. Data teams are often resigned to using sub-optimal data to simplify their data engineering challenges.
Feature stores have become an essential part of the MLOps stack as they are purpose-built to solve this data challenge of ML.
However, feature stores are not enough. There’s an opportunity to expand into a complete feature platform that solves the end-to-end data problem for ML, managing the entire processing from data source to models.