Piotr Niedzwiedz, Neptune's CEO, on Ideas About Machine Learning Experimentation
A fascinating read!
It's so inspiring to learn from practitioners and thinkers. Getting to know the experience gained by researchers, engineers, and entrepreneurs doing real ML work is an excellent source of insight and inspiration. Share this interview if you like it. No subscription is needed.
Quick bio / Piotr Niedzwiedz
Tell us a bit about yourself. What is your background, what is your current role, and how did you get started in machine learning?
Piotr Niedzwiedz (PN): I am Piotr, and I am the CEO of neptune.ai. Day to day, apart from running the company, I focus on the product side of things. Strategy, planning, ideation, getting deep into user needs and use cases. I really like it.
My path to ML started with software engineering. I always liked math and started programming when I was 7. In high school, I got into algorithmics and programming competitions and loved competing with the best. That got me into the best CS and maths program in Poland, which, funnily enough, today specializes in machine learning.
I did internships at Facebook and Google and was offered a chance to stay in the Valley. But something about being a FAANG engineer didn't feel right. I had this spark to do more, to build something myself.
So, with a few of my friends from the algo days, we started Codilime, a software consultancy, and later a sister company, Deepsense.ai, a machine learning consultancy, where I was the CTO.
When I came to the ML space from software engineering, I was surprised by the messy experimentation practices, the lack of control over model building, and the missing ecosystem of tools to help people deliver models confidently.
It was a stark contrast to the software development ecosystem, where you have mature tools for DevOps, observability, or orchestration to execute efficiently in production.
And then, one day, some ML engineers from Deepsense.ai came to me and showed me this tool for tracking experiments that they had built during a Kaggle competition (which we won, by the way), and I knew this could be big. I asked around, and everyone was struggling with managing experiments. I decided to spin it off as a VC-funded product company, and the rest is history.
ML Work
Neptune.ai focuses on solving the problem of ML model metadata storage and management. Could you tell us about the vision and current capabilities of the platform?
PN: While most companies in the MLOps space try to go wider and become platforms that solve all the problems of ML teams, Neptune.ai's strategy is to go deeper and become the best-in-class tool for model metadata storage and management.
In the more mature software development space, there are almost no end-to-end platforms. So why should ML, which is even more complex, be any different?
I believe that by focusing on providing the best developer experience for experiment tracking and model registry, we can become the foundation of any MLOps tool stack.
Today we have a super flexible data model that allows people to log and organize model metadata in any way they want.
You can:
create nested structures of parameters,
visualize and combine many metadata types,
track and compare dataset versions,
or register and share your production-ready models.
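To make that concrete, here is a minimal sketch of what logging such flexible, nested metadata looks like with the Neptune Python client. It assumes the neptune.new client that was current around the time of this interview; the project name, API token, and file paths are placeholders, and exact signatures may differ between client versions.

```python
import neptune.new as neptune

# Connect to a project (the project name and API token are placeholders).
run = neptune.init(project="my-workspace/my-project", api_token="YOUR_API_TOKEN")

# Nested structures of parameters.
run["parameters"] = {
    "optimizer": {"name": "Adam", "lr": 1e-3},
    "model": {"architecture": "resnet50", "dropout": 0.2},
}

# Different metadata types: metric series and image artifacts.
for loss in [0.9, 0.6, 0.4]:
    run["train/loss"].log(loss)
run["eval/confusion_matrix"].upload("confusion_matrix.png")

# Track a dataset version so runs trained on different data stay comparable.
run["dataset/train"].track_files("data/train/")

run.stop()
```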
But we still see a lot to do when it comes to developer experience tailored for specific use cases. So in 2022, we will focus on three things:
Deliver the best developer experience around experiment tracking. We'll improve the organization, visualization, and comparison for specific "ML verticals," including computer vision, time series forecasting, and reinforcement learning.
Support all core model registry use cases. We'll add better organization of model versions, stage transitions, reviews and approvals, and easier access to packaged models.
Create more integrations with tools in the MLOps ecosystem. We'll add integrations with tools for model deployment, pipelining and orchestration, and production model monitoring.
Experimentation is one of the core aspects of the lifecycle of ML solutions. What are the key components of a robust ML experimentation architecture, and how is it different from traditional testing and versioning methods in software applications?
PN: Great question. In my opinion, the key components are:
Scalable backend: to log the metadata without worrying about things crashing or slowing down your training.
Flexible and expressive API: to log the metadata how you want and easily plug it into your workflow.
Responsive user interface: to organize and compare all your models and experiments, especially when you run a lot of them.
So, in many ways, it is exactly the same as many other observability solutions like the ELK stack (Elasticsearch, Logstash, Kibana). I actually think that a lot of things in MLOps are very much the same as in traditional software development, but there are some differences.
Those differences come from the various personas and the jobs they want to solve with your tool.
You have data scientists, ML engineers, DevOps people, software engineers, and subject matter experts who work together on ML projects. While all of them may need "ML observability", the things they want to observe are completely different.
So, for example, in experiment tracking, the main needs are:
Compare, visualize, and debug: you need features for comparing various data types and for combining different metadata like parameters, charts, and learning curves in one view.
Find and organize: advanced queries, grouping, and saving different views of your data are crucial when you run a lot of experiments.
Present and share: ML folks rarely work in isolation. You want to share your results to either debug, report, or document your work.
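As a hedged illustration of the "find and organize" part, this is roughly what querying past runs can look like with the same client. The fetch_runs_table call and its filters reflect the neptune.new API of that period, and the field names (eval/rmse, parameters/...) are placeholders for whatever a team actually logs.

```python
import neptune.new as neptune

# Pull run metadata for a project into a table (project name is a placeholder).
project = neptune.get_project(name="my-workspace/my-project")

# Filter runs by tag and convert to a pandas DataFrame for grouping and sorting.
runs_df = project.fetch_runs_table(tag="forecasting").to_pandas()

# Find the five best runs by a logged metric (assuming "eval/rmse" was logged).
best = runs_df.sort_values("eval/rmse").head(5)
print(best[["sys/id", "parameters/model/architecture", "eval/rmse"]])
```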
If you really want to deliver a good developer experience here, you need to go deep and really understand how people work with different data and model types (vision, NLP, forecasting). You need to make it easy for them to use their tools and try to enhance, not change, their workflow.
For the model registry, you need to make the handover of the production-ready model from data scientists to ML engineers easy, and then make it easy for the ML engineer to deploy, roll back, or retrain that model.
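A hypothetical sketch of that handover, using the model-registry calls Neptune was rolling out around that time (init_model_version, change_stage); treat the identifiers, stage names, file names, and exact signatures as assumptions that may differ across client versions.

```python
import neptune.new as neptune

# Data scientist side: register a model version and attach everything
# the ML engineer needs ("PROJ-FORECAST" and the file names are placeholders).
model_version = neptune.init_model_version(model="PROJ-FORECAST")
model_version["model/binary"].upload("model.pkl")
model_version["model/environment"].upload("requirements.txt")
model_version["validation/rmse"] = 12.3

# ML engineer side: move the version through stages as it gets reviewed,
# then deploy, roll back, or retrain based on what the registry says.
model_version.change_stage("staging")
# ...after approval and smoke tests:
model_version.change_stage("production")

model_version.stop()
```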
Most experimentation methods today are focused on supervised learning techniques. What are the core differences between ML experiments in supervised learning compared to pretrained models, reinforcement learning, or self-supervised methods?
PN: From my perspective, it is actually not that different. There is metadata about those processes that you want to compare, debug, organize, find, and share.
Because of that, last year I spent a lot of time rethinking our underlying data model to make those things easy regardless of the ML use case. If you think about it "from first principles", the things that matter most, regardless of your use case, are flexibility and expressiveness. And we build our product on those pillars.
But to give you an example, time series forecasting is a use case that is hard to solve with a rigid solution.
In forecasting, you rarely train one model. You actually train and test models on various time series, for example, one model per product line or physical shop location.
And then you want to visualize and evaluate your models across all of those locations.
And you want to update the evaluation charts when new data comes in.
To do that comfortably, you may need a very custom way to log and display model metadata, but the underlying job you are solving is the same: evaluating models.
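A sketch of what that can look like in practice: series_by_location, train_model, evaluate, and plot_forecast are hypothetical helpers standing in for a team's own forecasting code, and plot_forecast is assumed to save a chart image and return its path.

```python
import neptune.new as neptune

run = neptune.init(project="my-workspace/forecasting")  # placeholder project

# One namespace per location keeps hundreds of per-location models comparable.
for location, series in series_by_location.items():
    model = train_model(series)        # hypothetical helper
    metrics = evaluate(model, series)  # hypothetical helper

    run[f"forecast/{location}/mape"] = metrics["mape"]
    run[f"forecast/{location}/chart"].upload(plot_forecast(model, series))

# When new data arrives, re-log under the same paths to refresh the charts.
run.stop()
```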
In recent years, techniques such as neural architecture search (NAS) or AutoML have made inroads into automating the architecture of neural networks. Can NAS and AutoML methods play an important role in streamlining ML experimentation, or are they still very limited in their capabilities?
PN: Well, I think they could. But then it just moves things a layer of abstraction higher, IMHO.
You still have hyper-hyperparameters to optimize, NAS or AutoML models to compare, etc.
I don't think that will go away any time soon, as it seems very dangerous to leave your production models to "do their thing" with no visibility into how they work (yes, that is hard) or at least how they were built.
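To make the "one layer of abstraction higher" point concrete, here is a hedged sketch of tracking the search itself: every sampled architecture becomes a trial whose configuration and score get logged, so the hyper-hyperparameters and candidate models stay visible. sample_architecture and train_and_eval are hypothetical stand-ins for whatever NAS/AutoML tooling is in use.

```python
import neptune.new as neptune

run = neptune.init(project="my-workspace/nas-demo")  # placeholder project

# The "hyper-hyperparameters": settings of the search controller itself.
run["search/config"] = {"strategy": "random", "budget": 20, "max_layers": 8}

for trial in range(20):
    arch = sample_architecture(max_layers=8)  # hypothetical NAS sampler
    score = train_and_eval(arch)              # hypothetical training helper

    run[f"search/trials/{trial}/architecture"] = str(arch)
    run[f"search/trials/{trial}/val_accuracy"] = score
    run["search/val_accuracy"].log(score)     # series to watch search progress

run.stop()
```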
Recently, we have seen ML experimentation capabilities being added to deep learning frameworks like TensorFlow or PyTorch and to large MLOps platforms like AWS SageMaker. In your opinion, which of the following options best describes the future of ML experimentation:
Remain as standalone platforms.
Become part of larger MLOps stacks.
Become native components in deep learning frameworks.
PN: Yeah, I believe there will be standalone components that you can plug into your deep learning frameworks and MLOps stacks.
But both frameworks and end-to-end platforms will probably have some basic logging/tracking functionality in there as well. Something to get people started.
For example, let's take data warehouses: do they come with built-in BI/visualization components? No, we have a few market-standard standalone platforms, because the problem of data visualization is big and challenging enough that it requires a product team to be focused on it. And some teams don't even need any BI/visualization.
Model metadata management is similar. You should be able to plug it into your MLOps stack. I think it should be a separate component that integrates, rather than a part of a platform.
When you know you need solid experiment tracking capabilities, you should be able to look for a best-in-class point solution and add it to your stack.
It has happened many times in software, and I believe it will happen in ML as well. We'll have companies providing point solutions with a great developer experience. It won't make much sense to build it yourself unless you have a custom problem. Look at Stripe (payments), Algolia (search and recommendations), and Auth0 (authentication and authorization).
But even in ML today: imagine how weird it would be if every team were building their own model training framework like PyTorch. Why is experiment tracking, orchestration, or model monitoring any different?
I don't think it is.
And so, I think we'll see more specialization around those core MLOps components. Perhaps at some point, adjacent categories will merge into one, just as we are seeing experiment tracking and model registry merge into one metadata storage and management category.
Miscellaneous: a set of rapid-fire questions
Favorite math paradox?
Decision-making paradox: Selecting the best decision-making method is a decision problem in itself.
What book would you recommend to an aspiring ML engineer?
"Machine Learning Design Patterns: Solutions to Common Challenges in Data Preparation, Model Building, and MLOps" by Valliappa Lakshmanan, Sara Robinson, and Michael Munn.
Is the Turing Test still relevant? Any clever alternatives?
It seems that with GPT-3, GANs, and other generative models, it is becoming harder and harder to tell AI-generated content from reality. We are not quite there yet, but almost.
When it comes to alternatives, maybe... I would like to see something more objective. For example, AlphaCode getting to the Google Code Jam World Finals. I have been there once, and it is a very challenging task!
Does P equal NP?
Hey, if I knew, I would have reinvested this $1M into Neptune :)