🎙 Piotr Niedzwiedz, neptune.ai's CEO, on Ideas About Machine Learning Experimentation
a fascinating read!
It’s so inspiring to learn from practitioners and thinkers. Getting to know the experience gained by researchers, engineers, and entrepreneurs doing real ML work is an excellent source of insight and inspiration. Share this interview if you like it. No subscription is needed.
👤 Quick bio / Piotr Niedzwiedz
Tell us a bit about yourself. What is your background, what is your current role, and how did you get started in machine learning?
Piotr Niedzwiedz (PN): I am Piotr, and I am the CEO of neptune.ai. Day to day, apart from running the company, I focus on the product side of things. Strategy, planning, ideation, getting deep into user needs and use cases. I really like it.
My path to ML started with software engineering. I always liked math and started programming when I was 7. In high school, I got into algorithmics and programming competitions and loved competing with the best. That got me into the best CS and Maths program in Poland, which, funnily enough, today specializes in machine learning.
I did internships at Facebook and Google and was offered a chance to stay in the Valley. But something about being a FAANG engineer didn’t feel right. I had this spark to do more, to build something myself.
So with a few of my friends from the algo days, we started Codilime, a software consultancy, and later a sister company, Deepsense.ai, a machine learning consultancy, where I was the CTO.
When I came to the ML space from software engineering, I was surprised by the messy experimentation practices, lack of control over model building, and a missing ecosystem of tools to help people deliver models confidently.
It was a stark contrast to the software development ecosystem, where you have mature tools for DevOps, observability, or orchestration to execute efficiently in production.
And then, one day, some ML engineers from Deepsense.ai came to me and showed me this tool for tracking experiments they built during a Kaggle competition (which we won btw), and I knew this could be big. Asked around, and everyone was struggling with managing experiments. I decided to spin it off as a VC-funded product company, and the rest is history.
🛠 ML Work
Neptune.ai focuses on solving the problem of ML model metadata storage and management. Could you tell us about the vision and current capabilities of the platform?
PN: While most companies in the MLOps space try to go wider and become platforms that solve all the problems of ML teams, Neptune.ai’s strategy is to go deeper and become the best-in-class tool for model metadata storage and management.
In a more mature software development space, there are almost no end-to-end platforms. So why should ML, which is even more complex, be any different?
I believe that by focusing on providing the best developer experience for experiment tracking and model registry, we can become the foundation of any MLOps tool stack.
Today we have a super flexible data model that allows people to log and organize model metadata in any way they want:
create nested structures of parameters,
visualize and combine many metadata types,
track and compare dataset versions,
or register and share your production-ready models.
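The kind of nested, path-addressed data model described above can be sketched in a few lines. This is a minimal illustrative sketch only, not the actual neptune.ai client API; the `Run` class, `log`, and `fetch` names are hypothetical.

```python
# Minimal sketch of a nested metadata store (hypothetical, not the real neptune.ai client).
class Run:
    def __init__(self):
        self._store = {}  # flat map from "a/b/c" namespace paths to values

    def __setitem__(self, path, value):
        # single-value fields: parameters, dataset versions, tags
        self._store[path] = value

    def log(self, path, value):
        # append-style logging for series such as losses or metrics
        self._store.setdefault(path, []).append(value)

    def fetch(self, path):
        return self._store[path]


run = Run()
run["params/optimizer/lr"] = 0.001   # nested parameter structure
run["data/version"] = "train-v2"     # dataset version tracking
run.log("train/loss", 0.42)          # metric series, appended over time
run.log("train/loss", 0.35)
```

Because every field lives under a slash-separated path, arbitrary nesting, comparison across runs, and mixing of metadata types all fall out of the same simple structure.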
But we still see a lot to do when it comes to developer experience tailored for specific use cases. So in 2022, we will focus on three things:
Deliver the best developer experience around experiment tracking. We’ll improve the organization, visualization, and comparison for specific “ML verticals,” including computer vision, time series forecasting, and reinforcement learning.
Support all core model registry use cases. We’ll add better organization of model versions, stage transitions, reviews and approvals, and easier access to packaged models.
Create more integrations with tools in the MLOps ecosystem. We’ll add integrations with tools for model deployment, pipelining and orchestration, and production model monitoring.
Experimentation is one of the core aspects of the lifecycle of ML solutions. What are the key components of a robust ML experimentation architecture and how is it different from traditional testing and versioning methods in software applications?
PN: Great question. In my opinion, it is:
Scalable backend: to log the metadata without worrying about things crashing or slowing down your training.
Flexible and expressive API: to log the metadata how you want and easily plug it into your workflow.
Responsive user interface: to organize and compare all your models and experiments. Especially when you run a lot of them.
So, in many ways, it is exactly the same as many other observability solutions like the ELK stack (Elastic, Logstash, Kibana). I actually think that a lot of things in MLOps are very much the same as in traditional software development, but there are some differences.
Those differences come from the various personas involved and the jobs they want to get done with your tool.
You have data scientists, ML engineers, DevOps people, software engineers, and subject matter experts working together on ML projects. While all of them may need “ML observability”, the things they want to observe are completely different.
So, for example, in experiment tracking, the main needs are:
Compare, visualize, and debug: you need features for comparing various data types and for combining different metadata, like parameters, charts, and learning curves, in one view.
Find and organize: advanced queries, grouping, and saving different views of your data are crucial when you run a lot of experiments.
Present and share: ML folks rarely work in isolation. You want to share your results to either debug, report, or document your work.
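The "find and organize" need above amounts to querying and grouping runs by their logged metadata. A minimal sketch, assuming a simple in-memory list of runs (the run dicts and helper functions here are hypothetical, not a real tracking API):

```python
# Hypothetical sketch of querying and grouping tracked experiment runs.
runs = [
    {"id": "RUN-1", "params": {"model": "resnet", "lr": 0.01},  "metrics": {"acc": 0.91}},
    {"id": "RUN-2", "params": {"model": "resnet", "lr": 0.001}, "metrics": {"acc": 0.93}},
    {"id": "RUN-3", "params": {"model": "vit",    "lr": 0.001}, "metrics": {"acc": 0.89}},
]

def query(runs, predicate):
    """Filter runs with an arbitrary predicate (an 'advanced query')."""
    return [r for r in runs if predicate(r)]

def group_by(runs, key):
    """Group run ids by any function of the run's metadata."""
    groups = {}
    for r in runs:
        groups.setdefault(key(r), []).append(r["id"])
    return groups

# Best resnet run by accuracy:
best = max(query(runs, lambda r: r["params"]["model"] == "resnet"),
           key=lambda r: r["metrics"]["acc"])

# All runs grouped by model architecture:
by_model = group_by(runs, lambda r: r["params"]["model"])
```

A real tracking tool runs these queries server-side and lets you save the resulting views, but the underlying operations are the same filter/group/sort primitives.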
If you really want to deliver a good developer experience here, you need to go deep and really understand how people work with different data and model types (vision, nlp, forecasting). You need to make it easy for them to use their tools and try to enhance, not change their workflow.
For the model registry, you need to make the handover of the production-ready model from data scientists to ML engineers smooth, and then make it easy for the ML engineer to deploy, roll back, or retrain that model.
Most experimentation methods today are focused on supervised learning techniques. What are the core differences between ML experiments in supervised learning compared to pretrained models, reinforcement learning or self-supervised methods?
PN: From my perspective, it is actually not that different. There is metadata about those processes that you want to compare, debug, organize, find, and share.
Because of that, last year I spent a lot of time rethinking our underlying data model to make those things easy regardless of the ML use case. If you think about it “from first principles”, the things that are the most important, regardless of your use case, are flexibility and expressiveness. And we build our product on those pillars.
But to give you an example, time series forecasting is a use case that is hard to solve with a rigid solution.
In forecasting, you rarely train one model. You actually train and test models on various time series, for example, one model per product line or per physical shop location.
And then, you want to visualize and evaluate your model based on all of those locations.
And you want to update the evaluation charts when new data comes in.
To do that comfortably, you may need a very custom way to log and display model metadata, but the underlying job you solve is the same: evaluating models.
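The forecasting workflow above could be sketched with the same nested-namespace idea: one metric series per location, appended to as new data arrives. This is an illustrative sketch; the `log_eval` helper and path scheme are hypothetical, not a real client API.

```python
# Hypothetical sketch: per-series evaluation logging for forecasting,
# one model per shop location, with charts fed from the same namespaces.
from collections import defaultdict

metadata = defaultdict(list)  # "locations/<name>/mape" -> series of scores

def log_eval(location, mape):
    metadata[f"locations/{location}/mape"].append(mape)

# Initial backtest across locations:
log_eval("warsaw", 0.12)
log_eval("krakow", 0.09)

# New data comes in -> evaluations are appended and charts update in place:
log_eval("warsaw", 0.10)

# Cross-location view: which location's model currently performs worst?
worst = max(metadata, key=lambda k: metadata[k][-1])
```

The point is that "evaluate a model" stays the same job; only the way metadata is keyed and displayed (one chart per location, updated over time) needs to be custom.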
In recent years, techniques such as neural architecture search (NAS) or AutoML have made inroads in automating the architecture of neural networks. Can NAS and AutoML methods play an important role in streamlining ML experimentation, or are they still very limited in their capabilities?
PN: Well, I think they could. But then it just moves a layer of abstraction higher IMHO.
You still have hyper-hyperparameters to optimize, NAS or AutoML models to compare, etc.
I don’t think that will go away any time soon. It seems very dangerous to leave your production models to “do their thing” with no visibility into how they work (yes, that is hard) or, at the very least, into how they were built.
Recently, we have seen ML experimentation capabilities being added to deep learning frameworks like TensorFlow or PyTorch and large MLOps platforms like AWS SageMaker. In your opinion, which of the following options best describes the future of ML experimentation:
Remain as standalone platforms.
Become part of larger MLOps stacks.
Become native components in deep learning frameworks.
PN: Yeah, I believe there will be standalone components that you can plug into your deep learning frameworks and MLOps stacks.
But both frameworks and end-to-end platforms will probably have some basic logging/tracking functionality in there as well. Something to get people started.
For example, let’s take data warehouses – do they come with built-in BI/visualization components? No. We have a few market-standard standalone platforms, because the problem of data visualization is big and challenging enough that it requires a product team focused on it. And some teams don’t even need any BI/visualization.
Model metadata management is similar. You should be able to plug it into your MLOps stack. I think it should be a separate component that integrates rather than a part of a platform.
When you know you need solid experiment tracking capabilities, you should be able to look for a best-in-class point solution and add it to your stack.
It happened many times in software, and I believe it will happen in ML as well. We’ll have companies providing point solutions with great developer experience. It won’t make much sense to build it yourself unless you have a custom problem. Look at Stripe (payments), Algolia (search and recommendations), Auth0 (authentication and authorization).
But even in ML today: imagine how weird it would be if every team built its own model training framework like PyTorch. Why is experiment tracking, orchestration, or model monitoring any different?
I don’t think it is.
And so, I think we’ll see more specialization around those core MLOps components. Perhaps at some point, adjacent categories will merge into one, just as we are now seeing experiment tracking and model registry merge into a single metadata storage and management category.
💥 Miscellaneous – a set of rapid-fire questions
Favorite math paradox?
Decision-making paradox: Selecting the best decision-making method is a decision problem in itself.
What book would you recommend to an aspiring ML engineer?
“Machine Learning Design Patterns: Solutions to Common Challenges in Data Preparation, Model Building, and MLOps” by Valliappa Lakshmanan, Sara Robinson, Michael Munn.
Is the Turing Test still relevant? Any clever alternatives?
It seems that with GPT-3, GANs, and other generative models, it is becoming harder and harder to tell AI-generated content from reality. We are not quite there yet, but almost.
When it comes to alternatives, maybe... I would like to see something more objective, e.g., AlphaCode getting to the Google Code Jam World Finals – I’ve been there once, and it is a very challenging task!
Does P equal NP?
Hey, if I knew, I would have reinvested this $1M into Neptune :)