TheSequence

🩺 Edge#141: MLOps – Model Monitoring

plus the building blocks of interpretability and a few ML monitoring platforms to keep up with

Nov 16, 2021


In this issue:

  • we discuss Model Monitoring;  

  • we explore Google’s research paper about the building blocks of interpretability;  

  • we provide an overview of a few ML monitoring platforms: Arize AI, Fiddler, WhyLabs, Neptune AI.


💡 ML Concept of the Day: Model Monitoring 

As the first topic of our series about MLOps, we would like to focus on ML monitoring. Considered by many to be the cornerstone of MLOps, model monitoring is one of the essential building blocks of any ML pipeline. In some ways, I like to think about ML monitoring as the next phase of the application performance monitoring (APM) space that has accompanied the evolution of software technology trends. Rivalries like CA vs. BMC ruled the client-server days, while AppDynamics vs. New Relic dominated the cloud and mobile era. ML is sufficiently unique that it is likely to create a new generation of monitoring platforms specifically optimized for the performance of ML models.

Part of what makes ML monitoring unique is the model-data duality in ML pipelines.
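To make that duality concrete, below is a minimal sketch of what a monitoring job might track on both sides: drift in the incoming feature distributions, and prediction quality once delayed ground-truth labels arrive. It uses only NumPy and SciPy; the feature layout, thresholds, and helper names are illustrative assumptions, not the API of any of the platforms covered below.

```python
# A minimal sketch of the model-data duality in ML monitoring:
# (1) data drift  -- are production inputs still distributed like training inputs?
# (2) performance -- are predictions still accurate once labels arrive?
# Hypothetical helper names and thresholds; uses only NumPy and SciPy.
import numpy as np
from scipy.stats import ks_2samp


def data_drift_report(train_features: np.ndarray,
                      prod_features: np.ndarray,
                      alpha: float = 0.05) -> dict:
    """Run a two-sample Kolmogorov-Smirnov test per feature column."""
    report = {}
    for i in range(train_features.shape[1]):
        result = ks_2samp(train_features[:, i], prod_features[:, i])
        report[f"feature_{i}"] = {
            "ks_statistic": float(result.statistic),
            "drifted": result.pvalue < alpha,
        }
    return report


def performance_report(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Accuracy check, run whenever delayed ground-truth labels become available."""
    return {"accuracy": float(np.mean(y_true == y_pred))}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))  # training snapshot
    prod = rng.normal(loc=0.5, scale=1.0, size=(1000, 3))   # shifted production batch
    print(data_drift_report(train, prod))                   # flags all three features
    print(performance_report(np.array([1, 0, 1, 1]), np.array([1, 0, 0, 1])))  # 0.75
```

Production systems replace the toy statistics above with richer drift metrics and alerting, but the two-sided structure, watching the data and the model together, is the part that distinguishes ML monitoring from classic APM.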
