In this issue:
we start a new series about machine learning interpretability;
we discuss Manifold, an architecture for debugging ML models;
we explore Meta’s Captum, a framework for deep learning interpretability.
Enjoy the learning!
💡 ML Concept of the Day: A New Series About Machine Learning Interpretability
“If you can’t explain it simply, you don’t understand it well enough.” The famous quote, often attributed to Albert Einstein, certainly doesn’t seem to apply to machine learning (ML) systems. The frantic pace of ML development is pushing the field toward ever larger and more complex models that are nearly impossible to understand. As a result, interpretability has become one of the most important disciplines in ML research and development. This new series will explore the most important ML interpretability methods and technologies developed in recent years.
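To give a concrete taste of what these methods look like in practice, here is a minimal sketch using Integrated Gradients from Captum (discussed later in this issue). The toy model and input are illustrative placeholders, not from any specific application; Captum works the same way with any PyTorch model.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# A tiny feed-forward classifier standing in for a real model (illustrative).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# A single input example with 4 features (illustrative).
x = torch.rand(1, 4, requires_grad=True)

# Integrated Gradients attributes the prediction for the target class
# back to each input feature, quantifying how much each one contributed.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(x, target=1, return_convergence_delta=True)

print("Feature attributions:", attributions)
print("Convergence delta:", delta)
```

The attribution scores make the model’s decision inspectable: features with large positive values pushed the prediction toward the target class, which is exactly the kind of insight interpretability methods aim to provide.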
The benefits of interpretability in ML models are obvious in principle but often hard to quantify in practice. In general, the value proposition of ML interpretability can be decomposed into four fundamental benefits: