TheSequence
Edge 257: Local Model-Agnostic Interpretability Methods
Local model-agnostic interpretability, IBM's ProfWeight research and the InterpretML framework.

Jan 03, 2023

In this issue:

  1. Introduce the concept of local model-agnostic interpretability.

  2. Discuss IBM’s ProfWeight method that combines interpretability and accuracy in a single model.

  3. Deep dive into InterpretML, one of the most popular ML interpretability frameworks.

💡 ML Concept of the Day: Local Model-Agnostic Interpretability Methods

The last few issues of this series have focused on global model-agnostic interpretability methods, which try to extrapolate explanations about the complete behavior of an ML model. Local model-agnostic methods provide an alternative to their global counterparts by focusing on explaining individual predictions. In simple terms, local methods help answer the following question: “for this particular example, why did the model make this particular decision?”

A practical way to think about the utility of global vs. local methods is monitoring vs. debugging. While global methods are useful for regularly monitoring the behavior of an ML model, local methods are more effective when it comes to debugging specific behaviors. In the spectrum of local ML interpretability, there are several methods that are quite relevant:
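To make the idea concrete, here is a minimal sketch of the local-surrogate approach that methods like LIME popularized: perturb the neighborhood of one instance, query the black-box model, weight the perturbed samples by proximity, and fit a simple weighted linear model whose coefficients act as local feature importances. This is an illustrative sketch, not any library's actual implementation; the function name, kernel, and parameters are our own assumptions.

```python
import numpy as np

def explain_locally(predict_fn, instance, n_samples=500, kernel_width=0.75, seed=0):
    """LIME-style local surrogate (illustrative sketch, not a library API):
    perturb `instance`, weight perturbations by proximity, and fit a
    weighted linear model to the black-box outputs."""
    rng = np.random.default_rng(seed)
    # Sample a neighborhood around the instance being explained
    X = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    y = predict_fn(X)
    # Proximity kernel: perturbations close to the instance get higher weight
    d = np.linalg.norm(X - instance, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Weighted least squares; coefficients approximate local feature effects
    Xb = np.hstack([X, np.ones((n_samples, 1))])  # add intercept column
    W = np.diag(w)
    coef = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return coef[:-1]  # per-feature local importance (drop intercept)

# Hypothetical black box: nonlinear globally, but near the origin its output
# is driven almost entirely by feature 0 (sin(x) ~ x, while 0.1*x^2 is flat)
black_box = lambda X: np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2
instance = np.array([0.0, 0.0])
coef = explain_locally(black_box, instance)
```

For this instance the surrogate assigns feature 0 a large positive weight and feature 1 a weight near zero, answering exactly the "why this prediction?" question even though the black box is nonlinear globally.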

This post is for paid subscribers

Already a paid subscriber? Sign in
© 2025 Jesus Rodriguez