Edge 257: Local Model-Agnostic Interpretability Methods
Local model-agnostic interpretability, IBM's ProfWeight research, and the InterpretML framework.
In this issue:
Introduce the concept of local model-agnostic interpretability.
Discuss IBM’s ProfWeight method that combines interpretability and accuracy in a single model.
Deep dive into InterpretML, one of the most popular ML interpretability frameworks.
💡 ML Concept of the Day: Local Model-Agnostic Interpretability Methods
The last few issues of this series have focused on global model-agnostic interpretability methods, which try to extrapolate explanations about the complete behavior of an ML model. Local model-agnostic methods provide an alternative to their global counterparts by focusing on explaining individual predictions. In simple terms, local methods help answer the following question: “for this particular example, why did the model make this particular decision?”
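The core idea can be illustrated with a minimal, from-scratch sketch (all names below are illustrative, not from any particular library): treat the model as a black box we can only query, then perturb one feature at a time around a single instance and measure how the prediction changes. That local sensitivity is a crude per-example explanation.

```python
def black_box_model(x):
    # Toy opaque "model": a logistic score over (income, debt, age).
    # In practice this would be any trained model we can only query.
    income, debt, age = x
    z = 0.04 * income - 0.6 * debt + 0.01 * age - 2.0
    return 1.0 / (1.0 + 2.718281828459045 ** (-z))

def local_attributions(model, instance, eps=1e-4):
    """Finite-difference sensitivity of the prediction to each feature,
    evaluated at this particular instance (a local explanation)."""
    base = model(instance)
    scores = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] += eps
        scores.append((model(perturbed) - base) / eps)
    return scores

# Explain one specific prediction: why did the model score this applicant?
applicant = [80.0, 3.0, 35.0]  # income, debt, age for a single example
attributions = local_attributions(black_box_model, applicant)
```

Here, a positive attribution for income and a negative one for debt explain this one prediction; production methods such as LIME or SHAP refine the same query-and-perturb idea with local surrogate models or game-theoretic weighting.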
A practical way to think about the utility of global vs. local methods is monitoring vs. debugging. While global methods are useful for regularly monitoring the behavior of an ML model, local methods are more effective when it comes to debugging specific behaviors. In the spectrum of local ML interpretability, there are several methods that are quite relevant: