TheSequence
Edge 259: Local Model-Agnostic Interpretability Methods: SHAP

The SHAP method, MIT's taxonomy for ML interpretability, and BAIR's iModels framework.

Jan 10, 2023

In this issue:

  1. We discuss the SHapley Additive exPlanations (SHAP) local ML interpretability method.

  2. We review a taxonomy for ML interpretability techniques proposed by MIT.

  3. We take a deep dive into the iModels framework created by the Berkeley AI Research (BAIR) lab.

💡 ML Concept of the Day: Local Model-Agnostic Interpretability Methods: SHAP

Among local ML interpretability methods, SHapley Additive exPlanations (SHAP) stands out as one of the most popular within the data science community. Part of SHAP's popularity comes from its game-theoretic approach to ML interpretability. SHAP derives its name from its use of Shapley values, a well-known construct in cooperative game theory. Conceptually, Shapley values estimate the individual contributions of n players who must split a given reward fairly among themselves. They were first introduced by Lloyd Shapley in 1953 as a way to evaluate each player's marginal contribution while accounting for all possible coalitions.
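To make the idea concrete, here is a minimal sketch of a local SHAP explanation in Python, assuming the open-source `shap` package and scikit-learn are installed; the regression model and dataset are illustrative choices, not part of the original discussion.

```python
# A minimal sketch of a local SHAP explanation; the model and dataset are
# illustrative assumptions, any scikit-learn-style model would do.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a model; SHAP is model-agnostic, though tree ensembles have fast exact explainers.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Each feature acts as a "player" and the prediction is the payout to be split;
# a feature's SHAP value is its average marginal contribution across coalitions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # explain a single prediction

# Per-feature attributions for this one instance.
print(dict(zip(X.columns, shap_values[0])))

# Additivity: the base value plus the attributions reconstructs the model output.
print(explainer.expected_value + shap_values[0].sum(), model.predict(X.iloc[:1])[0])
```

For models without a specialized explainer, `shap.KernelExplainer` offers the fully model-agnostic approximation of Shapley values, at a higher computational cost.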
