Edge 259: Local Model-Agnostic Interpretability Methods: SHAP
SHAP method, MIT taxonomy for ML interpretability and BAIR's iModels framework.
In this issue:
We discuss SHapley Additive exPlanations (SHAP), a local ML interpretability method.
We review a taxonomy of ML interpretability techniques proposed by MIT.
We deep dive into the iModels framework created by the Berkeley AI Research lab.
💡 ML Concept of the Day: Local Model-Agnostic Interpretability Methods: SHAP
Among local ML interpretability methods, SHapley Additive exPlanations (SHAP) stands out as one of the most popular within the data science community. Part of SHAP's popularity comes from its game-theoretic approach to ML interpretability. SHAP derives its name from its use of Shapley values, a well-known construct in multi-agent game theory. Conceptually, Shapley values estimate the individual contributions of n participants in a game who must split a given reward fairly. They were first introduced by Lloyd Shapley in 1953 as a way to evaluate the marginal contribution of each player across all possible coalitions.
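To make the idea concrete, here is a minimal sketch (not from the issue) of computing SHAP values for a single prediction with the open-source shap Python package; the scikit-learn dataset and model are illustrative choices, since SHAP is model-agnostic.

```python
# Minimal SHAP sketch: assumes `shap` and scikit-learn are installed.
# The dataset and model are illustrative; any model exposing a predict
# function can be explained this way.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Treat the model as a black box: pass only its predict function plus
# background data. SHAP estimates each feature's Shapley value, i.e. its
# average marginal contribution across feature coalitions.
explainer = shap.Explainer(model.predict, X)
shap_values = explainer(X.iloc[:1])  # explain the first prediction

# Per-feature contributions; added to the base value, they recover the
# model's output for this instance.
print(shap_values.values[0])
print(shap_values.base_values[0])
```

The per-feature numbers are the local explanation: positive values push the prediction up from the baseline, negative values push it down, and their sum with the base value equals the model output for that instance.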