Edge 253: Interpretability Methods: Partial Dependence Plots
Partial dependence plots, interpretable time-series forecasting, and Google's fairness indicators.
In this issue:
Provide an overview of the partial dependence plot interpretability method.
Review the research behind temporal fusion transformers as an interpretable method for time-series forecasting.
Deep dive into Google’s fairness indicators for ML interpretability.
💡 ML Concept of the Day: Global Model-Agnostic Interpretability Methods: Partial Dependence Plots
As part of our series about ML interpretability, we have recently been covering global model-agnostic methods that try to infer a global explanation for the behavior of a “black box” model. Partial dependence plots (PDPs) are one of the best-established global interpretability methods in ML. Conceptually, a PDP illustrates how a predictor variable affects the model’s predictions. More specifically, a PDP shows the dependence between the target response and a set of relevant model features. To achieve this, PDP averages the model’s predictions over the distribution of the remaining features, which quantifies the marginal effect of each feature of interest on the output prediction.
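As a concrete illustration, here is a minimal sketch of how a one-dimensional partial dependence curve can be computed by hand: the feature of interest is fixed to each value on a grid while the model’s predictions are averaged over the rest of the dataset. The synthetic dataset, the GradientBoostingRegressor model, and the helper function name are illustrative assumptions, not part of this issue.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative "black box" model trained on a synthetic regression dataset.
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def partial_dependence_curve(model, X, feature_idx, grid_resolution=20):
    """Average prediction as the chosen feature sweeps over a value grid."""
    grid = np.linspace(X[:, feature_idx].min(), X[:, feature_idx].max(),
                       grid_resolution)
    averaged = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = value                   # fix the feature for every row
        averaged.append(model.predict(X_mod).mean())    # marginalize over the rest
    return grid, np.array(averaged)

grid, pd_curve = partial_dependence_curve(model, X, feature_idx=0)
print(list(zip(grid.round(2), pd_curve.round(2)))[:5])
```

In practice, scikit-learn exposes the same computation through `sklearn.inspection.partial_dependence` and `PartialDependenceDisplay.from_estimator`, which also handle plotting the resulting curves.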