TheSequence

Edge 253: Interpretability Methods: Partial Dependence Plots

Partial dependence plots, interpretable time series forecasting and Google's fairness indicators.

Dec 20, 2022

In this issue:

  1. Provide an overview of the partial dependence plot interpretability method.

  2. Review the research behind temporal fusion transformers as an interpretable method for time-series forecasting.

  3. Deep dive into Google’s fairness indicators for ML interpretability.

💡 ML Concept of the Day: Global Model-Agnostic Interpretability Methods: Partial Dependence Plots   

As part of our series about ML interpretability, we have recently covered global model-agnostic methods, which try to infer a global explanation for the behavior of a “black box” model. Partial dependence plots (PDPs) are one of the best-established global interpretability methods in ML. Conceptually, a PDP illustrates how a predictor variable affects the model’s predictions. More specifically, a PDP shows the dependence between a target response and a set of relevant input features. To achieve this, a PDP estimates the marginal effect of a feature on the prediction by averaging the model’s output over the observed values of the remaining features.
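To make the idea concrete, here is a minimal sketch of a one-dimensional PDP computed by hand against a scikit-learn model on synthetic data. The helper `partial_dependence_1d` is an illustrative implementation written for this example, not a library API (scikit-learn ships its own `sklearn.inspection.partial_dependence`): for each grid value of the chosen feature, it clamps that feature across the whole dataset and averages the model’s predictions, which is exactly the marginalization described above.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic regression data; the fitted booster plays the role of the "black box".
X, y = make_regression(n_samples=500, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def partial_dependence_1d(model, X, feature, grid_points=20):
    """Average prediction at each grid value of `feature`, marginalizing
    over the empirical distribution of the remaining features."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_points)
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v                       # clamp the feature to the grid value
        pd_values.append(model.predict(X_mod).mean())  # average over all rows
    return grid, np.array(pd_values)

grid, pdp = partial_dependence_1d(model, X, feature=0)
```

Plotting `pdp` against `grid` gives the partial dependence curve for feature 0; a flat curve suggests the feature has little marginal influence, while a steep slope indicates strong dependence.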

© 2025 Jesus Rodriguez