TheSequence

Edge 251: Global Model-Agnostic Interpretability

Global model-agnostic interpretability, student-teacher interpretability methods, and the Lucid library.

Dec 13, 2022

In this issue:

  1. We explore the concept of global model-agnostic interpretability methods.

  2. We review OpenAI’s research about using machine teaching to build interpretable models.

  3. We explore the Lucid library as a framework for model visualization.

💡 ML Concept of the Day: Global Model-Agnostic Interpretability

In the previous edition of our series about ML interpretability, we introduced the concept of post-hoc interpretability as a way to derive explanations of a model's behavior without assuming any knowledge of its architecture. Post-hoc interpretability is also known as model-agnostic interpretability and can be classified into two main groups: global and local. Today, we will explore the key concepts behind global model-agnostic interpretability methods.

Conceptually, global model-agnostic interpretability methods attempt to explain the behavior of an ML model as a whole. Explanations produced by these methods are based on the distribution of predictions rather than on individual data points. Global interpretability methods are particularly useful for debugging a model, understanding which features drive its behavior, and identifying which variables play an important role in its construction.
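
To make this concrete, here is a minimal sketch of one of the most common global model-agnostic methods, permutation feature importance, using scikit-learn. The dataset and model choices below (the diabetes dataset, a random forest) are illustrative assumptions, not something prescribed by the method, which works with any fitted estimator.

```python
# Minimal sketch: permutation feature importance as a global,
# model-agnostic explanation. The dataset and estimator here are
# illustrative assumptions; any fitted model would do.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the resulting drop in test
# score. The result characterizes the model's behavior over the whole
# prediction distribution, not any individual data point.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

for name, mean, std in sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda item: -item[1],
):
    print(f"{name:>8}: {mean:.3f} +/- {std:.3f}")
```

Note that nothing in the loop above touches the model's internals: the explanation is derived purely from how predictions degrade when a feature's information is destroyed, which is what makes the method model-agnostic.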

Throughout the evolution of ML, several global model-agnostic methods, such as permutation feature importance, partial dependence plots, and global surrogate models, have become omnipresent in ML interpretability stacks.
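
As a hedged illustration of another staple from this family, the sketch below computes a partial dependence plot, which averages the model's predictions while sweeping a single feature. It reuses the `model` and `X_test` from the sketch above; the feature names are specific to the diabetes dataset and are assumptions of this example.

```python
# Sketch: partial dependence, another global model-agnostic method.
# Reuses `model` and `X_test` from the permutation-importance sketch;
# the feature names below belong to the diabetes dataset.
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

# For each value of "bmi" and "bp", plot the model's prediction
# averaged over the rest of the data distribution.
PartialDependenceDisplay.from_estimator(model, X_test, features=["bmi", "bp"])
plt.show()
```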
