Edge 255: Interpretability Methods: Accumulated Local Effects (ALE)
The ALE method, OpenAI Microscope, and IBM's AI Explainability 360 toolkit.
In this issue:
Explore the accumulated local effects interpretability method.
Review the research behind OpenAI’s Microscope.
Deep dive into IBM’s AI Explainability 360 toolkit.
💡 ML Concept of the Day: Global Model-Agnostic Interpretability Methods: Accumulated Local Effects (ALE)
In the previous issue of this series on ML interpretability, we explored the concept of partial dependence plots (PDP). Today, we would like to discuss an alternative to PDP known as accumulated local effects (ALE). Like PDP, ALE focuses on explaining the average impact of features on a model's predictions. The main difference is that ALE mitigates some of the bias and performance challenges of PDP.
The main limitation of PDP arises when the features in an ML model are significantly correlated. In those cases, PDP outputs are highly unreliable, mostly because PDP averages predictions over the marginal distribution of a feature, which obscures dependencies between features and forces the model to evaluate unrealistic feature combinations. ALE instead works with the conditional feature distribution and focuses on differences in predictions rather than averages. Following our previous example of a model that predicts the price of a house based on its distance from the city center and the size of its living area, ALE takes houses with a living area within a small interval, say between 3,000 and 3,500 sq ft, and computes the difference in the model's predictions when that feature is set to the interval's edges. Accumulating these local differences across intervals yields a clear perspective on the impact of living area without the distorting effect of correlated features.
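To make the procedure concrete, here is a minimal sketch of first-order ALE for one numeric feature. It assumes a fitted model with a scikit-learn-style `.predict` method and a pandas DataFrame of features; the names `model`, `X`, and `feature` are illustrative placeholders rather than any library's API, and the centering step is a simplification of the count-weighted version used in full implementations.

```python
import numpy as np
import pandas as pd

def ale_1d(model, X: pd.DataFrame, feature: str, n_bins: int = 20):
    """Approximate first-order ALE for a single numeric feature (sketch)."""
    # Split the feature's range into quantile-based intervals so each
    # interval holds roughly the same number of observations.
    edges = np.unique(np.quantile(X[feature], np.linspace(0, 1, n_bins + 1)))
    # Assign every row to the interval its feature value falls into.
    bin_ids = np.digitize(X[feature], edges[1:-1])

    effects = np.zeros(len(edges) - 1)
    for k in range(len(edges) - 1):
        rows = X[bin_ids == k]
        if rows.empty:
            continue
        # Within the interval, set the feature to the lower and upper
        # edge and take the difference of the predictions: the "local
        # effect" of moving across this interval, computed only on rows
        # that actually live there (the conditional distribution).
        lower, upper = rows.copy(), rows.copy()
        lower[feature] = edges[k]
        upper[feature] = edges[k + 1]
        effects[k] = (model.predict(upper) - model.predict(lower)).mean()

    # Accumulate the local effects and center the curve around zero
    # (a simple unweighted centering, for illustration).
    ale = np.cumsum(effects)
    ale -= ale.mean()
    return edges[1:], ale
```

Note the key design choice: because each difference is computed only on rows whose feature value actually falls inside the interval, the model is never evaluated on implausible combinations of correlated features, which is precisely the failure mode that makes PDP unreliable.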