Explainable AI with SHAP — Income Prediction Example

Renee LIN
3 min read · Sep 11, 2022

In recent years, Explainable AI (XAI), or interpretable machine learning, has attracted increasing attention, since high accuracy without an understanding of the underlying mechanism causes trust issues. For example, patients will question why a data-driven model diagnosed them with a disease. At the very least, interpretation helps validate the model.

Despite its importance, the field is still loosely defined, so researchers such as Lipton [1] have tried to formalize the problem and discuss different approaches to solving it. This post gives a brief introduction to the terminology and classification of XAI methods, and then explores one of the most popular methods, SHAP (SHapley Additive exPlanations), through a simple income classification problem.

  1. Types of interpretation methods
  2. SHAP brief introduction
  3. Income prediction

1. Types of interpretation methods

(1) Model-agnostic and Model-specific

Some methods can be applied to any model, while others are designed to interpret a specific class of models, such as tree ensembles or neural networks, as the sketch below illustrates.
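To make the distinction concrete, here is a minimal sketch, not code from this article, contrasting a model-specific explainer with a model-agnostic one in the SHAP library. The use of the Adult income data bundled with SHAP, the random forest model, and the sample sizes are all assumptions for illustration.

```python
import shap
from sklearn.ensemble import RandomForestClassifier

# Adult income data bundled with the shap package (assumed here purely for
# illustration; the article's own setup may differ).
X, y = shap.datasets.adult()

# Any classifier would do; a random forest is an assumption for this sketch.
model = RandomForestClassifier(n_estimators=100, max_depth=6, n_jobs=-1).fit(X, y)

# Model-specific: TreeExplainer exploits the structure of tree ensembles,
# so it is fast and exact, but it only works for tree-based models.
tree_explainer = shap.TreeExplainer(model)
tree_shap_values = tree_explainer.shap_values(X.iloc[:100])

# Model-agnostic: KernelExplainer only needs a prediction function, so it
# works for any model, but it estimates SHAP values by sampling and is slow.
background = shap.sample(X, 50)  # small background set to keep it tractable
kernel_explainer = shap.KernelExplainer(model.predict_proba, background)
kernel_shap_values = kernel_explainer.shap_values(X.iloc[:5])
```

The trade-off is speed and exactness (model-specific) versus generality (model-agnostic), which is why the tree explainer is given many rows and the kernel explainer only a handful.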

(2) Global and local explanation

A local explanation analyzes how a specific prediction was made for an individual instance. A global explanation, on the other hand, studies what the model has learned overall, for example which features matter most across the whole dataset. The sketch below contrasts the two views.
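Again as a hedged sketch rather than the article's exact code, the snippet below uses SHAP's bundled Adult income data and an XGBoost classifier (both my assumptions) to produce a global summary plot and a local force plot for a single person.

```python
import shap
import xgboost

# Adult income data shipped with shap; an XGBoost model is assumed here,
# as in many SHAP examples.
X, y = shap.datasets.adult()
model = xgboost.XGBClassifier(n_estimators=100, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one row of SHAP values per sample

# Global view: aggregate SHAP values over the whole dataset to see which
# features drive the >50K income prediction overall.
shap.summary_plot(shap_values, X)

# Local view: explain one person's prediction as additive feature
# contributions pushing the output above or below the base value.
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :],
                matplotlib=True)
```

The summary plot answers "what does the model rely on in general?", while the force plot answers "why did the model make this particular prediction?".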
