LIME (Local Interpretable Model-Agnostic Explanations)


Introduction

In general, black-box models are highly complex, and generating a global explanation for their predictions is a very difficult task. The LIME algorithm was introduced in 2016 and gained attention in part for its ability to help understand image classification. With this algorithm, we can identify the parts of the input that affected the output the most.

LIME is short for Local Interpretable Model-Agnostic Explanations. It is a technique for understanding the predictions of a black-box model by explaining individual predictions with the help of a simpler, directly interpretable model.

LIME can be used with tabular, text and even image data. It generates a local explanation for a particular row, a set of words, or a set of superpixels, respectively. As the name indicates, it is a model-agnostic method, which means the technique can be applied to any model. The name also tells us that it generates local explanations: the technique explains a single instance/row in our data set rather than the model as a whole.
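For the tabular case, a local explanation for one row can be generated with the lime Python package roughly as follows. This is a minimal sketch: the breast-cancer dataset, the random forest model, and the parameter values are illustrative assumptions, not part of the original text.

```python
# Minimal sketch: explaining one row of tabular data with the lime package
# (pip install lime). Dataset and model choices are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# Any black-box classifier works; LIME only needs its predict_proba function.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single instance (row 0): which features pushed the prediction?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # [(feature condition, weight), ...]
```

The result is a list of (feature condition, weight) pairs for that single row, which is exactly the kind of local explanation described above.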

This technique is most valuable when we want to understand individual predictions rather than the overall behaviour of a model. An example of such a use case is healthcare, where each individual has a unique physiology. It would be wrong to assume that a model predicts correctly for every human. Since every individual is unique, we want to inspect single predictions and ensure that our model has not made a mistake.

In the image below (taken from the original paper), we have a model that predicts whether a person has the flu. For this particular individual, LIME indicates that sneeze and headache contributed to the “flu” prediction, while “no fatigue” is evidence against it. With this explanation, a doctor can make an informed decision about whether to trust the model’s prediction.

LIME relies on a local fidelity criterion: any explanation it generates must be faithful to the model’s behaviour in the vicinity of the instance being explained. To explore this vicinity, LIME perturbs the instance and observes how the model’s predictions change.
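Conceptually, the procedure looks roughly like the sketch below: perturb the instance, weight each perturbed sample by its proximity to the original, and fit a simple weighted linear model on that neighbourhood. The Gaussian perturbation scheme, the exponential kernel, and the function and variable names here are simplifying assumptions, not the exact internals of the lime library.

```python
# A rough, from-scratch sketch of the LIME idea for a binary classifier.
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_proba, x, scale, num_samples=5000, kernel_width=0.75):
    """Fit a proximity-weighted linear surrogate around instance x (1-D array).

    `scale` is the per-feature standard deviation used for perturbation;
    both it and the kernel choice are illustrative assumptions.
    """
    rng = np.random.default_rng(0)
    # 1. Perturb the instance: sample points in its vicinity.
    Z = x + rng.normal(0.0, scale, size=(num_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed points.
    preds = predict_proba(Z)[:, 1]  # probability of the positive class
    # 3. Weight each sample by its proximity to x (exponential kernel).
    distances = np.linalg.norm((Z - x) / scale, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # 4. Fit an interpretable model on the weighted neighbourhood.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_  # local feature importances

# Example usage, reusing the model and data from the earlier sketch:
# local_surrogate(model.predict_proba, X[0], X.std(axis=0))
```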

The above image is a toy example from the original paper, in which LIME generates an explanation for a single instance marked as a bold cross. The other red crosses and blue circles are sampled instances in the vicinity of the instance being explained (the bold cross).

In the image, proximity is represented by the size of the crosses and circles: the larger the symbol, the closer it is to the instance under investigation. The dashed line represents the learned linear explanation, which is faithful locally but not globally.

We have to keep in mind that features that seem important locally need not be important globally. By global importance, we mean that a feature matters for predictions across the whole data set; local importance considers only a single prediction, so the locally important features may or may not match the global ones.

Visualizations

Pros

  • The technique can be applied to tabular, text and image data

  • The technique is easy to use, as it is implemented in the lime Python package

Cons

  • The vicinity of a prediction can vary, and finding the right kernel width that yields an accurate local explanation is difficult (see the sketch after this list)

  • The technique can be inconsistent: repeated runs often produce unstable explanations for the same instance
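Both issues can be seen directly in the lime package: the size of the neighbourhood is controlled by the kernel_width argument of LimeTabularExplainer, and because the neighbourhood is sampled randomly, two runs on the same row can return different weights. The snippet below is a small sketch that reuses the illustrative model, X, data and explainer objects from the earlier tabular example.

```python
# Different kernel widths define different neighbourhoods, and can therefore
# produce different local explanations for the same instance.
narrow = LimeTabularExplainer(X, feature_names=list(data.feature_names),
                              mode="classification", kernel_width=1.0)
wide = LimeTabularExplainer(X, feature_names=list(data.feature_names),
                            mode="classification", kernel_width=10.0)

print(narrow.explain_instance(X[0], model.predict_proba).as_list())
print(wide.explain_instance(X[0], model.predict_proba).as_list())

# Same explainer, same row, two runs: the sampled neighbourhood differs,
# so the returned feature weights may differ as well.
print(explainer.explain_instance(X[0], model.predict_proba).as_list())
print(explainer.explain_instance(X[0], model.predict_proba).as_list())
```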

References

  1. "Why Should I Trust You?": Explaining the Predictions of Any Classifier - Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
