
Deep Learning - Surrogate


Last updated 4 years ago


What is Deep Learning?

  • Deep learning methods are based on artificial neural networks with representation learning.

  • Training can be supervised, semi-supervised, or unsupervised.

  • Common architectures include deep neural networks, deep belief networks, recurrent neural networks, and convolutional neural networks.

Making the Model

Dataset: Pima Indians Diabetes; Target: Outcome

A Keras Sequential model with 3 layers is trained on the data.

Accuracy: 72.73%
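A minimal sketch of such a model is shown below. The layer sizes, epoch count, and the randomly generated stand-in data (shaped like the 8-feature Pima dataset) are illustrative assumptions, not the book's exact configuration:

```python
import numpy as np
from tensorflow import keras

# Illustrative stand-in for the Pima data: 8 features, binary Outcome.
rng = np.random.default_rng(0)
X = rng.random((614, 8))
y = (X[:, 1] > 0.5).astype(int)  # placeholder target, not real labels

# A 3-layer Sequential model (layer sizes are assumptions).
model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(12, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # probability of diabetes
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```

On the real dataset, `model.evaluate` on a held-out split would report the accuracy quoted above.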

Implementation of Interpretability

In this section we interpret the Keras Sequential model using a Decision Tree as a surrogate.

We first found the best hyper-parameters for a DecisionTreeClassifier using grid search. The tree is then fit on the training features, but with the Outcome column replaced by the predictions of the Keras Sequential model, so the tree learns to mimic the network's behaviour rather than the ground-truth labels.
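The surrogate-fitting step can be sketched as follows. For a self-contained example, an `MLPClassifier` stands in for the Keras network, and the data and grid-search parameter grid are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

# Stand-in training data shaped like Pima (8 features, binary target).
rng = np.random.default_rng(0)
X_train = rng.random((614, 8))
y_train = (X_train[:, 1] + X_train[:, 5] > 1.0).astype(int)

# Stand-in black box in place of the Keras network.
black_box = MLPClassifier(hidden_layer_sizes=(12, 8), max_iter=500,
                          random_state=0).fit(X_train, y_train)

# Replace the true labels with the black box's predictions...
y_surrogate = black_box.predict(X_train)

# ...and tune a decision tree to mimic them via grid search.
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    {"max_depth": [2, 3, 4, 5], "min_samples_leaf": [1, 5, 10]},
    cv=5,
)
grid.fit(X_train, y_surrogate)
surrogate = grid.best_estimator_

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X_train) == y_surrogate).mean()
```

Fidelity, not accuracy on the true labels, is the quantity that tells us how faithfully the tree approximates the network.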

Visualizations

The tree above serves as a surrogate for the deep learning model. Its root node splits on the most important feature, Glucose.

In the test set, 399 samples are classified as diabetic and 215 as non-diabetic. If a person has low Glucose, Age is the next most important feature.

If Glucose is high, BMI is the next most important feature in determining whether the person is diabetic.

The full tree can be interpreted in the same step-by-step manner, with each level governed by the feature that is most important at that point.
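A tree diagram like the one described above can be produced with scikit-learn's `plot_tree`. The fitted tree, the data, and the output filename below are illustrative stand-ins; the feature names are those of the Pima dataset:

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt
import numpy as np
from sklearn.tree import DecisionTreeClassifier, plot_tree

features = ["Pregnancies", "Glucose", "BloodPressure", "SkinThickness",
            "Insulin", "BMI", "DiabetesPedigreeFunction", "Age"]

# Stand-in surrogate tree fit on illustrative data.
rng = np.random.default_rng(0)
X = rng.random((614, 8))
y = (X[:, 1] > 0.5).astype(int)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Each node shows its split feature, sample counts, and class.
fig, ax = plt.subplots(figsize=(14, 7))
plot_tree(tree, feature_names=features,
          class_names=["non-diabetic", "diabetic"], filled=True, ax=ax)
fig.savefig("surrogate_tree.png")  # hypothetical output path
```

Reading the diagram top-down reproduces the Glucose → Age / BMI interpretation given above.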

Code Implementation Here