Implementation of these techniques on different models


In this section, we show how to implement the interpretability techniques explained so far, applying them to different machine learning models.

Each model is interpreted using a different technique: Logistic Regression with SHAP, Random Forest with LIME, GBM with PDP and ICE, and a deep learning model with a surrogate model. A quick preview of the first pairing is sketched below.
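
As a preview, here is a minimal sketch of Logistic Regression interpreted with SHAP. It is illustrative only: it uses scikit-learn's built-in breast cancer dataset as a stand-in for the book's datasets and shap's LinearExplainer; the linked notebooks contain the full walkthroughs.

```python
# Minimal sketch: Logistic Regression + SHAP (illustrative dataset,
# not one of the book's datasets). Requires: pip install shap scikit-learn
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the model to be interpreted (max_iter raised so the solver converges)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# LinearExplainer is suited to linear models; it explains the log-odds output
explainer = shap.LinearExplainer(model, X_train)
shap_values = explainer(X_test)

# Global view: which features push predictions up or down, and by how much
shap.plots.beeswarm(shap_values)
```

Each notebook follows this same pattern: fit a model on one of the book's datasets, then apply its paired technique to explain the model's predictions.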

Each implementation is linked to a Google Colaboratory notebook, and every notebook demonstrates model interpretability on a different dataset. We encourage everyone to run the notebooks to see how interpretability can be achieved with Python code.

You can also download the notebooks and experiment with them on your local machine.

To learn more about Google Colaboratory, see https://colab.research.google.com.