To understand why Explainable AI is important, let us take the example of diabetic retinopathy, a diabetes complication that affects the eyes. It is caused by damage to the blood vessels of the light-sensitive tissue at the back of the eye (the retina). Suppose we use a Deep Learning model with Convolutional Neural Networks to classify retinal images as healthy or diabetic. We can fairly easily build a model that does a decent job, reaching a validation accuracy of around 90%. A few questions then arise: what did the model see in the image to classify it? Did the model look at the same diagnostic regions of the image that doctors do? Or did it do something else entirely? This is a high-stakes setting in which a patient can lose their eyesight if misdiagnosed (if our model says the eye is fine when it is actually damaged, the consequences are serious!). In such cases, explainability is meant to engender trust in the model.
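One simple way to probe "what did the model see?" is occlusion sensitivity: hide one region of the image at a time and measure how much the model's score drops; regions whose occlusion hurts the score most are the ones the model relied on. The sketch below is illustrative only, assuming a grayscale image and a stand-in scoring function (`occlusion_map` and `toy_predict` are hypothetical names, and the toy scorer replaces a real trained retinopathy classifier):

```python
import numpy as np

def occlusion_map(image, predict, patch=4, baseline=0.0):
    """Slide a patch over the image, replacing each region with a
    baseline value, and record how much the model's score drops."""
    h, w = image.shape
    base_score = predict(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # Large drop => this region was important to the prediction.
            heat[i // patch, j // patch] = base_score - predict(occluded)
    return heat

# Toy "model": scores an image by the brightness of its top-left quadrant.
# A real diabetic-retinopathy CNN would go here instead.
def toy_predict(img):
    return img[:8, :8].mean()

img = np.zeros((16, 16))
img[:8, :8] = 1.0  # a bright region the toy model keys on
heat = occlusion_map(img, toy_predict, patch=8)
# heat highlights the top-left quadrant: occluding it erases the score.
```

If the resulting heat map lights up clinically meaningful regions (microaneurysms, hemorrhages) rather than image artifacts, that is evidence the model is looking where doctors look.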