What is Explainable AI?

Explainability refers to the ability to explain the decisions made by an AI system. The demand for explainability has grown because most modern AI systems are complex black boxes that offer no insight into why they act the way they do.
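
To make this concrete, below is a minimal sketch of one common model-agnostic explanation technique, permutation feature importance, using scikit-learn. The dataset, model, and parameter choices here are illustrative assumptions, not a prescribed method: the point is simply that we can score how much a black-box model relies on each input feature by shuffling that feature and measuring the drop in accuracy.

```python
# Illustrative sketch: permutation feature importance with scikit-learn.
# The dataset and model are arbitrary choices for demonstration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A random forest is a typical "black box": accurate, but its individual
# predictions are hard to trace by inspection alone.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model depends on most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

Techniques like this do not open the black box itself, but they provide a human-readable account of which inputs drove the model's decisions, which is the core goal of explainability.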

Explainability enables transparency: being open and visible about the actions an AI system takes. Transparency, in turn, makes it possible to evaluate whether those actions are consistent with widely accepted societal values.

Explainability is one component of Ethical AI, a broader field that also covers concerns such as privacy, security, and transparency. AI ethics is a set of values, principles, and techniques that apply widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.

AI and machine learning systems have access to tremendous amounts of data and computing power, and they will only become more capable and more widely used as the information age advances. At this pace of development, it may not be long before AI technologies become gatekeepers for vital public interests and sustainable human development. This makes explainability in AI a crucial point of discussion and deliberation.
