Humans do not trust blindly. The key idea behind Explainable Artificial Intelligence (XAI) is to enable a formal understanding of the decision-making of any model. In particular, neural network models have delivered remarkable results in AI, yet we still lack methods to explain the decisions behind these models and must treat them as black boxes. Neural networks are increasingly deployed in sensitive domains such as healthcare, defence, automotive systems, and finance, where the decisions they inform are critical, for example, when a human life is at stake in healthcare.
Neural networks map inputs to outputs through a sequence of layers. Their results have been difficult to explain because the parameters and weights in these models are abstract quantities with no direct connection to the real world.
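To make this concrete, the following is a minimal sketch of such a layered mapping, using NumPy and hypothetical dimensions (4 inputs, 8 hidden units, 2 outputs). The learned parameters W1 and W2 are plain arrays of numbers with no intrinsic real-world meaning, which is precisely why the mapping is hard to interpret.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4 input features, 8 hidden units, 2 output scores.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    """Map an input vector to output scores through two layers."""
    h = np.maximum(0.0, x @ W1 + b1)  # hidden layer with ReLU activation
    return h @ W2 + b2                # output scores

x = rng.normal(size=4)  # an example input
print(forward(x))       # the scores alone do not explain why they were produced
```

Inspecting W1 or W2 directly tells us nothing about which input features drove a particular output, which is the gap XAI methods aim to close.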