I understand the need for explainability in AI. However, I am uncertain about what is meant by 'making AI explainable'.

What needs to be explainable? Is it the output of a model? Does it refer to the model itself? Does it refer to the user interface of the tool that the AI is a part of? Is it all of the above? If so, what is not included in Explainable AI?

What do we strive for when making AI 'explainable'? Are there commonly referenced definitions of AI explainability that go beyond 'to understand how a decision was made'?

Robin van Hoorn

1 Answer


As a very rough answer: we need to be able to justify the model we came up with, that is, to convince someone that the model is working logically well. By "logically well" I mean that the decision-making process should be clear, either for a single prediction (local explainability) or for the model as a whole (global explainability). I would suggest taking a look at this page: https://towardsdatascience.com/decrypting-your-machine-learning-model-using-lime-5adc035109b5
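
For concreteness, here is a minimal sketch of a local explanation using the `lime` package (the technique covered in the linked article). The dataset and classifier are just placeholders; any model exposing `predict_proba` would work:

```python
# Sketch: explaining a single prediction with LIME.
# Assumes scikit-learn and the `lime` package are installed.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Build an explainer over the training distribution.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction (local explainability): LIME fits a simple
# surrogate model around this instance and reports feature weights.
explanation = explainer.explain_instance(data.data[0], model.predict_proba)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Each printed line is a human-readable feature condition and its weight toward the predicted class, which is exactly the kind of per-decision clarity meant above.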

M SA