The transparency of Artificial Intelligence (AI) models is an essential criterion for the deployment of AI in high-risk settings, such as medical applications. Consequently, numerous approaches for explaining AI systems have been proposed over the years (Samek et al., 2021).
However, with a multitude of eXplainable AI (XAI) approaches at one’s disposal, answering the question of which method is most suitable for the application at hand is difficult. The answer depends on a variety of factors: for example, whether the XAI method is compatible with the model to be explained and, beyond that, whether the aspect of the model’s reasoning captured by the explainer fulfils the stakeholder’s requirements. Even once those points are settled, the question remains which method truly is the “best” choice.