This is the first of three blogs covering explainable artificial intelligence (xAI) technical reporting in the iToBoS project.
iToBoS aims to implement an artificial intelligence (AI) cognitive assistant framework to aid dermatologists in detecting melanoma (skin cancer) earlier than current state-of-the-art methods allow. To achieve this, the consortium has developed several different machine learning models, each of which fulfils a separate function needed for the overall tool to be effective.
However, this approach has its drawbacks. Because these models rely on deep learning techniques, they function as "black boxes": their human operators cannot see how or why they produce a given output, which hinders the oversight necessary to deploy the tool safely in a medical context. This lack of transparency makes it difficult to evaluate which features the models rely on for their decisions and, consequently, to understand why the models fail when they do.
In the world of AI, efforts to counter the black box problem fall under the umbrella of explainable AI (xAI). Within iToBoS, partners from the Fraunhofer Heinrich-Hertz-Institute (FHHI) are responsible for exploring the most appropriate xAI methods for the tool. It’s a complex job, as each model operates differently, thus requiring different xAI approaches. This series of blogs breaks down the different approaches to explainability used within the iToBoS project.
Most of the iToBoS models process imaging data with architectures built on convolutional layers (convolutional neural networks). For these models, the proposed xAI methods fall into two categories: local and global explanations.
Local explanations identify the features relevant to a specific prediction. The most prominent technique in this category is Layer-wise Relevance Propagation (LRP) [1]. LRP propagates a network's output backwards through its layers, assigning a relevance score to each input. For image processing, this means identifying the individual pixels that contribute positively or negatively to the final output. By breaking down the elements involved in the model's decision-making, LRP provides transparency and helps developers mitigate issues such as detections based on undesired image features.
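To give a feel for what a local explanation looks like in practice, here is a minimal sketch of pixel-level LRP using the open-source Captum library. The VGG-16 network, the random input tensor, and the target class are placeholders standing in for the actual iToBoS models and data, which are not shown here.

```python
# Minimal LRP sketch with Captum. The model and input are illustrative
# placeholders, not the iToBoS models.
import torch
from torchvision.models import vgg16
from captum.attr import LRP

model = vgg16(weights=None).eval()      # stand-in convolutional classifier
image = torch.rand(1, 3, 224, 224)      # placeholder skin-lesion image tensor

lrp = LRP(model)
target = model(image).argmax(dim=1)     # explain the predicted class
relevance = lrp.attribute(image, target=target)

# Positive values mark pixels supporting the prediction,
# negative values mark pixels speaking against it.
print(relevance.shape)                  # torch.Size([1, 3, 224, 224])
```

Visualising the resulting relevance map as a heatmap over the input image reveals which regions of the lesion drove the prediction.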
Global explanations aim to identify the prediction strategies the model employs in general. Here the most widely used technique is Concept Relevance Propagation (CRP) [2]. CRP combines insights from local and global xAI methods by explaining a model's individual predictions in terms of human-interpretable concepts; in image processing, these concepts correspond to the convolutional filters (kernels) that make up the network. In contrast to LRP, which shows the parts of the image most relevant to the overall decision, CRP produces heatmaps that highlight the parts of the image providing evidence for a specific concept, and it computes these concept relevances at individual network layers.
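The sketch below illustrates the concept-level idea in a simplified way: it computes LRP relevance at an intermediate convolutional layer and aggregates it per filter to rank the filters ("concepts") most relevant to a prediction. This uses Captum's LayerLRP as a rough approximation of CRP's first step rather than the full CRP method used in the project; again, the model and layer choice are placeholders.

```python
# Approximate concept-level view: per-filter relevance at an intermediate
# layer via LayerLRP. This is an illustration of the idea, not the full CRP
# method; model and layer are placeholders.
import torch
from torchvision.models import vgg16
from captum.attr import LayerLRP

model = vgg16(weights=None).eval()
image = torch.rand(1, 3, 224, 224)      # placeholder input image

layer = model.features[28]              # last convolutional layer of VGG-16
layer_lrp = LayerLRP(model, layer)
target = model(image).argmax(dim=1)
layer_relevance = layer_lrp.attribute(image, target=target)   # (1, C, H, W)

# Aggregate spatially to rank filters by their relevance to this prediction.
channel_relevance = layer_relevance.sum(dim=(2, 3)).squeeze(0)
top_concepts = channel_relevance.topk(5).indices
print(top_concepts)                     # indices of the most relevant filters
```

CRP goes further by producing a conditional heatmap for each such concept, showing where in the image the evidence for that filter is located.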
While LRP and CRP form the basis of the xAI methods for most iToBoS models, the diversity of models used in the project necessitates different approaches to explainability. The following blogs describe the project's approach to mapping each model to its most appropriate xAI methods.
[1] G. Montavon, A. Binder, S. Lapuschkin, W. Samek, and K.-R. Müller, “Layer-Wise Relevance Propagation: An Overview,” in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, W. Samek, G. Montavon, A. Vedaldi, L. K. Hansen, and K.-R. Müller, Eds., Cham: Springer International Publishing, 2019, pp. 193–209. doi: 10.1007/978-3-030-28954-6_10.
[2] R. Achtibat et al., “From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation,” Nat. Mach. Intell., vol. 5, no. 9, pp. 1006–1019, Sep. 2023, doi: 10.1038/s42256-023-00711-8.