xAI Technical Reporting – Part 2

This is the second blog in a series of three covering the iToBoS project’s approach to explainable artificial intelligence (xAI) technical reporting.

iToBoS seeks to develop an artificial intelligence (AI)-powered scanner to aid in the faster diagnosis of skin cancer. To achieve this goal, the tool employs a variety of deep learning models, most of which rely on image analysis techniques. However, the predictive power of these AI tools comes with a downside: the “black box” problem. AI models don’t explain how they arrive at a specific output, which hinders human oversight and remains a major hurdle to deploying these technologies in medical contexts.

In AI research, explainability, or explainable AI (xAI), refers to methods that aim to shine a light into the black box by uncovering how tools arrive at certain outputs, thus enabling the transparency and human oversight necessary to safely deploy these technologies in the real world. In iToBoS, xAI is led by the Fraunhofer Heinrich-Hertz-Institute (FHHI). It’s a complex task, as each model used in the tool requires different xAI methods. The first blog in this series explained the two broad categories of xAI methods used in the project, local and global. This blog introduces the iToBoS approach to mapping the different types of models used in the tool to the most suitable xAI methods for each.

Mole Detection Model:

This model is responsible for detecting moles in skin images and plays a key role by passing the detected lesions to subsequent models for further processing. For local explanations of the mole detection model, which seek to identify the features that contribute to the model’s outputs, the project uses Layer-wise Relevance Propagation (LRP). This approach, explained in more detail in the previous blog, assigns relevance scores to each of the inputs that contribute to a model’s output; for the mole detection model, this means identifying which pixels in an image of a skin lesion led to the final result. For global explanations, which aim to characterize the model’s prediction strategy more broadly, the project uses an adapted version of the Concept Relevance Propagation (CRP) method that enables its use in localization models. This adapted version is known as L-CRP [1].
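To give a rough feel for how LRP redistributes an output score back onto the input pixels, the snippet below applies the LRP-epsilon rule to a toy two-layer ReLU network in plain NumPy. It is only a conceptual sketch: the actual detection model is a deep localization network, the weights and inputs here are random stand-ins, and biases are omitted for simplicity.

```python
import numpy as np

# Toy two-layer ReLU network: flattened image patch -> hidden layer -> detection score.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))   # hidden x input weights (biases omitted)
W2 = rng.normal(size=(1, 8))    # output x hidden weights
x = rng.uniform(size=16)        # stand-in for pixel intensities

# Forward pass.
a1 = np.maximum(0, W1 @ x)      # hidden activations
score = W2 @ a1                 # detection score to be explained

def lrp_epsilon(a_in, W, R_out, eps=1e-6):
    """Redistribute relevance R_out from a layer's outputs to its inputs (LRP-epsilon rule)."""
    z = W @ a_in                            # pre-activations
    s = R_out / (z + eps * np.sign(z))      # stabilized relevance per unit of activation
    return a_in * (W.T @ s)                 # relevance assigned to each input neuron

# Backward relevance pass: start from the score, end at the input "pixels".
R_hidden = lrp_epsilon(a1, W2, score)
R_input = lrp_epsilon(x, W1, R_hidden)

print("detection score:", score)
print("sum of pixel relevances (approximately equals the score):", R_input.sum())
```

The relevance scores sum (approximately) to the model output, which is the conservation property that makes LRP heatmaps interpretable as a decomposition of the prediction.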

Mole Tracking Model:

This model is responsible for image matching, which enables the observation and tracking of changes to a patient’s lesions over time. For the mole tracking model, iToBoS uses local explainability methods only, in this case a modified version of LRP called BiLRP [2]. This method generates similarity heatmaps by computing and combining local attributions for each pair of input features.
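The pairwise structure that BiLRP produces can be sketched for the simplest possible case, a linear embedding, where the dot-product similarity between two embedded images decomposes exactly into contributions of feature pairs. The snippet below only illustrates that decomposition; the actual method [2] nests LRP passes through a deep similarity network, and all values here are random stand-ins.

```python
import numpy as np

# Linear embedding phi(x) = W x; similarity between two lesion images is the
# dot product of their embeddings. BiLRP explains this similarity by attributing
# it to *pairs* of input features, one from each image.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 6))      # embedding: 6 input features -> 4 dimensions
x1 = rng.uniform(size=6)         # e.g. features of a lesion at an earlier visit
x2 = rng.uniform(size=6)         # e.g. features of the matched lesion at a later visit

similarity = (W @ x1) @ (W @ x2)

# Pairwise relevance: R[i, j] is the contribution of the feature pair (x1[i], x2[j]),
# here R[i, j] = sum_m (W[m, i] * x1[i]) * (W[m, j] * x2[j]).
R = np.einsum('mi,i,mj,j->ij', W, x1, W, x2)

print("similarity:", similarity)
print("sum of pairwise relevances (equals the similarity):", R.sum())
```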

Mole Classification Model:

This model accepts images of lesions as input and classifies them into predefined categories. The mole classification model is a type of convolutional neural network (CNN), a deep learning architecture used mainly for computer vision tasks such as object recognition and image classification. The xAI methods applied to this model are both local (LRP) and global (CRP).
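One way to see what distinguishes CRP from plain LRP is the conditioning step: relevance is restricted to a single hidden unit or channel, a stand-in for a learned concept, before being propagated down to the input. The toy NumPy sketch below illustrates only this masking idea on a two-layer network with random weights; real CRP operates on the channels of a deep CNN and additionally collects reference samples for each concept.

```python
import numpy as np

# Toy two-layer classifier: input "pixels" -> hidden units (stand-ins for concepts)
# -> scores for three lesion categories.
rng = np.random.default_rng(2)
W1 = rng.normal(size=(5, 12))       # concept-layer weights
W2 = rng.normal(size=(3, 5))        # class-score weights
x = rng.uniform(size=12)            # stand-in for input pixels

a1 = np.maximum(0, W1 @ x)          # concept activations
scores = W2 @ a1                    # class scores
target = int(np.argmax(scores))     # explain the predicted class

def lrp_epsilon(a_in, W, R_out, eps=1e-6):
    """LRP-epsilon rule: redistribute relevance R_out onto the layer's inputs."""
    z = W @ a_in
    s = R_out / (z + eps * np.sign(z))
    return a_in * (W.T @ s)

# Plain LRP: start from the target class score and keep all concepts.
R_concepts = lrp_epsilon(a1, W2, np.eye(3)[target] * scores)
heatmap_full = lrp_epsilon(x, W1, R_concepts)

# CRP-style conditioning: keep relevance only on one chosen concept unit,
# zero out the others, then continue the propagation to the input.
concept_id = 2
mask = np.zeros_like(R_concepts)
mask[concept_id] = 1.0
heatmap_concept = lrp_epsilon(x, W1, R_concepts * mask)

print("full LRP heatmap:           ", np.round(heatmap_full, 3))
print("concept-conditional heatmap:", np.round(heatmap_concept, 3))
```

Comparing the two heatmaps shows where a specific concept is used in the input, which is the building block CRP uses to summarize a model’s prediction strategy at a global level.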

The final blog in the series covers the remaining models used within iToBoS and the xAI methods applied to each.

[1] F. Pahde, G. Ü. Yolcu, A. Binder, W. Samek, and S. Lapuschkin, “Optimizing Explanations by Network Canonization and Hyperparameter Search,” Mar. 27, 2023, arXiv: arXiv:2211.17174. doi: 10.48550/arXiv.2211.17174.

[2] O. Eberle, J. Büttner, F. Kräutli, K.-R. Müller, M. Valleriani, and G. Montavon, “Building and Interpreting Deep Similarity Models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 3, pp. 1149–1161, Mar. 2022, doi: 10.1109/TPAMI.2020.3020738.