This is the fifth blog in a series about a recent workshop organised by the iToBoS project on explainable artificial intelligence (xAI) and social and ethical issues related to AI.
The event was hosted by project partners from Trilateral Research as part of a broader effort to engage with stakeholders ahead of a report on privacy, data protection, social and ethical issues, and xAI. The event attracted 19 participants, including patient advocates, clinicians, and experts in IT, law and ethics.
The event included a breakout room focused exclusively on xAI, which refers to methods for explaining AI decision-making. Why is this so important? The iToBoS project is developing an intelligent full body scanner and diagnostics tool to deliver faster, personalized melanoma diagnoses. But many AI models present the “black box” problem: they provide no explanation of how their outputs are produced, so human end users cannot exercise meaningful oversight. To counter this, xAI methods can be applied to break down the processes that lead to a given result, rendering them understandable to human end users and therefore acceptable for use in high-stakes settings such as healthcare.
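To make this concrete, the sketch below shows the core idea behind one family of such methods, relevance propagation: a prediction score is redistributed backwards through the network so that each input feature receives a share of the “credit” for the result. This is only a toy illustration on a made-up two-layer network, not the iToBoS implementation; the layer sizes, weights and epsilon value are arbitrary choices for the example.

```python
import numpy as np

# Toy two-layer ReLU network with random weights -- NOT the iToBoS model.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)   # input (4 features) -> hidden (6 units)
W2, b2 = rng.normal(size=(6, 3)), np.zeros(3)   # hidden -> output (3 classes)

def forward(x):
    a1 = np.maximum(0.0, x @ W1 + b1)           # hidden activations (ReLU)
    z2 = a1 @ W2 + b2                           # class scores
    return a1, z2

def stabilize(z, eps=0.25):
    # keep denominators away from zero (the "epsilon" in LRP-epsilon)
    return z + eps * np.where(z >= 0, 1.0, -1.0)

def lrp_epsilon(x, target):
    """Redistribute the target class score back onto the input features."""
    a1, z2 = forward(x)
    R2 = np.zeros_like(z2)
    R2[target] = z2[target]                     # start from the score being explained
    # output -> hidden: each hidden unit gets relevance in proportion
    # to its contribution to the class score
    R1 = a1 * (W2 * (R2 / stabilize(a1 @ W2 + b2))).sum(axis=1)
    # hidden -> input: the same rule applied one layer down
    R0 = x * (W1 * (R1 / stabilize(x @ W1 + b1))).sum(axis=1)
    return R0                                   # one relevance score per input feature

x = rng.normal(size=4)
_, scores = forward(x)
print("relevance per input feature:", lrp_epsilon(x, target=int(scores.argmax())))
```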
The next discussion question in the breakout room was: How can the implemented xAI methods, such as Layer-wise Relevance Propagation (LRP) and Concept Relevance Propagation (CRP), enhance non-technical people’s understanding of the iToBoS AI models? Are there other techniques that might be more intuitive and user-friendly? The responses to this question are reflected in the word map in Figure 1.
Figure 1. Slido results from breakout room discussion
This question explored how the project’s existing explainability methods could shape the way people without technical expertise understand the AI systems. Based on the word cloud and the discussion, user-friendly “visualization” of the outputs emerged as a key component for boosting the efficacy of explainability methods. Participants also suggested pairing the visualizations with simple text descriptions that explain them further.
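As a rough illustration of what such a user-facing presentation could look like, the snippet below overlays a relevance heatmap (of the kind methods like LRP or CRP produce) on an input image and adds a one-line plain-language caption. The image and relevance map here are synthetic placeholders, so this is a sketch of the presentation idea rather than of the actual iToBoS interface.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical inputs: a lesion photo and a per-pixel relevance map.
# Synthetic data keeps the sketch runnable; real outputs will differ.
rng = np.random.default_rng(1)
image = rng.uniform(size=(224, 224, 3))          # stand-in for a dermoscopy image
yy, xx = np.mgrid[0:224, 0:224]
relevance = np.exp(-((yy - 120) ** 2 + (xx - 100) ** 2) / (2 * 30 ** 2))  # fake "hot spot"

def explain_visually(image, relevance, threshold=0.5):
    """Overlay a relevance heatmap on the image and add a plain-language caption."""
    fig, axes = plt.subplots(1, 2, figsize=(8, 4))
    axes[0].imshow(image)
    axes[0].set_title("Input image")
    axes[1].imshow(image)
    axes[1].imshow(relevance, cmap="hot", alpha=0.5)   # semi-transparent heatmap
    axes[1].set_title("Regions the model relied on")
    for ax in axes:
        ax.axis("off")
    # Simple text description to accompany the visualization,
    # as suggested by workshop participants.
    share = float((relevance > threshold * relevance.max()).mean()) * 100
    fig.suptitle(
        f"The highlighted area ({share:.1f}% of the image) contributed most to the prediction."
    )
    plt.show()

explain_visually(image, relevance)
```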
One participant raised a concern about whether the generated explanations would be consistent over consecutive runs of the AI models, and how users could be assured that the explainability results are definitive. This concern was framed in the context of enhancing trust in the model’s results. In response, another participant explained that AI models operate on probabilities and therefore do not always provide the same answer for a given input, so explainability cannot guarantee identical results each time. It can, however, offer users a way to trace back and understand how the final answers were derived.
Another response in the word cloud was the “so what analysis.” This refers to the position that a technical definition of an explainability method holds no value for clinicians or patients who lack the knowledge needed to comprehend it. Explainability must therefore take different forms for developers, who have technical expertise and may need to evaluate their model’s operations, and for non-technical users, who want to understand, in non-technical language, which factors and alternatives were considered in reaching a decision. For example, this could include identifying which data were used or which alternative treatments the model considered.
During this discussion, a participant suggested that in order for explainability to be useful, end users need to receive basic training in AI. This conversation is continued in the following blog.
More details at Stakeholder workshop on the social and ethical impacts of xAI in healthcare.