This is the fourth in a series of blogs about a recent workshop coordinated by the iToBoS project focused on explainable artificial intelligence (xAI) and ethical and social issues related to AI.
The event was organised by project partners from Trilateral Research as part of an ongoing effort to collect diverse stakeholder perspectives for a forthcoming report on privacy, data protection, social and ethical issues, and xAI. In all, the event drew 19 participants, including patient advocates, clinicians, and experts in IT, law, and ethics.
Explainability and social and ethical issues are central to the iToBoS project, which aims to create an intelligent full-body scanner and Computer Aided Diagnostics tool for faster, personalised diagnoses of melanoma. Because these tools incorporate AI, they inherit the “black box” problem: AI models do not explain how they reach their outputs, which prevents oversight because human operators cannot evaluate the tool’s decision making. In a medical context, this creates unacceptable risks, essentially forcing users to blindly trust an algorithm’s outputs for high-stakes decisions such as cancer screening. A central component of iToBoS therefore addresses xAI, developing methods that explain how an AI model reaches a given output in order to enable transparency and oversight.
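The specific xAI methods developed in iToBoS are not detailed in this post, but the sketch below gives a loose sense of what a post-hoc explanation can look like for an image classifier. It computes an occlusion sensitivity map: patches of a lesion image are masked one at a time, and the drop in the model’s predicted probability indicates which regions drive the output. The `predict_melanoma_prob` function, patch size, and synthetic image are placeholders for illustration only, not components of the project.

```python
import numpy as np

def predict_melanoma_prob(image: np.ndarray) -> float:
    """Placeholder for a trained classifier returning P(melanoma) for an image.
    In practice this would be the diagnostic model's prediction function."""
    # Toy stand-in: a brighter central region yields a higher "probability".
    h, w = image.shape[:2]
    centre = image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    return float(centre.mean() / 255.0)

def occlusion_sensitivity(image: np.ndarray, patch: int = 16,
                          baseline: float = 0.0) -> np.ndarray:
    """Slide an occluding patch over the image and record how much the
    predicted probability drops; large drops mark influential regions."""
    h, w = image.shape[:2]
    original = predict_melanoma_prob(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline  # black out one patch
            heatmap[i // patch, j // patch] = original - predict_melanoma_prob(occluded)
    return heatmap

# Usage: explain a single (synthetic) 64x64 lesion image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
print(occlusion_sensitivity(img, patch=16))
```

A map like this is one simple way an explanation could be surfaced to a clinician, showing which image regions most influenced the prediction rather than presenting the output as an unexplained score.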
The workshop included a breakout room focused exclusively on explainability. The objective of this session was to facilitate a discussion among participants about the following risks:
- Medical professionals may misuse, miscommunicate, or misinterpret the outputs of AI-driven clinical support tools, causing unintentional harm to patients
- Clinicians are not required to undertake training and education in interpreting the system’s outputs
- It may be challenging to locate the decision points in the algorithm’s development at which responsibility should be assigned
- If end users are not adequately trained in the use of the iToBoS tools (the total body scanner and the AI Cognitive Assistant (AICA)) and the interpretation of their outputs, patients may receive insufficient clinical evaluation, misdiagnosis, or reduced quality of care
During the breakout sessions, participants discussed a series of questions via Slido, which represents their responses as word clouds. As xAI methods for many of the project’s AI systems have already been developed, the questions were designed to prompt conversation about how these AI systems and their corresponding xAI methods could be used and understood by end users, clinicians and patients, and about the best ways to address the needs of these groups.
The first question asked: how can xAI impact transparency, accountability, and responsibility in health service provision? Answers to this question are reflected in the word cloud in Figure 1.
Figure 1. Slido results from breakout room discussion
Responses to this question centred on trust in AI systems as both a central theme and the end goal of explainability efforts, which should enhance overall transparency and accountability. Other answers focused on how xAI ensures the traceability of a system’s actions, boosting accountability and responsibility and promoting ethical AI use.
The discussions surrounding the following two questions are covered in the next blog in this series.
More details at Stakeholder workshop on the social and ethical impacts of xAI in healthcare.