This is the sixth blog in a series about a workshop on social and ethical issues and explainable artificial intelligence (xAI) recently hosted by the iToBoS project.
The event was facilitated by project partners from Trilateral Research, who organised it as part of a broader effort to collect stakeholder perspectives to inform an upcoming report on privacy, data protection, social and ethical issues, and xAI. The event drew 19 participants, including patient advocates, clinicians and experts in law and ethics.
The event included a breakout room to discuss several questions related to xAI, some of which were addressed earlier in this blog series. xAI plays a central role in the project: because iToBoS is developing an AI-powered full-body scanner and computer-aided diagnostics tool, explainability is crucial for overcoming the “black box” problem and providing the oversight and transparency needed for the tool to be adopted in a medical context.
The next discussion question in the breakout room was: What types of training resources could be used to help clinicians better understand the model performance metrics as well as the xAI outputs? Would patients be interested in that kind of knowledge as well?
Figure 1. Slido results from breakout room discussion
This question sought feedback on how clinicians could be educated about the AI model’s performance and explainability methods, to ensure they can understand the results. Throughout the discussion, clinicians emphasised their interest in receiving more detailed training in AI and explainability so that they could implement AI tools with confidence.
Within the group, clinicians expressed a high degree of interest in the accuracy of AI models. As such, they argued that developers should explain what the metrics used to evaluate AI models actually measure, and why those metrics were chosen over others for a specific application. Another theme that emerged in this discussion was the importance of training clinicians in the fundamental principles of AI models, so that they can understand both performance metrics and explanations in technical terms. The group suggested that this training could be delivered via demo applications built on test data and designed to show clinicians how the models and xAI methods operate. Other suggested training formats included interactive workshops, visual tutorials, and user-friendly documentation.
When it came to patients’ needs, the group concluded that patients have a different set of requirements than clinicians. Patients depend on the clinicians operating the complex machinery (in this case, the AI system) to explain their diagnosis, so clinicians require a deeper understanding of the mechanics of an AI tool than their patients do. In line with this, the clinicians in the group preferred to be the ones explaining the results directly to their patients.
To conclude, the breakout session on explainability provided valuable insights into optimising AI systems and their explainability methods to better support end users. The discussions underscored the need for effective communication between clinicians and developers to build trust and transparency. This collaboration is essential to ensure that AI-driven diagnoses are accurate, unbiased, and clearly understood by both clinicians and patients. By enhancing the interpretability of AI outputs and bridging knowledge gaps, we can work towards more reliable and ethical AI applications in healthcare, ultimately improving patient care and clinical decision-making.
More details can be found in Stakeholder workshop on the social and ethical impacts of xAI in healthcare.