This is the second blog in a series about a recent workshop conducted by iToBoS partners on the ethical and social issues surrounding artificial intelligence (AI), as well as key considerations in explainable AI (xAI).
The workshop was moderated by iToBoS partners from Trilateral Research and was attended by 19 people, including patient advocates, clinicians, and experts in IT, law, and ethics. The event was one of a series of workshops facilitated by Trilateral Research, with the goal of collecting diverse perspectives to inform a forthcoming report on privacy, data protection, social and ethical issues, and xAI in relation to the iToBoS project.
iToBoS is developing an intelligent full-body scanner and an AI cognitive assistant to aid in the earlier diagnosis of melanoma. The tool integrates diverse types of data, including medical records, genetic data, and in vivo imaging, to deliver personalised care. The use of personal data brings risk alongside its benefits, increasing the potential cost of data misuse or breaches. As such, addressing social, ethical, legal, privacy, and data protection risks is central to the project. After introductions to iToBoS, explainability, and Privacy Impact Assessment+ (PIA+), which were summarized in the previous blog, the workshop split into breakout rooms to discuss key questions related to these issues.
The first breakout room centred on the question: what considerations should be introduced to ensure the sustainability of the iToBoS platform? Answers to the question, depicted in the poll results in Figure 1, were broad and varied, but all reflected an important theme: the need to build trust with clinicians and patients alike in order to facilitate the adoption of the tool.
Figure 1. Poll results from breakout room discussion.
One set of answers, “publish (anonymized) AI models”, “accessible infrastructure”, and “available for everyone”, kickstarted a discussion about the importance of transparency and its role in building trust in the iToBoS tool among end users. Participants argued that granting end users access to the AI models underlying the tool, rather than keeping its inner workings opaque, may help build the confidence needed to facilitate the tool’s uptake in medical settings.
Similarly, the answers “validation” and “constant actualization” echoed the emphasis on accountability and transparency. Participants made the case that validating the tool’s performance would build trust by providing evidence to support claims about the tool’s efficacy. Likewise, the comment “benchmarks” initiated a conversation about what participants considered a need for performance standards and regulations for AI tools, as well as penalties for the misuse of AI and misrepresentation of its accuracy.
On the heels of this discussion was a conversation about the types of training doctors should receive in relation to AI. Participants argued that clinicians should be introduced to AI as part of their regular training, and highlighted the importance of framing iToBoS and other AI products as tools to assist, rather than replace, medical experts and practitioners. However, while participants believed that AI training should become a standard element in medical school curricula, they argued that training on iToBoS specifically should be delivered by the tool’s developer. This stance is reflected in the answer “good technical support.”
The group then moved on to a question about the social and ethical risks of the iToBoS tool, which is covered in the next blog in the series.
More details are available at Stakeholder workshop on the social and ethical impacts of xAI in healthcare.