This is the third blog in a series about a workshop hosted by the iToBoS project on the ethical and social issues surrounding artificial intelligence (AI) and key considerations for explainable AI (xAI) approaches.
The workshop, conducted by iToBoS partners from Trilateral Research, drew 19 participants, including patient advocates, clinicians, and experts in IT, law, and ethics. The workshop was part of a broader effort to collect diverse perspectives to inform a forthcoming report on privacy, data protection, social and ethical issues, and xAI, led by Trilateral Research.
Why are these issues so central to iToBoS? The project aims to build an AI-powered full body scanner accompanied by an intelligent diagnostics tool to aid in the faster diagnosis of melanoma. The tool assists in the provision of personalised care by integrating a variety of data for each patient, including medical records, genetic data, and in vivo imaging. This is where both the promise and the risks of the tool originate: while compiling so much personal data enables highly customised treatment, mishandling that data could have serious impacts on patients, including genetic discrimination. As such, addressing and mitigating these risks is a core effort of the project.
This breakout room centred on the question: what recommendations should be implemented to limit ethical and social risks for future iterations of the AI Cognitive Assistant (AICA) and iToBoS? Answers to the question, depicted in the visualisation of poll results in Figure 1, covered many elements but reflected a common theme: the importance of earning patient and clinician trust.
Figure 1. Poll results from breakout room discussion
Some answers revolved around transparency and trust building. Responses such as “data transparency” and “data availability” reflect the importance of providing access to the data shaping the AI tool’s outputs as a means of oversight and trust building. Alongside these answers was “data protection”: participants recognised this as highly important given the sensitivity of the data used by the tool, arguing that it necessitates high-quality protective measures alongside transparency initiatives.
With the next set of answers, “algorithms,” “train,” and “training set,” the group emphasised the importance of producing effective, accurate, and trustworthy AI tools by training them on high-quality datasets. The group also argued that medical AI tools should be improved continuously through ongoing training on new, high-quality, accurate datasets.
Finally, participants expressed concerns about the potential downsides of AI: inaccuracy, bias, and misuse. The phrase “no trust environment” prompted a conversation in which participants described being told that AI tools were accurate and cutting-edge when those tools clearly functioned poorly. The group also raised concerns that readily available AI tools could encourage negative behaviours in end users, such as people using ChatGPT for diagnoses and thereby being exposed to anxiety-inducing misinformation. Lastly, with the answers “more (diverse) data”, “skin”, “world/races”, and “types”, participants discussed the well-documented propensity of AI algorithms to replicate the biases in their training datasets. This risk is heightened in the context of the iToBoS project, where the use of genetic data may expose information about patients’ ethnicity, family history, and genetic predisposition to certain illnesses.
The second breakout room, which is covered in the next blog in this series, addressed issues relating to xAI in the iToBoS project.
More details at Stakeholder workshop on the social and ethical impacts of xAI in healthcare.