Stakeholder workshop on the social and ethical impacts of xAI in healthcare

Online, 8/07/2024.

iToBoS partners hosted a workshop on the social and ethical impacts of explainable artificial intelligence (xAI) in healthcare, where they discussed and debated a number of issues that strike at the heart of the iToBoS project: privacy, data protection, explainability, clinical efficacy, and ethics.

The workshop was hosted by four researchers from Trilateral Research, who introduced the iToBoS project, provided an overview of goals and key challenges, and moderated breakout rooms.

The iToBoS project is developing an AI diagnostic platform to aid in the early diagnosis of melanoma. The platform includes an AI-powered total body scanner and a Computer Aided Diagnostics (CAD) tool, which integrates multiple data sources including medical records, genomic data, and in vivo imaging. The project’s objectives include the following:

  • Achieving earlier, patient-tailored diagnoses of melanoma.
  • Developing and validating an AI cognitive assistant tool that empowers healthcare practitioners and offers a risk assessment for every mole.
  • Integrating all available information about each patient to personalise the prognosis.
  • Providing methods for visualising, explaining, and interpreting AI models to overcome the “black box” challenge.

Given the AI technology and highly sensitive data required to achieve the project’s goals, as well as the complexities of integrating novel technologies into healthcare, addressing questions of safety, ethics, social impact, and efficacy is central to the project.

The workshop occurred as part of a broader effort to collect the perspectives of patients, advocates, and experts to inform a forthcoming report led by Trilateral Research on privacy, data protection, and social and ethical impact assessments related to iToBoS. The event drew 19 attendees, including experts in IT, law, ethics, and health, as well as patient advocates and partners internal to the project. The purpose of the workshop was to gather attendees’ perspectives on the ethical and social impacts of AI in healthcare, the promise and pitfalls of xAI, and the progress of the iToBoS project in these areas. All attendees were therefore encouraged to speak openly and honestly, and the workshop was conducted under the Chatham House Rule, which emphasises participant confidentiality and open dialogue.

The event kicked off with a presentation from the Trilateral Research team. After providing an overview of the project, they introduced the concept of Privacy Impact Assessment+ (PIA+). The PIA+ is a tool to promote privacy-by-design, as defined in Article 25 of the GDPR, in the development of new technologies. As such, the approach aligns with ethics-by-design, a movement that encourages technology developers to integrate ethical and privacy safeguards into every stage of development from the outset, rather than retroactively. A PIA+ is intended to help identify privacy, data protection, and social and ethical concerns, along with mitigation measures for each. The process should be performed at key stages in the development of a given tool, namely: (i) requirements gathering, design, and development; (ii) testing (pilots) and demonstrations; and (iii) evaluation and deployment.

The PIA+ overview was followed by a briefing on explainable AI (xAI). xAI aims to make the decision-making processes of AI systems understandable to end users. Without xAI, end users are unable to see how an AI tool reaches a given output (commonly called the “black box” problem), making human oversight virtually impossible and limiting the acceptability of these technologies in high-stakes settings such as healthcare. xAI is underpinned by four concepts:

  • Fairness: assists in the detection and mitigation of bias against different groups.
  • Accountability: enables users to trust and understand AI processes.
  • Privacy: supports the integration of anonymisation tools.
  • Human agency and oversight: allows users to make informed decisions.

As such, the xAI process enables transparency, oversight, and other safeguards, making AI technologies safer and more trustworthy.

Following these items, the team presented the table of risks identified within the project, grouped broadly under the themes of accountability, autonomy, transparency, and clinical effectiveness. These discussions set the stage for the breakout rooms on social and ethical issues and explainable AI, which are covered in the subsequent blogs.