Guidance recommendations for the future implementation of AI in the medical context

The novel total body scanner and AI Cognitive Assistant developed in the iToBoS project can provide diagnostic advantages to clinicians and contribute to improved care and outcomes for melanoma patients.

Given these potential benefits, the iToBoS solutions should continue to be developed and implemented beyond the project lifecycle. To ensure that any further development of the iToBoS solutions is carried out in an ethically and socially responsible manner, it is crucial that accountability for their socio-cultural and ethical impacts remains a priority.

The examination of the socio-cultural and ethical impacts of the iToBoS solutions demonstrated that these impacts are interlinked, and that addressing them will be an ongoing process requiring collaboration and cooperation across disciplines and perspectives.

Recommendations were made for the future development and use of the iToBoS solutions based on the exploration of five key socio-cultural and ethical considerations relevant to the project: privacy and data protection, autonomy, transparency, trust, and clinical effectiveness. A summary of these recommendations, intended to guide stakeholders in the future development and implementation of the iToBoS solutions, is presented here.

Privacy and Data Protection

  1. It is critical to keep a nuanced understanding of privacy in mind when engaging with stakeholders in relation to the iToBoS solutions, especially when considering patient perspectives and how their views on privacy may impact their decision-making.

  2. To respect the right to privacy when developing and implementing the iToBoS solutions, it is essential to maintain a strong regulatory environment and to guarantee compliance with data protection regulations. This must also be supported by allocating adequate resources to appropriately manage FAIR (Findable, Accessible, Interoperable, and Reusable) data requirements and to promote a culture of RRI (Responsible Research and Innovation).

  3. In order to alleviate the concerns that patients may have about the use of their health data within the iToBoS solutions, deployers should maintain their commitment to preserving patient privacy and continue to provide up-to-date information to patients on how the iToBoS solutions are governed, how and where their data will be stored, and how their data will and will not be used.

Autonomy

  1. Obtaining informed consent for the iToBoS solutions is not merely a box-ticking exercise; it is an important process which must prioritise patient autonomy and value the patient’s perspective. The explanatory element of this process is critical and requires information to be communicated clearly and concisely, in a manner that is culturally and socially sensitive and that considers factors such as language, age, ability, and education. Regarding the patient’s perspective, a patient’s decision-making can be affected by their views on privacy, their commitment to altruism, and their perception of risk and potential sense of desperation. Consideration must be given to any emotional distress a patient may be experiencing during this process, and psycho-social support should be made easily available to them.

  2. Informed consent frameworks need to be adapted to meet the challenge of establishing a minimum standard of AI literacy for patients when they are being asked to consent to their health data being shared with AI systems. This should be considered in conjunction with the measures implemented and support provided to address basic literacy and health literacy issues.

Transparency

  1. Explainable AI (XAI) must be defined in relation to the needs of the diverse, yet distinct, AI system stakeholders. It is necessary to develop tailored explainability strategies and techniques to communicate how AI solutions work and how decisions are reached. These strategies and techniques should be developed for the full lifecycle of the AI system’s implementation, including understanding how to differentiate between successful results and failures. Groundwork for this should be incorporated into medical training and supported by technical design and security measures.

  2. Co-creation engagement strategies should be leveraged among all parties, particularly patients and clinicians, in order to understand their explainability needs, assess AI literacy levels, and take varying perspectives into account. When designing for explainability, the following question should be answered: ‘What is being explained to me and why is it significant to me and my decision-making?’

Trust

  1. It must be recognised that trust is a foundational principle in healthcare; there must therefore be proactive engagement with patients and clinicians to build trust and to demonstrate how ethics has been embedded throughout the design and implementation of AI tools.

  2. The iToBoS solutions should be treated as a decision-making tool and should continue to be developed, implemented, and deployed in compliance with relevant and up-to-date legislation, regulations, and guidelines.

Clinical Effectiveness

  1. The iToBoS solutions must be integrated into existing care pathways cautiously and efficiently, to avoid exacerbating existing inequalities and inefficiencies within health systems. This requires input and ongoing support from a variety of stakeholders, including patient advocacy groups, national governments, regulators, and insurers.

  2. As AI tools become increasingly common in healthcare, the position of clinicians, and their concerns and obligations, must be considered. Clinicians may need further guidance from their professional regulators, clarity from their clinical indemnity providers, and further education and training to engage safely and efficiently with these technologies. As well as supporting clinicians, these measures will provide guidance and assurance for patients and the general public.