Applying Artificial Intelligence Privacy Technology in the Healthcare Domain

The scientific paper "Applying Artificial Intelligence Privacy Technology in the Healthcare Domain", supported by the iToBoS project, has been published.

Regulations set out strict restrictions on processing personal data. ML models must also adhere to these restrictions, as it may be possible to infer personal information from trained models. In this paper, we demonstrate the use of two novel AI Privacy tools in a real-world healthcare application.

There is a known tension between the need to analyze personal data and the need to preserve the privacy of data subjects, especially in the health domain. Data protection regulations, such as the GDPR, set out strict restrictions on the collection and processing of personal data. Since personal information may be derived from machine learning (ML) models using inference attacks, the models themselves must also adhere to these requirements. Many techniques for preserving privacy in ML models have been developed recently, but few of them have been applied in real-world settings. We demonstrate the use of two such tools in a real system for early detection of melanoma: ML model anonymization and data minimization. We describe their incorporation into the system's data flow and architecture and show initial results on a representative medical dataset.
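Broadly, data minimization reduces the amount or precision of the personal data the system uses to what is actually needed for accurate predictions, while model anonymization trains the model on anonymized data so that the model itself does not leak identifying information. As a rough illustration of the data-minimization idea only, the following sketch (not code from the paper; it uses scikit-learn, a stand-in dataset, and a simple per-feature binning strategy chosen purely for this example) coarsens the input features step by step and records the coarsest generalization whose accuracy stays within a chosen tolerance of the baseline.

    # Conceptual sketch only -- not the paper's code. It illustrates the
    # data-minimization idea on a stand-in dataset (scikit-learn's breast-cancer
    # data) with a simple per-feature binning strategy chosen for this example.
    from sklearn.datasets import load_breast_cancer
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import KBinsDiscretizer
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Reference model trained on the full-precision features.
    model = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
    baseline = accuracy_score(y_test, model.predict(X_test))

    # Try increasingly coarse feature generalizations (fewer bins = less precise
    # personal data) and record the coarsest one whose accuracy stays within a
    # chosen tolerance of the baseline.
    tolerance = 0.02
    coarsest_ok = None
    for n_bins in (32, 16, 8, 4, 2):
        binner = KBinsDiscretizer(n_bins=n_bins, encode="ordinal", strategy="uniform")
        gen_model = DecisionTreeClassifier(max_depth=5, random_state=0).fit(
            binner.fit_transform(X_train), y_train)
        acc = accuracy_score(y_test, gen_model.predict(binner.transform(X_test)))
        print(f"{n_bins:2d} bins per feature: accuracy {acc:.3f} (baseline {baseline:.3f})")
        if acc >= baseline - tolerance:
            coarsest_ok = n_bins  # still acceptable at this level of generalization

    print(f"Coarsest acceptable generalization: {coarsest_ok} bins per feature")

In the paper, the same kind of trade-off is evaluated on a representative medical dataset rather than this toy data.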

Acknowledgments

This work has been supported by the iToBoS project funded by the European Union’s Horizon 2020 research and innovation programme, under grant agreement No 965221.

Find out more at https://ebooks.iospress.nl/doi/10.3233/SHTI220410