Intro to AI Privacy

There is a known tension between the need to analyze personal data and the need to preserve the privacy of data subjects, especially in the health domain.

Many data protection regulations, including the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), set out strict restrictions and obligations on the collection and processing of personal data.

Many data processing tasks nowadays involve machine learning (ML), including training ML models on personal data. In recent years, several attacks have been developed that are able to infer sensitive information from trained models, including membership inference attacks, model inversion attacks and attribute inference attacks. These attacks may reveal whether an individual was part of a model's training set [1], infer possibly sensitive properties of the training data [2], or even reconstruct representative samples from the training set [3]. This has led to the conclusion that machine learning models themselves should, in some cases, be considered personal information and be protected as such.
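To make the membership inference risk concrete, here is a minimal, hypothetical sketch of a loss-threshold attack (a much-simplified variant of the shadow-model attack in [1]), written with scikit-learn. The dataset, model and threshold choice are illustrative assumptions, not a prescribed methodology.

```python
# Minimal loss-threshold membership inference sketch (illustrative only).
# Intuition: models tend to assign lower loss to samples they were trained on,
# so an attacker who can query the model may guess membership from the loss.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def per_sample_loss(model, X, y):
    # Cross-entropy of the true class for each sample.
    probs = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(probs, 1e-12, None))

loss_members = per_sample_loss(model, X_train, y_train)     # training samples
loss_nonmembers = per_sample_loss(model, X_test, y_test)    # unseen samples

# Guess "member" whenever the loss is below the overall median loss.
all_losses = np.concatenate([loss_members, loss_nonmembers])
threshold = np.median(all_losses)
guesses = all_losses < threshold
truth = np.concatenate([np.ones(len(loss_members)), np.zeros(len(loss_nonmembers))])

# Accuracy well above 50% means the model leaks membership information.
print("membership inference accuracy:", (guesses == truth).mean())
```

An overfitted model will typically push this attack accuracy well above random guessing, which is exactly the kind of signal used to quantify a model's privacy risk.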

In 2019, the UK Information Commissioner's Office (ICO) published an AI Auditing Framework which specifically addresses purpose limitation and data minimization, among many other considerations. In 2020, the European Parliament published a study on the impact of the GDPR on artificial intelligence, again highlighting purpose limitation and data minimization in particular. In 2021, UNESCO published a draft Recommendation on the Ethics of Artificial Intelligence, naming privacy and data protection, privacy by design and privacy impact assessments among the recommended practices, and is currently designing an ethical impact assessment (EIA) tool for AI. Also in 2021, the European Commission proposed a draft regulation on artificial intelligence (the AI Act), becoming the first governmental body in the world to issue a draft regulation aimed specifically at the development and use of AI. In the United States, the National Institute of Standards and Technology (NIST), acting on a directive from Congress, recently published an initial version of its AI Risk Management Framework (RMF).

Recent surveys indicate that organizations are struggling to build AI solutions that involve personal data, and that the security and privacy of data used for ML, along with building trustworthy and ethical AI, are among the greatest challenges they face with machine learning. Moreover, analyst reports predict that privacy-preserving techniques will unlock up to 50% more personal data for model training and enable 70% more AI collaborations in industry.

To address this growing need, several areas of research have emerged in recent years:

  1. Privacy risk assessment of models - this is the first step towards understanding which models actually pose a privacy risk, comparing model alternatives based on privacy criteria (and not only accuracy), and prioritizing models for further action. Risk assessment can be either theoretical or empirical; in either case, a quantitative approach is critical for scaling and automating this complex and time-consuming task. An empirical assessment can, for example, run attacks such as the membership inference sketch above against a model and report the attack's success rate.
  2. Privacy-preserving AI technologies - methods are being developed for generating ML models that leak less information about their training data and can therefore be used and shared more freely without posing a considerable privacy risk. These include model anonymization and training models with differential privacy. Model-guided anonymization anonymizes the model's training data and trains the model on the anonymized data, but performs the anonymization guided by an initial model, so that it is least harmful to the model's accuracy. Differential privacy adds specially crafted noise either to the training data itself or to the model training procedure, yielding a model that guarantees no specific sample can be identified within the training data (a minimal sketch follows this list). Each of these methods provides a different tradeoff in terms of privacy guarantee, model accuracy, performance and ease of use.
  3. Compliance with data protection regulations for AI models - as mentioned earlier, ML models are not exempt from data protection principles such as purpose limitation, data minimization and the right to be forgotten. Specially tailored solutions for applying these principles in the ML domain are therefore being researched and implemented, both in academia and in industry (a simple data-minimization sketch also follows this list).
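As a rough illustration of the differential-privacy approach, the sketch below trains a differentially private naive Bayes classifier with IBM's open-source diffprivlib library and compares it to a non-private baseline. It assumes diffprivlib's scikit-learn-style API (the epsilon and bounds parameters); exact signatures may differ between versions, and the dataset is only a stand-in.

```python
# Differentially private training sketch (illustrative only).
# Assumes IBM's diffprivlib and its scikit-learn-style API; parameter names
# such as `epsilon` and `bounds` may vary between library versions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from diffprivlib.models import GaussianNB as DPGaussianNB

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Non-private baseline for comparison.
baseline = GaussianNB().fit(X_train, y_train)

# Differentially private model: calibrated noise is added during training so
# that the resulting model does not depend too strongly on any single sample.
# In practice the per-feature bounds should come from domain knowledge rather
# than from the private data; they are computed from the data here only for brevity.
bounds = (X_train.min(axis=0), X_train.max(axis=0))
dp_model = DPGaussianNB(epsilon=1.0, bounds=bounds).fit(X_train, y_train)

print("baseline accuracy:        ", baseline.score(X_test, y_test))
print("DP accuracy (epsilon=1.0):", dp_model.score(X_test, y_test))
```

Lowering epsilon strengthens the privacy guarantee but typically costs accuracy, which is exactly the tradeoff mentioned above.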
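In the same spirit, data minimization for an existing model can be approximated empirically: remove or generalize input features and keep only those the model genuinely needs. The hypothetical sketch below uses simple feature removal with scikit-learn; dedicated toolkits apply more principled, model-guided generalization instead.

```python
# Naive data-minimization check (illustrative only): drop each feature in turn
# and measure the accuracy cost, flagging features the model can do without.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
full_score = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=5).mean()

expendable = []
for i in range(X.shape[1]):
    X_reduced = np.delete(X, i, axis=1)   # model trained without feature i
    score = cross_val_score(LogisticRegression(max_iter=5000), X_reduced, y, cv=5).mean()
    if full_score - score < 0.005:        # accuracy cost under half a point
        expendable.append(i)

print(f"baseline accuracy: {full_score:.3f}")
print(f"{len(expendable)} of {X.shape[1]} features could be dropped with negligible accuracy loss")
```

Collecting (or retaining) only the features that actually contribute to the model's task is one concrete way to operationalize the data minimization principle for ML.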

To learn more about these topics, you are welcome to check out our free online course on AI Privacy and Compliance (https://www.ibm.com/training/course/W7129G). It places AI privacy in the wider context of trustworthy (or responsible) AI, and covers what privacy means for AI models and why it differs from data privacy, privacy risk assessment of models, mitigating privacy risks with technologies such as model anonymization and differential privacy, and additional compliance aspects such as data minimization and the right to be forgotten. It also covers some existing open-source toolkits and how to use them to assess existing models or create privacy-preserving ones.

More information on AI privacy technologies is also available at https://aip360.mybluemix.net/.

Abigail Goldsteen, IBM.

[1] Shokri, R., Stronati, M., Song, C., Shmatikov, V.: Membership inference attacks against machine learning models. In: IEEE Symposium on Security and Privacy (S&P). pp. 3–18. San Jose, CA, USA (May 2017)

[2] Fredrikson, M., Lantz, E., Jha, S., Lin, S., Page, D., Ristenpart, T.: Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing. In: USENIX Security Symposium. pp. 17–32 (2014)

[3] Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: ACM SIGSAC Conference on Computer and Communications Security (CCS) (2015)