This course, supported by the iToBoS project, provides an overview of the concept of AI privacy, which helps build trust in AI, and explains how open source tools from IBM can both assess the privacy risk of AI-based solutions and help them adhere to relevant privacy requirements.
Learners will start with an overview of the AI privacy concept and then take a deep dive into three IBM open source toolkits: the AI Privacy Toolkit, the Differential Privacy Library, and the Adversarial Robustness Toolbox, which help assess and build machine learning models that preserve the privacy of their training data and comply with relevant data protection regulations.
The course is aimed at Analytics Leaders, Data Science Leaders, Practicing Data Scientists, Machine Learning Engineers, and AI Specialists, as well as anyone with an interest in AI trust and privacy who has the prerequisite knowledge.
Students should have a basic understanding of:
- The AI/machine learning workflow
- Data science
Access the course
If your work involves AI models or machine learning based on information about people, or if you simply find this field interesting, please check out our course, Accomplishing AI Privacy and Compliance with IBM Privacy Toolkits.