The changing landscape of AI regulation

Until a few years ago, regulations and standards around the handling and use of certain types of data already existed, including the General Data Protection Regulation (GDPR)[1] in Europe, HIPAA[2] in the United States, the industry-wide PCI-DSS standard[3], the Canadian Consumer Privacy Protection Act (CPPA)[4] and many more.

Many data processing tasks nowadays entail complex analysis, such as machine learning and Artificial Intelligence (AI). Recently, regulatory authorities, as well as standardization bodies, have started to recognize that these laws and regulations around data are not enough, and a whole new landscape of proposed regulations and frameworks specific to AI has started to emerge. Many people see these as game changers.

It started back in 2019 with the British Information Commissioner’s Office AI Auditing Framework[5], whose two main components are governance and accountability, and AI-specific risk areas. It specifically mentions purpose limitation and data minimization, fairness, transparency, accountability and many other considerations.

Then came a 2020 study of the European Parliament on the impact of GDPR on artificial intelligence[6]. It discusses the tensions and proximities between AI and data protection principles, in particular purpose limitation and data minimization, and finds that although AI is not explicitly mentioned in the GDPR, many of its provisions are relevant to AI.

In April 2020, the Canadian federal government began requiring algorithmic impact assessments for all automated decision-making systems developed for or procured by the federal government.

In 2021, UNESCO (The United Nations Educational, Scientific and Cultural Organization) published a draft Recommendation on the Ethics of Artificial Intelligence[7], mentioning privacy and data protection, privacy by design and privacy impact assessments as some of the recommended practices.

Also in 2021, the European Commission proposed a draft regulation on trust in AI[8], becoming the first governmental body in the world to issue a draft regulation aimed specifically at the development and use of AI, with expected penalties even higher than those imposed by GDPR. The new AI regulation provides a risk-based approach, laying out different risk levels of AI applications and outright banning the most high-risk ones, such as those that manipulate human behaviour or allow ‘social scoring’ by governments. Other domains such as employment, safety, education, law enforcement and justice are considered high-risk but still allowed, subject to strict obligations, including risk assessment, logging, documentation, human oversight and security. AI systems interacting with users, such as chatbots, are also required to inform users that they are interacting with a machine.

The U.S. administration is also actively debating the possibility of U.S. artificial intelligence regulation, with several proposed bills already submitted or being shaped, including the Artificial Intelligence Initiative Act[9]. Several governmental bodies, including the Department of Commerce (DoC), the Federal Trade Commission (FTC), the Food and Drug Administration (FDA), the National Security Commission on Artificial Intelligence and the Government Accountability Office (GAO), are working on reports or recommendations for overseeing and/or regulating the use of AI, especially in highly sensitive domains such as health. In 2021, a precedent-setting FTC ruling forced an AI company to delete its machine learning models after it unlawfully collected user data[10]. The FTC has published guidance on using artificial intelligence and algorithms[11], calling for AI tools to be transparent, explainable, fair and empirically sound, and is now pursuing federal AI regulation, announcing in late 2021 that it was considering a rulemaking process on privacy and artificial intelligence[12].

The National Institute of Standards and Technology (NIST), part of the DoC, is currently working, on a directive from the U.S. Congress, on an AI Risk Management Framework (RMF)[13], an initial draft of which was recently published for public comment. The framework is aimed at better managing the risks to individuals, organizations and society associated with AI. It aims to foster the development of innovative approaches to address characteristics of trustworthiness, including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias. It is being built through an open and collaborative process involving stakeholders from the private and public sectors.

Finally, UNESCO is in the process of designing an ethical impact assessment (EIA) tool for AI, the first EIA tool developed by an international organization that is specifically tailored to AI systems. It is currently gathering feedback from developers and users of existing EIA tools, including some from IBM, to help guide the design and development of the tool.

There are many more such efforts and initiatives. For a full list of current AI policies and strategies worldwide, see

My main takeaway: “AI protection” is apparently becoming the new “data protection”, and slowly but surely, everyone will need to start conforming to this new norm.

Abigail Goldsteen, IBM.