Round table on Ethical AI

The iToBoS project recently took part in a round table discussion on the ethical use of artificial intelligence (AI) in city infrastructure and planning, led by the Economics Program at the Center for Strategic and International Studies (CSIS).

Many different types of entities participated, including law firms, standardization bodies, city representatives, private-sector companies and even representatives from the Japanese government.

Many concerns regarding the use of AI in this type of system were discussed, ranging from cybersecurity threats, bias and model transparency to privacy, data-sharing practices and controls over the use of data. The data minimization principle was also mentioned specifically, raising concerns about potential over-collection and over-use of sensitive information. Another concern was the privatization of data and systems and the growing dependence on collaborations with third parties, multiple vendors and other stakeholders, which may lead to attempts to evade accountability.

Another major concern that was raised was the issue of domain shift, or geographical bias. Due to limited resources, AI systems tend to be developed with limited training data, which is typically confined to a certain population or area. This can easily lead to bias when the systems are applied to different populations or geographic locations, and existing standards seem to be lacking in this regard.

Current mitigation strategies such as audits and AI registries were mentioned, with the drawback that these are manual, time-consuming processes that tend to be performed only once, whereas AI systems are dynamic in nature; ongoing monitoring would therefore be a preferable approach. The need for standardization, as opposed to proprietary solutions, was also stressed.

New government mandates to create AI regulations or frameworks were discussed extensively. The EU Proposal for a Regulation on Artificial Intelligence and the NIST AI Risk Management Framework, which we will describe in more detail in a later post, were cited as possible game changers in this market. The difficulty of striking a balance between innovation and human rights was raised, with most participants agreeing that a risk-based approach, as opposed to very rigorous and specific guidelines, would be desirable for governmental oversight, and that example should be taken from domains that already use risk-based approaches, such as the medical and pharmaceutical industries.

Finally, the need to use AI not only to mimic or replace human behavior but to improve on it and help advance the status quo was brought up, a consideration that is largely absent from existing frameworks.

Almost all of these topics are relevant to many AI applications, not just transportation or smart cities, and they have particular relevance in the healthcare domain.

Abigail Goldsteen, IBM.