AI in government: for whom, by whom?


While AI may make cities’ planning and governance more efficient, it needs to be regulated to ensure it complies with legal rights and protections.

Algorithms, machine learning and, more broadly, artificial intelligence (AI) promise to introduce astounding levels of efficiency to cities’ monitoring of citizens and infrastructure, their planning and governance, and their service response and decision-making. While we have yet to automate all of our planning and resource allocation decisions, advances in machine learning and neural networks, as well as our ability to collect data through ever more networked sensors, are bringing automation at least to certain parts of our civic problem-solving processes. One well-known and somewhat contentious example is the use of predictive crime analytics to dispatch police units proactively, in anticipation of crime incidents. These tools may be branded, and even sold, under the catch-all name of artificial intelligence and packaged in smart city solutions such as the NVIDIA Metropolis platform.

However optimistic we are about the potential for AI and algorithms to “do good,” their positive social impact remains far from guaranteed without adequate regulation to ensure social accountability, reduction of harm, and compliance with legal rights and protections.

The use of technologies that automate decision-making is a problem not just for government but also for the communities and citizens affected by such technologies. It is of great import as well to the civil society organizations that work on behalf of citizens to create better living environments. Arguably, without access to data, knowledge of algorithms and sound regulatory expertise, civil society (and even governments) will struggle to engage with and influence future cities run by algorithms. Bringing in computer scientists, legal experts, mathematicians and software engineers just to help civil society understand a piece of software is simply not feasible. Yet organizations that do not expand their expertise beyond traditional realms of knowledge to include open data will see their power to negotiate the next regulatory paradigm eroded.

Others have already critiqued the concept of the smart city by raising issues such as data ownership (when smart city solutions are owned by the private sector) and legal liability (when automated decisions result in harm).

The City of New York has recently made a positive step in this area, with a bill mandating the creation of a task force to make “recommendations on how information on agency automated decision systems may be shared with the public and how agencies may address instances where people are harmed by agency automated decision systems.” This task force (notably composed of experts from a variety of sectors, including government, academia, the private sector and civil society) is to investigate potential bias resulting from algorithms used in city departments. Transparency in code and auditing of algorithmic outputs will not, on their own, guarantee success: testing systems that rely on massive datasets can be very resource intensive, and it would require robust data sampling to verify that they are working as intended.
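
As a rough illustration of what such an output audit could involve, the sketch below samples records through a stand-in decision system and compares outcome rates across groups. This is a minimal sketch only: the decision_system function, the field names and the threshold are hypothetical placeholders, not drawn from any actual city system or from the New York task force’s work.

```python
# A minimal sketch of an algorithmic output audit: sample input records,
# run them through a decision system, and compare positive-decision rates
# across groups. All names and values here are hypothetical.
import random
from collections import defaultdict

def decision_system(record):
    # Stand-in for an opaque automated decision system; here it simply
    # flags records whose risk score exceeds a fixed threshold.
    return record["risk_score"] > 0.7

def audit_outcome_rates(records, group_key="neighbourhood"):
    """Estimate the positive-decision rate per group from a sample."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        if decision_system(record):
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

# Simulated sample; in a real audit, records would be drawn from the
# system's actual inputs under a documented sampling plan.
sample = [
    {"neighbourhood": random.choice(["A", "B"]),
     "risk_score": random.random()}
    for _ in range(10_000)
]

print(audit_outcome_rates(sample))
# Large gaps in rates between groups would warrant closer review.
```

Even this toy example shows why auditing is resource intensive: a credible result depends on drawing a large, representative sample of real inputs, which the auditing body must be empowered to obtain.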

Bias in data that cities collect as inputs into software platforms and decision support systems can lead to bias in the resource allocation decisions that come out of these systems. For instance, data can be spatially biased: neighbourhoods with less Internet access or lower income and education levels may file fewer requests through a 311 customer service centre and may therefore receive less attentive infrastructure maintenance. If ethnic and income groups are clustered in such neighbourhoods, and if the city’s processes rely on citizen reporting, entire minority groups can be underserved. Algorithmic accountability therefore also involves regulating data inputs, to ensure that the decisions algorithms make are based on data representative of the entire city.
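
As a rough illustration, the sketch below compares per-capita 311 request rates across neighbourhoods to flag possible under-reporting before the data feed an allocation model. The neighbourhood names and figures are invented for illustration; a real check would join actual 311 extracts with census population counts.

```python
# A minimal sketch of screening 311 data for spatial reporting bias:
# compute requests per resident by neighbourhood and flag low outliers.
# All data below is hypothetical.
from statistics import median

# (neighbourhood, 311 requests filed, population) -- invented values
neighbourhoods = [
    ("Downtown",  5200, 40_000),
    ("Eastside",   900, 35_000),
    ("Northgate", 3100, 28_000),
    ("Riverview",  450, 30_000),
]

rates = {name: requests / pop for name, requests, pop in neighbourhoods}
city_median = median(rates.values())

for name, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    # A low rate is a prompt for investigation (e.g., cross-checking
    # against inspection records), not proof of bias on its own: the
    # neighbourhood may simply have fewer problems to report.
    flag = "  <- possible under-reporting" if rate < 0.5 * city_median else ""
    print(f"{name}: {rate:.4f} requests per resident{flag}")
```
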

Algorithms used in government affect everybody and are a concern for politicians, business, civil society and citizens. “We,” whoever we may be, are likely to feel comfortable with software automation only if we are brought to the table to help define the rules. Governments are already beginning to address these rules: algorithms have been on Canadian governments’ radar for a number of years, with the Office of the Privacy Commissioner of Canada identifying consent as one key regulatory mechanism.

One step we can take is to create clear regulatory structures and legal frameworks for the data we knowingly use and share. Control over data (and their use) will also help determine outcomes. Recently, OpenNorth worked with the Partenariat du Quartier des Spectacles, a not-for-profit organization that brings together some 60 members active in Montreal’s entertainment district. The goal of the project was to promote data sharing among these partners, which include businesses, cultural venues and nonprofit organizations. The partners were initially wary of the whole idea, and with good cause: What are the legal ramifications of sharing data? What if sharing data proves detrimental to competition? With enough facilitation, a common agreement on data governance was reached, one that increased understanding of and confidence in the mutual benefits of responsible data sharing, and in its stronger potential when implemented collectively. We believe the same approach is applicable to the regulation of algorithms used by government.

Regulations are important tools for both enabling and limiting activity. To give ourselves the space to examine the potential opportunities and failures of a given regulatory approach or technology, it may be useful to develop regulations experimentally. This could start with regulatory “sandboxes” that provide a safe and controlled environment within which to test new algorithms, under government and citizen oversight. For example, the Monetary Authority of Singapore has established such a controlled environment for financial technology, which is not surprising given the widespread use and impact of trading algorithms in financial markets.

The debate over the use of AI and algorithms goes beyond questions of control and accountability. We face a struggle over who gets to influence and shape the urban environments of the future. Arguably, without an understanding of the processes and algorithms used in current and future forms of city governance, we (citizens, civil society and government) will remain disempowered end-users of software. The important question is not whether algorithms will do good, but rather whom they will serve and who will get to take part in shaping them.

Originally posted on Policy Options