Anticipating and Responding to AI’s Unintended Consequences

This is the first long-term project to examine AI’s ‘unintended consequences’: the indirect (but impactful) harms associated with AI deployments. The objective is to develop a more holistic understanding of AI’s overall impacts and harms, thereby facilitating more effective human rights-based decision-making and informing legal and policy approaches to the development and deployment of AI. The overall project is built around two overarching research strands.

The first is an empirical examination of AI’s ‘unintended consequences’, specifically unintended human rights and societal harms. Phase 1 of the FLF focused on understanding the ‘chilling effects’ of AI-linked surveillance (such as facial recognition technology): whether, and how, individuals modify their behaviour in light of surveillance, impacting their ability to freely develop their identity and to participate in political processes. Significant empirical work has been undertaken, and Phase 2 will further develop this analysis. The next phase of research aims to get ahead of the curve by anticipating the likely direction of travel with respect to governmental uses of AI in decision-making processes. This centres on the development of a computer model to simulate a future interconnected and interdependent AI decision-making ecosystem (where outputs from AI-assisted decisions, say relating to social welfare, are used to update an individual’s profile, which then feeds into a range of further AI-assisted decisions – relating to health care, child protection, criminal justice, etc. – with these outputs in turn informing additional future decisions). The concern is that such an ecosystem may exacerbate existing discrimination and may, over time, contribute to stratification within society. The testing made possible by this model provides a rare opportunity to anticipate and plan for, rather than react to, AI developments. This work will be conducted in partnership with Open Lab Athens, Lighthouse Reports and Amnesty International. The computer model will be made available open source, so that organisations around the world can adapt it to replicate local systems, and to inform advocacy and policy approaches.
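The feedback loop described above can be illustrated with a minimal sketch. This is not the project’s actual model; it is a hypothetical toy simulation in which a single shared risk profile is updated by each ‘AI-assisted decision’ across domains, showing how small initial differences between individuals can compound into sharply divergent outcomes:

```python
# Illustrative toy sketch only (not the project's model): each AI-assisted
# decision reads a shared risk profile and writes its outcome back into it,
# so outcomes in one domain (e.g. social welfare) shape later decisions in
# others (health care, child protection, criminal justice).
from dataclasses import dataclass, field

DOMAINS = ["social_welfare", "health_care", "child_protection", "criminal_justice"]

@dataclass
class Individual:
    risk_score: float                      # shared profile value across domains
    history: list = field(default_factory=list)

def assess(person: Individual, domain: str, threshold: float = 0.5) -> bool:
    """Stand-in 'AI-assisted decision': flag the person if their shared risk
    score exceeds the threshold, then feed the outcome back into the profile
    (a flag raises the score; a clean pass slightly lowers it)."""
    flagged = person.risk_score > threshold
    person.risk_score += 0.1 if flagged else -0.02
    person.risk_score = min(max(person.risk_score, 0.0), 1.0)  # keep in [0, 1]
    person.history.append((domain, flagged))
    return flagged

def simulate(initial_score: float, rounds: int = 5) -> Individual:
    """Run several rounds of interconnected decisions across all domains."""
    person = Individual(risk_score=initial_score)
    for _ in range(rounds):
        for domain in DOMAINS:
            assess(person, domain)
    return person

# Two individuals with only slightly different starting profiles diverge
# sharply once outcomes feed back across domains.
low = simulate(0.45)
high = simulate(0.55)
```

Even this crude loop exhibits the stratification concern: the individual starting just above the threshold is flagged in every domain and is pushed to the maximum score, while the one starting just below is never flagged at all.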

The second strand uses this knowledge to rethink existing approaches to international human rights law, adapting them to the AI era. This adaptation is key to the development of appropriate legal and policy responses, and is explicitly interdisciplinary. A key focus is on better incorporating surveillance-related chilling effects, and protections for the free development of individuals’ identity (including the concept of ‘identity as a whole’), into decision-making processes. This updated approach to human rights law centres on evidence-based assessments of potential utility and potential harm.

Lighthouse Reports and Amnesty International will, both independently and in collaboration, conduct investigations into existing AI systems and the ‘AI ecosystem’ model, and develop linked reporting and advocacy. The open source and modifiable nature of the computer model will facilitate impact, as it can be used by civil society and other actors to conduct their own research, investigations, and advocacy. This pursues a specific social justice objective, by making cutting-edge technology accessible. The continued deployment of biometric surveillance technologies (such as facial recognition) and ever-increasing digital surveillance ensure that the project’s chilling effects research remains strategically relevant. Work to secure impact in this area will continue with project partner, the UN Special Rapporteur on Freedom of Assembly.

This project is a continuation of the original four-year project ‘What does Artificial Intelligence Mean for the Future of Democratic Society?’, with further details below.

This project is funded by a seven-year UKRI Future Leaders Fellowship award of over £2,000,000.

What does Artificial Intelligence Mean for the Future of Democratic Society?

Examining the societal impact of AI and whether human rights can respond 

Governments around the world are already using AI to help make important decisions that affect us all. This data-driven approach can offer key benefits, but it also relies on the ever-increasing collection of data on all aspects of our personal and public lives, representing both a step change in the information the state holds on us all, and a transformation in how that information is used.

This project looks at the unintended consequences associated with this level of surveillance – the impact on how individuals develop their identity and how democratic society flourishes. Will a chilling effect emerge that changes individual behaviour? And what might the impact of this be? Will the knowledge that our activities are tracked and then translated into government decisions affect how we, for example, explore our sexual identity or develop our political opinions? Will we all be pushed towards the status quo in fear of the consequences of standing out? Ultimately the project seeks to examine what the effect of this will be on the well-being of our democracy.

The project is interdisciplinary, working across human rights law, sociology and philosophy. A key component of the project examines lived experience of surveillance in the context of wider discussions about how individuals and societies flourish. This research will then be used to inform the development of international human rights law, in order to move it from the analogue to the digital age.

The project in more detail:

This research examines the impacts of States’ use of AI in decision-making processes on how individuals and societies evolve and develop, and what this means for democratic society. Understanding these impacts is essential so that effective guidance can be developed that allows States to take advantage of the significant potential inherent in AI, while protecting those factors essential to a functioning democracy and preventing human rights harm.

AI has the power to radically transform State activity, redefining our understanding of how a State functions and delivers services, and how it interacts with its citizens. A key development in this regard is the incorporation of AI tools into State decision-making processes. To be effective, these tools are dependent upon significantly increased surveillance by State and non-State actors: the data obtained through surveillance is analysed using AI in order to make individually-tailored decisions. This represents a step-change in the level of insight the State has into individuals’ day-to-day lives, and in its ability to use this information to determine those individuals’ life choices. This may exert a profound impact on how individuals, and society as a whole, develop. Will individuals be afraid to experiment, or to seek out alternative ideas or ways of life, because they are worried that they will be categorised on this basis and their future life choices restricted? Will this in turn lead to the stagnation of democratic society?

AI has enormous potential. It can be used to transform how a State delivers services, and if used appropriately can make a real contribution to the development of society, and the protection of human rights. However, it is imperative that the broader impacts of AI on individuals and society be understood before AI becomes pervasive in decision-making processes, so that appropriate regulatory and policy responses can be developed, and human rights protections ensured.

This research focuses on the inadvertent, or unintentional, impacts associated with State adoption of AI technologies. There is, of course, clear potential for AI to be misused for repressive purposes. Of interest here, however, is States’ use of AI in pursuit of legitimate objectives. The unintended consequences associated with States’ uses of AI under these circumstances may be less visible but equally dramatic.

Human rights law provides the framework underpinning this research. Although it must be reconceptualised to respond to the digital age (a key research objective), it provides the most effective means of identifying harm, resolving competing interests, and providing regulatory guidance.

The principal objective underpinning this interdisciplinary research is the development of future-oriented human rights approaches to regulate States’ use of AI in decision-making processes, and to ensure that AI serves, rather than undermines, societal objectives. This will require in-depth research across law, human rights, philosophy, and sociology. Initial research will investigate factors essential to individual and societal development, how these relate to democratic functioning, and how they are impacted by States’ use of AI. Human rights law itself must then be reconceptualised, to ensure that it is capable of engaging with these factors, and protecting them in the digital age.

State agencies are beginning to incorporate AI technologies, and the utilisation of AI will increase exponentially over the coming years. Surveillance and AI-assisted analytical tools are deployed across all areas of State activity, from social welfare to child protection and healthcare. To examine the democratic effects where they are most visible in the short term, however, research will initially focus on State activity related to law enforcement and counter-terrorism, examining the use of AI by police and intelligence agencies.