What Does Artificial Intelligence Mean for the Future of Democratic Society?
Examining the societal impact of AI and whether human rights can respond
Governments around the world are already using AI to help make important decisions that affect us all. This data-driven approach can offer key benefits, but it also relies on the ever-increasing collection of data on all aspects of our personal and public lives, representing both a step-change in the information the State holds on us all, and a transformation in how that information is used.
This project looks at the unintended consequences associated with this level of surveillance – the impact on how individuals develop their identity and how democratic society flourishes. Will a chilling effect emerge that changes individual behaviour? And what might the impact of this be? Will the knowledge that our activities are tracked and then translated into government decisions affect how we, for example, explore our sexual identity or develop our political opinions? Will we all be pushed towards the status quo for fear of the consequences of standing out? Ultimately, the project seeks to examine the effect of all this on the well-being of our democracy.
The project is interdisciplinary, working across human rights law, sociology and philosophy. A key component of the project examines the lived experience of surveillance in the context of wider discussions about how individuals and societies flourish. This research will then be used to inform the development of international human rights law, in order to move it from the analogue to the digital age.

This project is funded by a four-year UKRI Future Leaders Fellowship award of over £1,000,000.
The project in more detail:
This research examines the impacts that States’ use of AI in decision-making processes has on how individuals and societies evolve and develop, and what this means for democratic society. Understanding these impacts is essential so that effective guidance can be developed that allows States to take advantage of the significant potential inherent in AI, while protecting those factors essential to a functioning democracy and preventing human rights harm.
AI has the power to radically transform State activity, redefining our understanding of how a State functions and delivers services, and how it interacts with its citizens. A key development in this regard is the incorporation of AI tools into State decision-making processes. To be effective, these tools depend upon significantly increased surveillance by State and non-State actors: the data obtained through surveillance is analysed using AI in order to make individually tailored decisions. This represents a step-change in the level of insight the State has into individuals’ day-to-day lives, and in its ability to use this information to determine those individuals’ life choices. This may exert a profound impact on how individuals, and society as a whole, develop. Will individuals be afraid to experiment, or to seek out alternative ideas or ways of life, because they are worried that they will be categorised on this basis and their future life choices restricted? Will this in turn lead to the stagnation of democratic society?
AI has enormous potential. It can be used to transform how a State delivers services and, if used appropriately, can make a real contribution to the development of society and the protection of human rights. However, it is imperative that the broader impacts of AI on individuals and society be understood before AI becomes pervasive in decision-making processes, so that appropriate regulatory and policy responses can be developed, and human rights protections ensured.
This research focuses on the inadvertent, or unintentional, impacts associated with State adoption of AI technologies. There is, of course, clear potential for AI to be misused for repressive purposes. Of interest here, however, is States’ use of AI when deployed in pursuit of legitimate objectives. The unintended consequences associated with States’ uses of AI under these circumstances may be less visible but equally dramatic.
Human rights law provides the framework underpinning this research. Although it must be reconceptualised to respond to the digital age (a key research objective), it provides the most effective means of identifying harm, resolving competing interests, and providing regulatory guidance.
The principal objective underpinning this interdisciplinary research is the development of future-oriented human rights approaches to regulate States’ use of AI in decision-making processes, and to ensure that AI serves, rather than undermines, societal objectives. To do so will require in-depth research across law, human rights, philosophy, and sociology. Initial research will investigate the factors essential to individual and societal development, how these relate to democratic functioning, and how they are affected by States’ use of AI. Human rights law itself must then be reconceptualised, to ensure that it is capable of engaging with these factors and protecting them in the digital age.
State agencies are beginning to incorporate AI technologies, and their use of AI is expected to increase significantly over the coming years. Surveillance and AI-assisted analytical tools are deployed across all areas of State activity, from social welfare to child protection and healthcare. To examine the democratic effects where they are most visible in the short term, however, research will initially focus on State activity related to law enforcement and counter-terrorism, examining the use of AI by police and intelligence agencies.