
Will AI replace policymakers?

Augmenting policymakers' intelligence could be a significant asset in addressing complex global challenges.

Written by:
Dr María Pérez-Ortiz

Would you vote for an algorithm? More specifically, would you consider reducing the number of MPs in your parliament and giving those seats to an AI algorithm with access to your data, tasked with maximising your interests? This was one of the questions posed to over 2,500 citizens from 11 countries in a 2020 survey. The results revealed that a majority of Europeans (51%) supported reducing the number of parliamentarians in their countries and replacing them with an AI algorithm.

This question feels ever more pertinent given the advent of AI tools such as large language models, which, at least superficially, appear capable of performing any language task imaginable: drafting contracts, composing poems, suggesting travel destinations, and even generating preliminary text for government policy documents (see the 'Redbox' AI tool used for ministerial efficiency in the UK).

As such, there is considerable speculation about how AI might transform fields including education, software development, and journalism. But what about AI’s potential impact on the complex machinery of government and policy?

The rise of algorithms in policymaking

More than ever, the question of whether AI could support policymaking has arisen in governments around the globe. A 2023 report showed that over a third (37%) of UK government bodies were actively using AI, with a further 36% either piloting (25%) or planning (11%) its use.

One reason for this is that policymakers face rapidly changing, complex, global and interconnected challenges, which are difficult to understand and tackle without relevant datasets, scientific evidence and scenario-analysis tools. Consider climate change, where scientists rely on computer simulations to forecast future climate scenarios and inform policy decisions, as evidenced by the consistent use of these tools across the last 20 years of Intergovernmental Panel on Climate Change (IPCC) reports. These simulations involve complex calculations often beyond the cognitive capacities of human minds. In the same vein, the Covid-19 pandemic showed how humans can struggle to imagine the effects of the exponential growth of a virus, making computational simulations necessary to fully comprehend its potential impacts and the effects of different policies. Such simulations are a way of augmenting our minds, helping us to understand the potential consequences of our actions and responsibly envision and construct the future.
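To make the point concrete, here is a minimal, purely illustrative sketch in Python of the kind of scenario comparison such simulations enable. All the numbers (initial cases, growth rates, the hypothetical 'contact-reduction policy') are invented for this example and do not come from any real epidemic model.

```python
# A minimal, purely illustrative sketch of scenario analysis:
# comparing unchecked exponential growth of infections with a
# hypothetical policy that reduces the transmission rate. All
# parameters are made up; real epidemic models (e.g. SIR) are
# far richer than this.

def simulate_cases(initial_cases: float, growth_rate: float, days: int) -> list[float]:
    """Project daily case counts under a constant daily growth rate."""
    cases = [initial_cases]
    for _ in range(days):
        cases.append(cases[-1] * (1 + growth_rate))
    return cases

baseline = simulate_cases(initial_cases=100, growth_rate=0.20, days=30)  # no intervention
policy   = simulate_cases(initial_cases=100, growth_rate=0.05, days=30)  # hypothetical contact-reduction policy

print(f"Day 30, no intervention: {baseline[-1]:,.0f} cases")
print(f"Day 30, with policy:     {policy[-1]:,.0f} cases")
# Exponential growth means even modest changes in the growth rate
# produce dramatically different outcomes within weeks, exactly the
# kind of effect humans find hard to intuit.
```

Even this toy projection makes visible what intuition misses: a seemingly modest change in the daily growth rate separates a manageable outbreak from an overwhelming one within a month.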

Historically, scientists built these simulations. Recently, however, AI has started to take on this role and further expand the uses of algorithms in the policy arena. Applications abound throughout the policymaking cycle: analysing vast amounts of social media data to identify and tackle public concerns, simulating policy instruments and their broader implications, surfacing overlooked research papers that can inform decisions, optimising urban planning, and evaluating policies in real time.

But what exactly are the capabilities and limitations of AI in the policy domain? To explore this, we must first take a step back and discuss the general capabilities of AI.

Machines that think

In 1950, Alan Turing speculated about the possibility of creating thinking machines. He proposed the Turing Test, also known as the imitation game. The test was defined as a measure of a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human. The test involves a human judge engaging in a conversation with both a human and a machine. If the judge is unable to reliably distinguish the machine from the human based solely on the conversation, the machine is said to have passed the test, demonstrating a form of intelligence.

Seventy years later, machines are closer than ever to passing this test. Modern AI systems can mimic language, sparking concerns about their potential to eventually replace humans in various work settings, from manual labour to white-collar tasks. Yet, as we continue to successfully design machines that can conquer the imitation game, it becomes clear that imitating does not equate to thinking. Indeed, machines still struggle with more structured cognitive reasoning tests.

Differences between human and artificial intelligence

AI currently represents a fundamentally different type of intelligence from that of humans. AI doesn't tire or get distracted; it can process vast amounts of data, perform fast searches in databases and execute complex mathematical computations that optimise processes efficiently. However, AI usually lacks common sense and reasoning, leading to safety concerns such as exacerbating stereotypes or biases already present in data. AI technologies require large amounts of data to train models, and the quality and availability of that data are crucial (see, for example, the art exhibition 'The Library of Missing Datasets', which highlights which data is not collected around the world). Most current AI systems also have high and growing energy demands compared to the frugality of the human brain.

In contrast, human intelligence is broader and more nuanced. Humans can learn from just a few examples and generalise that learning to new domains. Humans can judge novel and changing situations and quickly shift between short-term and long-term strategies. We are imaginative, creative, social, intuitive and empathetic, embodying a more integrative notion of intelligence that AI currently struggles to replicate.

Can machines help humans think?

One of the most ambitious goals of AI has always been to support decision making: 'technologies to think with', as Professor Yvonne Rogers called them in her 2022 Robin Milner Medal lecture at the Royal Society. Perhaps a first step towards truly validating machine thinking is to ask: can machines help humans to think, and so augment our intelligence?

By focusing on augmentation rather than replacement, we could harness the strengths of both types of intelligence. Viewing AI as a tool to enhance our human capabilities rather than supplant them is not only safer but also more advantageous.

We know AI can make efficient decisions in very well-defined scenarios. In games like Go, for example, such tools can help professional players strategise better. But could AI assist people in one of the most complex decision-making arenas of all: public policymaking?

Policymaking needs augmented intelligence

Consider how well-defined a game of chess is compared to the myriad constraints and considerations encountered while crafting a policy. Such a real-world, open-ended setting demands the ability to anticipate and adapt to sudden changes and uncertainty while simultaneously crafting a future strategy. Most importantly, it also requires weighing social, political and ethical considerations, all of which are hard to formulate into a mathematical equation that AI technologies could optimise for. Try, for example, to specify a function that best captures human values; you will soon realise that this is far from a trivial question.
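To see why, consider a deliberately naive, hypothetical sketch of what such a 'human values' function might look like in code. Nothing here comes from a real system; the terms and weights are invented precisely to show where the difficulty lies.

```python
# A toy, hypothetical 'objective function' for a policy, to show why
# encoding human values mathematically is not trivial. None of these
# weights or terms comes from any real system; they are assumptions
# made purely for illustration.

def policy_value(gdp_growth: float,
                 emissions_cut: float,
                 inequality_reduction: float,
                 civil_liberties: float) -> float:
    # Whose weights are these? Putting 0.4 on growth versus 0.2 on
    # liberties is a political choice, not a technical one, and
    # reasonable people will disagree with any fixed setting.
    weights = {"growth": 0.4, "climate": 0.2, "equity": 0.2, "liberty": 0.2}
    return (weights["growth"] * gdp_growth
            + weights["climate"] * emissions_cut
            + weights["equity"] * inequality_reduction
            + weights["liberty"] * civil_liberties)

# Even before debating the weights, this formulation silently assumes
# that the four goods are commensurable and can be traded off linearly,
# that nothing important was left out, and that each input can be
# measured on a comparable scale. Every one of those assumptions is
# contestable.
```

The obstacle is not writing the code; it is that every weight, every included (or omitted) term, and the very assumption that these goods can be traded off on a single scale are political and ethical judgements, not technical ones.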

While AI may excel within well-defined environments, our human ability to reason and strategise across novel and unexpected scenarios means that AI is very far from being able to automate the whole policymaking cycle. Human intelligence remains essential in many policy tasks. Policy design does require analytical and strategic intelligence, both of which AI currently excels at. Yet it also requires building consensus, negotiating effectively, evaluating information critically, questioning assumptions, making well-reasoned decisions, understanding political contexts and considering ethical implications. Critical thinking, and many other types of emotional, creative, social and ethical intelligence, are therefore at play.

However, AI could still serve as an invaluable support tool. It can complement our cognitive abilities by analysing patterns in vast datasets and constructing so-called digital twins: simulations of physical and/or social phenomena that can be used to test different scenarios, along with their consequences and risks. One of the long-term benefits of integrating AI tools into policymaking could be its potential to dismantle the silos, such as education, health, and labour, that traditionally constrain government policies and processes. These silos act as barriers, separating related datasets that could yield better, broader insights if combined. By expanding the scale and diversity of information available to decision-makers, AI could enable governments to address problems more comprehensively.

Human-centred AI can support policymakers

AI in policymaking should therefore be conceived as an assistive tool: an empowering exoskeleton for policymakers that aids in navigating complex decisions, envisioning desirable futures, surfacing patterns in data worth investigating, and assessing the risks and impacts of simulated policies.

Such an approach to AI integration can be described as human-centred. Human-centred technology seeks to empower users, communities and society, placing them at its core. A human-centred approach to AI goes beyond human oversight of AI systems (so-called human-in-the-loop); instead, it starts from users' needs and challenges, which become the cornerstone of the technological design.

This, however, is still far from the approach often taken in AI development, in which the emphasis is placed exclusively on the performance of the tool while disregarding the needs of its users. For example, AI applications in healthcare now surpass human abilities at some specific tasks, such as detecting melanoma. Yet tensions remain about the impact of AI on clinical skills, with some professionals raising concerns about deskilling in the medical community.

Consider these 2 design choices. Technology 1 analyses images of skin lesions and predicts with high accuracy whether they are benign or cancerous. Technology 2 does the same, but informs the dermatologist of the probability of each case, highlighting the areas of the image most important to the decision and the levels of (un)certainty; it also brings up relevant literature and similar examples from the database. Whereas in the first case the doctor may need to blindly follow the output of the algorithm, without engaging their medical knowledge, the second design gives the doctor a chance to validate the algorithmic decision and decide whether to trust it based on wider evidence. It also gives the patient the right to an explanation for any decision made.
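The contrast between the two designs can be sketched in code. The sketch below is hypothetical: the types, fields and values are invented for illustration and are not drawn from any real diagnostic system.

```python
# A hypothetical sketch of the two design choices. The types and
# fields are invented for illustration; no real diagnostic system
# is being described.

from dataclasses import dataclass, field

# Technology 1: the model returns only a verdict, which the doctor
# can either follow blindly or ignore.
def classify_lesion_v1(image) -> str:
    return "malignant"  # placeholder for a model prediction

# Technology 2: the same prediction, but surfaced with everything a
# clinician needs to engage their own judgement.
@dataclass
class LesionAssessment:
    probability_malignant: float  # a calibrated probability, not a bare label
    uncertainty: float            # how confident the model is in itself
    salient_regions: list = field(default_factory=list)  # image areas driving the prediction
    similar_cases: list = field(default_factory=list)    # comparable examples from the database
    references: list = field(default_factory=list)       # relevant literature

def classify_lesion_v2(image) -> LesionAssessment:
    return LesionAssessment(
        probability_malignant=0.87,
        uncertainty=0.10,
        salient_regions=["upper-left border irregularity"],
        similar_cases=["case_0412", "case_0781"],
        references=["doi:10.xxxx/example"],  # placeholder reference
    )

# Same underlying model, but v2's output lets the doctor validate the
# decision against wider evidence, and gives the patient something
# that can ground an explanation.
```

The underlying model could be identical in both cases; what changes is the interface contract, and with it the clinician's ability to exercise judgement.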

The difference between these 2 examples is the human-centred features of Technology 2. While Technology 1 marginalises the needs and challenges of both the doctor and the patient, Technology 2 puts these at the core of the design. In this and many other domains where AI technologies could potentially be deployed, we need to be asking: what are the current needs and challenges that users face? How could we build AI technologies that truly support those?

AI should never replace policymakers, yet responsibly augmenting their intelligence could become a significant asset in addressing complex global challenges. While human and artificial intelligences differ greatly, there is great potential in the combination of their strengths. Imagine what we could achieve if AI technologies were intentionally designed to complement and enhance our own intelligence, enabling us to better address the complex socio-environmental challenges facing policymakers and humanity today.

About the author

Dr María Pérez-Ortiz is Associate Professor of Artificial Intelligence for Sustainable Development at University College London.

[Animation: people going up and down escalators, with binary code in the background]

This idea is part of the AI for public good topic.

Find out more about our work in this area.
