
AI in public policy: history says balance scepticism and awe

We must treat AI as one powerful, but flawed, tool if we are to avoid the old trap of overconfidence with new technology.

Written by:
Rory Weal

135 years ago, a wealthy businessman in the shipping trade set out on what seemed to many of his contemporaries like an absurd undertaking. Charles Booth assembled a team of researchers who, armed with paper and pencils, walked London street by street to map the prevalence and nature of poverty.

After 17 years of research, 17 volumes of analysis, and hundreds of conversations with Londoners, the Inquiry into Life and Labour in London (1886-1903) was published. It proved to be a landmark in the early field of social science research and in evidence-based social policy. Alongside the work of contemporaries such as Benjamin Seebohm Rowntree, Booth’s findings influenced a raft of public policies, ranging from local sanitation measures to national Old Age Pensions and free school meals on the eve of the First World War.

While Booth’s study was remarkable for its time in the sheer quantity of data gathered, artificial intelligence (AI) today allows us to utilise data on a scale many orders of magnitude greater. AI has been posited by some as a silver bullet for making public policymaking more productive, efficient, and evidence-based, and by others as a dystopian harbinger of government by algorithm. The implication of both arguments is that we are in completely uncharted waters.

However, for all its genuine newness, AI also reveals a much older reality from Booth’s time. Evidence and data, used effectively, can help us better understand and tackle the interrelated root causes of social issues; but unshackled techno-hubris can encourage the worst excesses of our deficit-led, top-down, and paternalistic state.

New possibilities or shock of the old?

The potential of AI to improve policymaking lies in its capacity to make predictions and classifications, and to synthesise vast quantities of data quickly with limited human effort. These capacities can be deployed at several distinct stages of the policymaking process: issue identification, policy development and selection, implementation, and evaluation. At each stage, AI could be used to identify connections between distinct experiences, allowing us to spot trends and to predict and monitor the impact of policies in real time.

So, what could go wrong with a tool that saves time, makes supposedly ‘objective’ decisions, and promises to automate vast swathes of government bureaucracy? Unfortunately, real-world application tells us: quite a lot. As I wrote for Renewal, social security is one arena where AI has been rolled out at the policy implementation stage in several countries. This has occurred at pace, and the results have been deeply troubling. From Michigan to the Netherlands, AI systems trained on flawed data, compounded by labelling errors and the unaccountability of ‘black box’ decisions, resulted in thousands of people being falsely accused of fraud; the systems were found to be in breach of claimants’ human rights and pushed many into poverty. While the UK is a later adopter, attempts by the Department for Work and Pensions (DWP) to use AI to screen for fraud raise concerns that we are on a similar trajectory.

The social harm created in these cases has its roots in an infatuation with claims that AI can bypass messy political questions with ‘objective’ decision-making. But the bias of the data these algorithms are fed, and the exclusionary social policy agendas they serve, have created a raft of terrible outcomes. Averting these outcomes, while still considering a legitimate role for AI, must start with situating these tools in broader histories of technology, data, and evidence.

The debate about almost every new technology since the industrial revolution has been framed as a decisive break with the past, for good or for ill. In his 2006 book The Shock of the Old, technology historian David Edgerton provided a powerful critique of this long-standing tendency, driven by the interests of the promoters of new technologies, to obsess over the new and neglect the ongoing relevance of the old. The irony is that even our obsession with the ‘newness’ of AI is not, in fact, new. Edgerton writes, ‘tech-utopianism and dystopianism are two sides of the same much debased coin’, describing both as outdated mindsets rooted in twentieth-century intellectual thought.

How might we view the role of AI in policymaking without such novelty bias? We would probably start by recognising that many of AI’s opportunities and risks are more pronounced versions of those attached to the use of data and evidence in policymaking more broadly. To understand this properly, history can help.

AI risks and the deficit-led approach to policymaking

The risks of using biased data in policymaking are nothing new. Booth’s study classified the population into a series of classes, the lowest of which was categorised as ‘vicious, semi-criminal’. The work is also replete with references to ‘rough’ and ‘savage’ people.

If one were to unthinkingly deploy Booth’s work in designing criminal justice policy, dire outcomes are not hard to imagine. Instead, it was deployed by a reforming New Liberal government with a political mandate and priorities to build an early form of what would become the welfare state. Similarly, AI is only as good as the data it is fed and the political agenda it is mobilised in service of.

We know that technology has a way of seducing decision-makers, and AI is particularly seductive because of its ‘black box’ internal logic. This makes it incredibly hard to understand or hold to account, providing a dangerous veneer of objectivity. This could combine in troubling ways with Whitehall policy agendas which look at social problems primarily through a lens of cost and resource management, finding ever more innovative ways to deny people support, reduce entitlements, and push civil society to pick up the slack.

Tasking AI with deficit-led implementation and operational work, such as identifying non-compliance in welfare applications or predicting re-offending, is an arena where the risks are acutely high and demand a high degree of critical scepticism. In contrast, using AI not to gatekeep but to connect and include appears to present fewer risks. Consider the potential of AI in healthcare, from faster diagnoses to clinical note-taking: many of the same privacy and data governance concerns exist, but the acute social harms appear less starkly.

The potential of AI to support joined-up policy

Just as the risks of AI have roots in our past, the possibilities it offers for policymaking also have historical precedent. The great contribution of Booth’s study lay not only in its scale, but in the triangulation of information to illuminate the interconnecting features of the lives of people in poverty: from poor-quality housing, health, and illness to people’s social lives and the communities they lived in. Whilst this was highly original for its time, it is now a staple of social research.

AI can perform similar feats, but in real time and without needing to test a specific hypothesis. This is particularly valuable when it comes to identifying issues and selecting solutions. At a basic level, this can include using large language models (LLMs), such as ChatGPT, to produce assessments of the literature. Similarly, AI tools can use integrated datasets to identify connections that human researchers may miss, and to predict or test the impact of policies on complex problems.

If done with appropriate safeguards and a rigorous testing regime, this offers the possibility of jump-starting the kind of joined-up policymaking across Whitehall departments that has been sorely lacking. It could support the much-vaunted drive towards ‘mission-led’ government, bypassing the often-arbitrary structures of government which treat housing, health, and community cohesion in isolation. One could envisage mission boards supported by linked datasets, allowing policies to be selected and monitored across departmental silos. If clear systemic goals were set, such as ending poverty or reducing river pollution, and the right checks were in place, the possibilities are clear.

Using AI to operationalise policies, especially those relating to marginalised or at-risk groups, should be approached with a high degree of caution. We should aim for the maximum possible transparency and the close involvement of civil society and lived-experience groups, who can often spot unintended consequences better than policymakers. Where this transparency is claimed to undermine the policy objective, as when the DWP argued that transparency over the workings of its fraud detection algorithms would allow people to game the system, we should put the brakes on. Finally, taking a ‘test and learn’ approach, in which real-time data is used to evaluate impacts continuously, could allow for a more flexible approach to cross-government policy delivery.

For any of this to succeed, it needs to support a publicly stated political agenda which seeks to overturn inequalities, not entrench them. When asked to perform deficit-led tasks, AI is expert at baking in exclusion. It is not just the effectiveness of the tool, but the risks implied by the question it is asked, that matter.

This is also why policymaking must continue to prioritise a wider pool of research and evidence techniques, such as innovative qualitative and participatory research methods, to safeguard against exclusion. These efforts should aim to bolster (not replace) the availability of human-centred support, which is integral to building the trusting relationships that are too often neglected in public services.

Resisting ‘age of Sputnik’ tendencies

Between the extremes, a middle course is needed. Politics matters, and just as in Booth’s era, we should not delude ourselves that we can farm out our moral choices and our assessments of difficult trade-offs to datasets. We should treat AI as just one of a suite of data- and evidence-led tools at our disposal, each with its own flaws and limitations.

If we only spot the shiny new tech when making policy, we’ll miss what we already know. Edgerton quotes an old Soviet joke: an inventor goes to the Ministry and says, ‘I have invented a new buttonholing machine for our clothing industry’. ‘Comrade,’ says the new minister, ‘we have no use for your machine: don’t you realise this is the age of Sputnik?’

We would do well to resist the tendency to think of this as a mesmeric ‘age of AI’, and instead treat its deployment soberly: as one powerful, but flawed, tool among many stretching back to Booth’s time. Only then can we continue to improve how policymaking addresses complex social issues, while avoiding the failures that have befallen hubristic approaches to technology in our past.

About the author

Rory Weal has worked on issues relating to inequality, housing and poverty for UK charities. He is a Churchill Fellow and Thouron Scholar specialising in international approaches to social injustice.

[Image: animation of people going up or down escalators, with binary code in the background]
