Public policymaking: from AI to decomputing

AI seems to offer many benefits to public policymaking, but it can't address the tricky structural issues that impede actual change.

Written by:
Dan McQuillan

AI in its various forms, both predictive and generative, seems to offer a plethora of benefits to public policymaking, from the analysis of complex data to the optimisation of resources. Indeed, the kinds of actionable abstraction offered by AI (statistical patterns derived from data that can be used to make decisions or synthesise responses) would seem to be a good fit for both seeing the bigger picture and finding the levers to alter it.

The use of AI means that policy is ‘data-driven’ in a way that implies some element of objectivity, especially when tackling contentious issues such as rationing or replacing public services. The primary risk here, however, is the replacement of actual policymaking by a distorted form of technical action. Despite its claim to assist or even automate the production of policy solutions, AI is incapable of addressing the tricky structural issues that impede actual change. What it offers instead is a performative ‘fix’ that both obscures and amplifies the underlying problems.

Correlations and hallucinations

AI, whether in the form of plain old neural networks or Large Language Models like ChatGPT, learns its patterns from the world as it is. While these algorithms can find a path between messy input data and desired outcomes, the underlying mechanism is one of reductive correlation, not causal analysis; there’s nothing based on how things happen, just that certain patterns seem to recur. AI’s complexity introduces a fundamental opacity to the link between input and output, making it impossible to be sure exactly why it generated a particular result and removing any route to due process. This is compounded in real-world applications, where AI’s apparently authoritative outputs can become self-fulfilling: a machine learning algorithm labelling a family as ‘troubled’ can set up a feedback loop of interactions between family members and social services. In this way, AI emulates well-understood sociological phenomena such as stereotyping and stigmatising, but does so at scale. The process of training AI takes the intersectional entanglements of society and culture and distils them into impenetrably large models. Its inferences are giant engines for rehashing the status quo into amenable representations, not for generating genuine insights.

A prerequisite of good policy is that it should be grounded in reality. Yet this connection is severed by AI’s computations, something that has become familiar to most people in the form of ChatGPT’s ‘hallucinations’. The most important thing to realise is that when a Large Language Model makes things up, it isn’t ‘going wrong’ but functioning exactly as designed. Its learned optimisations are founded on emulating language, not understanding it, and whatever actual meaning there might seem to be in the output is something projected into it by us, the users. The same principle applies to any of AI’s predictions or image classifications. Whether AI is being applied at the sharp end to predict welfare fraud, or simply used by a policymaker to ‘chat’ with a trove of prolix policy documents, it degrades the reliability of the results.

Scaling injustice

Evidence already suggests that the entanglement of algorithms with policy solutions leads to the arbitrary scaling of unfairness and cruelty. Scandals abound, from Robodebt in Australia to the ‘Toeslagenaffaire’ of Dutch childcare benefits, all of which could have been prevented by listening to the voices of those affected. But AI introduces epistemic injustice, where people’s capacity to know their own situation is devalued in relation to algorithmic abstractions. While AI, like bureaucracy, is presented as a generalised and goal-oriented form of rational process, it actually produces thoughtlessness: the inability to critique instructions, the lack of reflection on consequences, and a commitment to the belief that the correct ordering is being carried out. Even worse, so-called generative AI offers the additional capacity to simulate broad consultation, whether through the hallucinatory ‘interpretation’ of vast numbers of public submissions or the literal simulation of a virtual and allegedly more diverse public by replacing real people with generative AI avatars. The technocratic approach implemented by AI is the opposite of a mechanism that is responsive to the contingencies of lived experience. AI is never responsible because it is not response-able.

Taking AI’s attributes as a whole, its application to policymaking or as a tool of policy will intensify social injustice. AI’s offer to social ordering isn’t the generation of alternative arrangements of power but mechanisms of classification, ranking and exclusion. Every identification by AI of a school pupil ‘at risk of NEET status’ or a disability claim ‘with an elevated risk of fraud’ is the mobilisation of a world view that elevates abstracted misrepresentations over the complexity of lived relations, and does so in the interests of institutions not people. Sedimented in the stark injustices of the status quo, AI’s solutions tend inexorably towards necropolitics, that is, forms of decision-making that modify the distribution of life chances through designations of relative disposability. Diverting people at scale from educational pathways or from the benefits they need to survive, for example, constitutes an algorithmic filter for who is welcome in society and who isn’t.

Unsustainable solutionism

Unfortunately, the pressure on policymaking to adopt AI will be immense because of wider commercial and ideological commitments. The stated belief of both main UK political parties is that AI is the future of economic growth and strategic influence. However misguided this will turn out to be, such a belief impacts the process of policymaking in the here and now. Whereas all public policymaking should have ethics at its heart, the then UK government downgraded such concerns in favour of the nebulous concept of AI safety, with its focus on ‘existential threats’.

The net effect is that the mundane ethical impacts of everyday structural injustices, such as access to social security, adequate healthcare or a decent education, are overlooked in favour of planetary-scale fears. Moreover, this sci-fi focus carries a conceptual payload of its own, as witnessed by the influence of ideologies in policymaking about AI, such as effective accelerationism (a better society means unconditionally embracing tech) and longtermism (the only thing that matters is getting to artificial general intelligence; other problems like hunger, poverty or disease are secondary). Such perspectives overlook harms to marginalised populations in the present, in favour of a utilitarian moral calculus of imagined future generations.

Equally distorting, in terms of policy, is the scale of AI’s material infrastructure and its implications for the distribution of power and control. It’s widely understood that training so-called foundation AI models (models that can be applied to many different use cases) requires datasets and amounts of computer processing on a vast, unprecedented scale. Hosting this activity in hyperscale data centres leads to demands for energy and cooling water that have environmental and climate impacts.

A world in which policymaking and policy solutions are powered by AI is also one which has handed significant leverage to the small handful of companies that can command such resources. Housing policy is usurped when, as happened recently, the Greater London Authority (GLA) had to halt applications for new housing developments in West London because data centres were taking up the available grid capacity. Similarly, the climate aspects of any policymaking are compromised when AI servers are demanding all available renewable energy capacity to sustain the illusion of their green credentials. The embrace of AI is a commitment to extractivism and the transfer of control at a level that supersedes any actual policy.

Policymaking that supports decomputing

Adopting AI for public policymaking would be to submit policy to wider corporate and ideological agendas: those that have already decided the future of civilisation is so-called artificial general intelligence (AGI), those that have decided the best response to structural crisis is to paper over it with AI hype, and those that have concluded that maintaining revenue in a global downturn is best achieved by substituting real workers with shoddy AI emulations. The net effect of AI in policymaking would be to make public services more precarious and to increase outsourcing and privatisation under the cover of over-hyped technology. This constitutes a form of ‘shock doctrine’, where the sense of urgency generated by an allegedly world-transforming technology is used as an opportunity for corporate capture and to transform social systems in frankly authoritarian directions without reflection or democratic debate.

Rather than asking how AI will pervade policymaking, the focus should be on public policymaking that supports decomputing: that is, a sociotechnical strategy of reduced dependency on computational scale, of maximum participation by affected communities, and of increased recognition that calculative reasoning is no substitute, in matters of policy, for reflective and discerning judgement.

AI, as an apparatus of computation, concepts and investments, is the apotheosis of the ‘view from above’, the disembodied abstraction of privileged knowledge that already plagues some forms of policymaking. As generative AI makes clear, it is also a commitment to growth at all costs, to expansion over lived experience. A pivot to decomputing is a way to reassert the value of situated knowledge and of context over scale. In contrast to AI’s predictions and simulations, our shared reality is complex and entangled, and the future can’t be determined by theory alone. That doesn’t mean we can’t advance towards desired goals like social justice and a just transition, but decomputing suggests that we approach them in ways that are both iterative and participatory. The real work of restructuring reorients the focus from toxic tech to developing techniques for redistributing social power, such as people’s councils and popular assemblies, and for enrolling them in the processes of prefigurative change.

About the author

Dr Dan McQuillan is Lecturer in Creative and Social Computing at Goldsmiths, University of London, and author of the book ‘Resisting AI: An Anti-fascist Approach to Artificial Intelligence’, published in 2022 by Bristol University Press.
