
Definitions, digital, and distance: on AI and policymaking

Human decisions will drive the deployment of technology in policymaking, for good and ill. Better policymaking should make lives better for humans, which is why we should listen to them.

Written by:
Gavin Freeguard

Try finding a definition of ‘policy’ on GOV.UK and you may struggle.

The Policy Profession website gives us activities, specialisms, subjects, standards, a system and a career model, but no succinct definition of ‘policy’. The Public Policy Design blog admits it is ‘notoriously difficult to explain what policy is and how it is made’.

We have to go back to a 2012 blogpost from the Government Digital Service (GDS) [1], which (after its own journey through indefinite definitions) settles on ‘statements of the government's position, intent or action’. The Service Manual (a GDS innovation) adds that ‘Policy teams design, develop and propose solutions to help meet ministerial objectives based on research and evidence’.

Defining the problem

This definitional diversion is (hopefully) more than just a hackneyed introduction (and an entertaining illustration of just how ambiguous a concept that occupies 7% of the civil service workforce can be). It should root us in first principles: with policy defined, we should be able to define our problem. Our first question is not ‘to what extent can AI improve public policymaking?’, but ‘what is currently wrong with policymaking?’, followed by ‘is AI able to help?’.

Ask those in and around policymaking about the problems and you’ll get a list likely to include:

  • the practice not having changed in decades (or centuries)
  • it being an opaque ‘dark art’ with little transparency
  • defaulting to easily accessible stakeholders and evidence
  • a separation between policy and delivery (and digital and other disciplines), and failure to recognise the need for agility and feedback as opposed to distinct stages
  • the challenges in measuring or evaluating the impact of policy interventions and understanding what works, with a lack of awareness, let alone sharing, of case studies elsewhere
  • difficulties in sharing data
  • the siloed nature of government complicating cross-departmental working
  • policy asks often being dictated by politics, with electoral cycles leading to short-termism, ministerial churn changing priorities and personal style, events prompting rushed reactions, or political priorities dictating ‘policy-based evidence making’
  • a rush to answers before understanding the problem
  • definitional issues about what policy actually is, making it hard to get hold of or develop professional expertise.

If we’re defining ‘policy’ and the problem, we also need to define ‘AI’, or at least acknowledge that we are not only talking about new, shiny generative AI, but a world of other techniques for automating processes and analysing data that have been used in government for years.

So is ‘AI’ able to help? It could support us to make better use of a wider range of data more quickly; but it could privilege that which is easier to measure, strip data of vital context, and embed biases and historical assumptions. It could ‘make decisions more transparent (perhaps through capturing digital records of the process behind them, or by visualising the data that underpins a decision)’; or make them more opaque with ‘black-box’ algorithms, and distract from overcoming the very human cultural problems around greater openness. It could help synthesise submissions or generate ideas to brainstorm; or fail to compensate for deficiencies in underlying government knowledge infrastructure, and generate gibberish. It could be a tempting silver bullet for better policy; or it could paper over the cracks, while underlying technical, organisational and cultural plumbing goes unfixed. It could have real value in some areas, or cause harms in others.

Lessons from digital transformation

The irony of early GDS defining ‘policy’ most clearly is that it was defining itself against policy. Mike Bracken, its director from 2011 to 2015, argued at the time that ‘delivery to users, not policy, should be the organising principle of a reformed civil service’. 

That digital transformation moment is a useful analogy for thinking about AI and policymaking. Early digitisation involved moving existing processes and systems online, as forms and websites. True digital transformation involved rethinking services end-to-end: rather than ‘simply’ putting a form on a website, the service would be redesigned around the user and the problem, ‘applying the culture, practices, processes and technologies of the internet-era to respond to people’s raised expectations’. (The ‘simply’ is in quotation marks because it seldom is, and those less ‘transformative’ changes can be of huge value: think of a benefit claimant facing a shorter, simpler, less intimidating form.) Why issue new vehicle tax discs when you can use cameras to check number plates against a vehicle register? Why not link data to understand someone’s benefit entitlement and issue it automatically?

We could think about how to use AI at different stages in the existing policymaking process (the digitisation analogy). Or we could consider whether AI allows us to imagine a fundamentally different process (the transformation analogy), while remembering that, for all we might imagine something new and radical for the future, a lot of AI systems actually embed the past, shaped by existing data, assumptions, processes and power relationships.

Two other lessons from digital transformation. First, did it deliver on some of the more ambitious claims made at the time, and will transformation based on AI live up to the current hype? Second, how should we measure success? A lot of efficiency and money savings were claimed, but so too were improvements to the citizen experience. Any AI-related transformation of the state should not just be about efficiency, but about efficacy, effectiveness and public good.

Addressing distance in policymaking

One common theme of existing policymaking problems is distance: between different professional specialisms (classically, policy and delivery), between silos (data, departments), between past and present (never learning from history), and between bureaucrat and impact (evaluation). Better use of technology may not be a panacea, but it could help narrow some of that distance, with AI supporting better analysis and sharing of information.

Perhaps the key distance is between state and stakeholders, particularly the public. The Incubator for AI has already built (and published details of) a tool to analyse responses to public consultations, noting that ‘fair, effective and accountable’ automation could find patterns and analyse data, freeing up public servants to understand those patterns. Given that ‘30,000 responses requires a team of around 25 analysts for 3 months to analyse the data and write the report’, and some consultations get a lot more, this could be a big win.

But if we know that AI is analysing our responses, might it change how we respond? Will people try to game the tool so that their responses rise to the top: SEO, but for generative AI? And, back to our digital transformation analogy, is this ‘merely’ using AI to fix or upgrade the existing consultation process, rather than rethinking it altogether?

‘Digital tools and practices are emerging that enable a much wider range of consultation techniques than the survey-based method that is currently dominant’, said a GDS-commissioned report on improving consultations in 2016. It noted that ‘ideation’ and ‘deliberative discussion’ could elicit different, deeper and broader views. Digital tools can help: pol.is uses machine learning to help understand what groups of people think and to find common ground. It is perhaps most famously used in Taiwan, where it forms part of an online and offline consultation process, but it has also been used by Policy Lab in the UK. But if consultations are broken, we should also be looking to analogue tools: the London Borough of Camden co-created its Data Charter with citizens through in-person panels; Connected by Data convened a People’s Panel on AI around an international summit. Distance is diminished through dialogue.

Discussing the right questions

AI may have its uses in improving policymaking, but as part of a toolkit of approaches rather than as a dominant paradigm in itself. We should welcome pilots testing this. And we should welcome the broader opportunity to rethink and reform how we do policy. To do this, we need to be led by basic questions, such as ‘what does good policymaking look like?’ and ‘what is currently wrong with policymaking?’, and ask what new approaches technology can enable, rather than being led by the technology itself.

It may seem ironic that a strength of some AI tools, like pol.is, may be to enable greater human participation in policymaking (doubly so that possible weaknesses, like gaming consultation responses, may also push us in that direction). But it is a salutary reminder to centre humans in this discussion. Many flaws in policymaking are human, not technical, and so will the solutions be. The decisions to deploy technology to support policymaking, for good and ill, will be made by humans. Better policymaking should ultimately make lives better for humans, which is why we should listen to them.

Note

  1. The Government Digital Service (GDS) was established within the Cabinet Office in 2011 to drive the digital transformation of services across government, beginning with the government website. It brought new digital-age approaches, such as testing and iterating and working in the open. In 2021 it was split into GDS, responsible for delivery, and the Central Digital and Data Office, responsible for strategy, standards and support. Both moved to the Department for Science, Innovation and Technology following the 2024 general election.

About the author

Gavin Freeguard is Associate at the Institute for Government and Connected by Data, and special advisor at the Open Data Institute.

