Frontier AI: double-edged sword for public sector

New approaches to public policy and decision-making need a comprehensive understanding of what this remarkable technology offers and its potential implications.

Written by: Zeynep Engin

The power of the latest AI technologies, often referred to as ‘frontier AI’, lies in their ability to automate decision-making by harnessing complex statistical insights from vast amounts of unstructured data, using models that surpass human understanding. The introduction of ChatGPT in late 2022 marked a new era for these technologies, making advanced AI models accessible to a wide range of users, a development poised to permanently reshape how our societies function.

From a public policy perspective, this capacity holds out the promise of personalised services at scale, potentially revolutionising healthcare, education, local services, democratic processes, and justice by tailoring them to each individual’s unique needs in a digitally connected society. The ambition is to achieve better outcomes than humanity has managed so far without AI assistance. There is certainly vast room for improvement, given the current state of global inequity, environmental degradation, polarised societies, and the other chronic challenges facing humanity.

However, it is crucial to temper this optimism with a recognition of the significant risks. On their current trajectories, these technologies are already starting to undermine hard-won democratic gains and civil rights. Integrating AI into public policy and decision-making processes risks exacerbating existing inequalities and unfairness, potentially leading to new, uncontrollable forms of discrimination at unprecedented speed and scale. The environmental impacts, both direct and indirect, could be catastrophic, while the rise of AI-powered personalised misinformation and behavioural manipulation is contributing to increasingly polarised societies.

Steering AI in the public interest requires a deeper understanding of its characteristics and behaviour. To imagine and design new approaches to public policy and decision-making, we first need a comprehensive understanding of what this remarkable technology offers and its potential implications.

Transforming data into evidence

Before the latest forms of AI, ‘evidence-based policy’ referred to decisions grounded in objective insights derived from scientific methodologies. Official statistics were constructed from carefully designed data collection and established mathematical and statistical models. While challenges with the availability, accessibility, and objectivity of high-quality data persisted, established statistical paradigms (such as Bayesian inference) allowed scientific evidence to be generated ‘logically’ even from subjective and imperfect data. The role of statistics and scientific methodologies was relatively clear, although it required highly skilled knowledge brokers and scientific advisors to translate their insights into the policymaking realm. The ongoing challenges included the ever-changing complexity and context of policy problems, as well as the biases and imperfections inherent in established human and institutional decision-making processes.
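
To make that ‘logical’ updating concrete, here is a minimal sketch of a Beta-Binomial update, a textbook workhorse of Bayesian inference. The uptake scenario, the prior, and the survey figures below are invented purely for illustration; the point is that a subjective starting belief is revised transparently, by an auditable rule, as imperfect evidence arrives.

```python
from scipy import stats

# A policymaker's subjective prior about a new service's uptake rate:
# Beta(2, 8) encodes a weakly held expectation of roughly 20% uptake.
# (All figures here are illustrative, not drawn from any real programme.)
prior_a, prior_b = 2, 8

# Imperfect evidence: a small, possibly unrepresentative survey in
# which 14 of 40 respondents report using the service.
successes, trials = 14, 40

# Bayes' rule for this conjugate model: posterior = Beta(a + successes,
# b + failures). The prior and the data are combined 'logically'.
posterior = stats.beta(prior_a + successes, prior_b + (trials - successes))

print(f"Posterior mean uptake: {posterior.mean():.1%}")
low, high = posterior.interval(0.95)
print(f"95% credible interval: {low:.1%} to {high:.1%}")
```

Every step in that chain, from prior to posterior, can be inspected, challenged, and re-run with a different prior; that traceability is precisely what the newest AI models give up.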

With AI, we now have the luxury of processing massive amounts of unstructured data to produce seemingly very useful insights in real time. From free-text citizen feedback forms to organic online social media activity and everyday transaction data from a variety of sources, we have sophisticated algorithmic processes that can perform impressive analyses and present the outcomes neatly. The catch is that the ‘logic’ of this processing is no longer comprehensible to us as humble human beings. AI keeps getting ‘better’, gradually surpassing human performance in many critical tasks (for example pattern recognition and large-scale data integration) and proving its usefulness in domains that typically suffer from skills and resource shortages (for example healthcare, agriculture, and environmental monitoring and conservation). Yet we are no longer able to interpret and reason through the outcomes of such highly complex statistical processing.
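
For a flavour of what such processing looks like in practice, here is a minimal sketch of a standard topic-clustering pipeline over free-text feedback. The comments and the choice of three themes are invented for illustration, and real deployments use far larger corpora and far more opaque models; even this simple pipeline already groups text by learned numerical features rather than by any rule a human wrote down.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical free-text citizen feedback; in practice this would be
# thousands of comments exported from surveys or feedback forms.
comments = [
    "The bus service on route 12 is always late in the mornings",
    "Longer waits at the GP surgery since the new booking system",
    "Potholes on the high street still not repaired after months",
    "Evening buses are unreliable and often cancelled",
    "Very hard to get a doctor's appointment online",
    "Road surface near the school is dangerous for cyclists",
]

# Convert raw text into numerical features, then cluster into themes.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for theme in range(3):
    print(f"Theme {theme}:")
    for text, label in zip(comments, labels):
        if label == theme:
            print("  -", text)
```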

Whether these AI-derived insights still count as ‘scientific evidence’ is a fascinating debate. Whether we should use such insights in high-stakes policy contexts is also a complicated question. Additionally, policy-making contexts remain complex, biased, and imperfect. Still, the prospects of generating much better syntheses of all available data and evidence in real time, along with improved foresight capacities for policymaking, are too promising to ignore.

Automating decision-making

New types of ‘evidence’, combined with automated decision-making, introduce the next level of complexity for AI-powered policy-making and public service delivery.

First and foremost, it is essential to understand that AI automation is not like the predictable efficiency gains of factory robots operating in controlled environments. It is more akin to an assistant: brilliant at times, dull or erratic at others, and generally unpredictable. This kind of automation increasingly involves artificial agents interacting with humans and other systems in ways never experienced before, requiring significant cognitive and behavioural adaptation on the humans’ side. We are also set to face a future in which artificial agents increasingly interact amongst themselves and with physical infrastructure (for example AI assistants coordinating with each other and with city infrastructure to plan a meeting), with the impact of their ‘actions’ propagating across our increasingly hybrid human-machine ecosystems.

In government contexts, AI automation extends beyond merely streamlining administrative and bureaucratic processes to sharing human and institutional decision-making powers in various forms. When AI is used as a decision-making tool, human cognition no longer focuses primarily on solving complex, data-heavy problems, but on asking the right questions in machine-executable form and ensuring that the outcomes of such complex processing do not harm humans or the environment. In decision-making environments populated by many types of AI agents, human cognition must recognise and respond appropriately to algorithmically initiated or supervised actions, and navigate the new complexities these introduce into existing human and institutional decision-making processes.

The central question in this debate is the acceptable degree of automation in decision-making: the balance between human oversight and machine autonomy. With AI automation, we are not just optimising process efficiencies, as in industrial automation, but delegating aspects of human decision-making power. This delegation introduces algorithmic subjectivity, biases, imperfections, and potential issues such as ‘hallucinations’ or ‘maliciousness’ into decision-making processes, with the potential to affect human lives and the environment in ways that differ from traditional human and institutional decision-making errors.
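
One way that balance can be made explicit in a system’s design is a human-in-the-loop gate, sketched below. The Decision type, the confidence score, and the 0.9 autonomy threshold are all invented for illustration; the point is that the oversight/autonomy boundary becomes a reviewable parameter rather than an implicit property of the model.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    confidence: float  # the model's self-reported confidence, 0 to 1

def route(decision: Decision, autonomy_threshold: float = 0.9) -> str:
    """Illustrative gate: automate only high-confidence decisions;
    escalate everything else to a human reviewer."""
    if decision.confidence >= autonomy_threshold:
        return f"auto-approved: {decision.outcome}"
    return f"escalated to human reviewer: {decision.outcome}"

print(route(Decision("grant benefit renewal", 0.97)))
print(route(Decision("deny housing application", 0.62)))
```

Where to set such a threshold, and whether self-reported confidence is trustworthy at all, is exactly the kind of question the oversight debate needs to settle.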

Despite these risks, one clear advantage of AI automation in public decision-making is that it forces us to confront and address fundamental problems (such as bias and discrimination) in our existing decision-making processes. This moment presents an opportunity to rethink and reimagine how we approach challenges that have long plagued humanity and that may have been overlooked or inadequately addressed so far (such as racism and environmental damage).

Personalising public services

One of the most transformative promises of frontier AI in the public sector is its potential to personalise services. Traditional human and institutional processes have historically offered ‘standardised’ services, designed to meet the needs of the largest possible population groups within the constraints of limited resources. Frontier AI, however, introduces the capacity to tailor services to the individual level, whether by customising medical treatments, optimising educational experiences, or responding to citizen inquiries in real time.

This ability to deliver personalised services signals a profound opportunity to build more equitable societies, where individuals are treated according to their unique circumstances and potential rather than being fitted into one-size-fits-all solutions. The promise is a future where public services are not only more efficient but also more just, responsive, and effective.

However, this potential collides with a foundational governmental principle: equal treatment for all citizens. A paradox arises when we attempt to hyper-personalise services within ‘standardised’ policy structures that were originally designed to ensure fairness and equality. The very act of tailoring services to individual needs opens new space for inequality, imperfection, and favouritism in policy and decision-making.

This creates a complex balancing act: how do we harness the efficiency and effectiveness of personalisation without compromising the principles of fairness and equal treatment? The private sector’s success in leveraging AI to personalise services in relatively low-stakes contexts, such as marketing and advertising, demonstrates what the technology can do. But in high-stakes public sector contexts, such as public resource allocation, law enforcement, and neighbourhood safety, the implications are far more consequential. Here, the question shifts from whether we should personalise services to whether we can afford not to, especially as private sector actors increasingly set the standard.

The cost of over-personalisation could involve the erosion of fundamental democratic principles, while the cost of insufficient personalisation might include inefficiencies, misallocated resources, and diminished public trust. Striking the right balance demands a nuanced approach that weighs the potential and risks of AI-driven decisions. It requires continuous dialogue and adaptation to ensure these systems align with our core values and principles.

Beyond frontier AI technology

Algorithms are increasingly, and often invisibly, shaping decisions in medicine, transportation, urban planning, finance, defence, dispute resolution, crisis management, elections, information dissemination, and resource allocation. When data is relatively objective and expected outcomes are clear (for example medical diagnostics or transport), frontier AI excels. But when data is questionable and decisions require creativity and judgment (for example sentencing or hiring), frontier AI can reinforce historical problems and create new challenges across decision-making, automation, and personalisation.

The potential is there to develop more robust, ethical, and effective frameworks for governing these technologies that better serve society’s needs, but only if we manage to have this conversation on the right terms. A complex interplay of factors, including infrastructure, socioeconomics, and politics, shapes AI’s impact on governance. Recent global IT disruptions highlight the vulnerabilities of our interconnected systems, and as AI development becomes increasingly concentrated in the profit-driven private sector, shifting power dynamics challenge how public service delivery models will be designed.

Ultimately, the successful integration of frontier AI into public governance will depend on our ability to navigate these complexities with foresight, transparency, and a commitment to the public interest. Only then can we truly unlock the promise of AI to improve societal outcomes and address the pressing challenges of our time.

About the author

Dr. Zeynep Engin is the Founder, Chair and Director of Data for Policy CIC, Editor-in-Chief of Data & Policy at Cambridge University Press, and a Senior Research Associate at UCL.

