
AI and the power of narratives

Yasmin Ibison reflects on the 4 essays in our series exploring AI for public good, written by an exciting and creative group of thinkers.

They explore the impacts of mainstream narratives on AI, the stories they tell and the voices they do and don’t include.

Written by: Yasmin Ibison
Reading time: 6 minutes

To kick off JRF’s AI for public good programme, we asked 4 innovative thinkers to explore the question: how can we ensure that mainstream narratives concerning AI build literacy, balance potential risks and rewards, and foster genuine public engagement?

In our introductory blog we spoke about the polarising visions of AI that clog up current mainstream media narratives, with claims that AI will either save or end the world. If you have a vested interest in AI, or work in the field, it’s not hard to see past the sensationalism. Yet, for most people, the quick-digest news and clickbait headlines have an insidious effect – they keep us critically unaware, afraid and feeling powerless against the march of this technology. We skim the surface of AI discourse, whilst fundamental developments and decisions pass us by.

There is a real need, therefore, to dive beneath the surface, shedding light on the power of mainstream AI narratives, how they impact us, and how we can reshape them. These essays do just that.

Below, we map out the arguments of all 4 essays, before JRF’s Senior Policy Advisor, Yasmin Ibison, reflects on what these pieces teach us and what questions remain.

What did our authors say?

Dr Emma Stone, Director of Evidence and Engagement, Good Things Foundation

Emma argues that AI is shifting the goalposts of digital inclusion. She debunks the myth that heralds AI as advancing life for everyone, reminding us instead of the millions of people who still lack the most basic digital skills and connectivity. These individuals will be rendered invisible in the datasets, design and debates surrounding AI. Emma says:

It is hard to see how we can shape a fairer future in which AI benefits everyone without first fixing the UK’s digital divide.

Yet she is also hopeful, platforming civil society organisations committed to upskilling individuals in AI and calling for more of them to monitor AI developments at a systemic level.

Tania Duarte, Founder, We and AI and Ismael Kherroubi Garcia, Founder and CEO, Kairoi

Tania and Ismael highlight the relationship between AI narratives and AI literacy in their essay. They argue:

The public at large should have the skills to know and understand AI concepts; these skills should not be developed through a solely technical lens. Instead, AI literacy programmes must also consider the social implications of such technologies.

The UK lags behind its peers in developing and implementing AI literacy frameworks and interventions. Failing to improve public literacy levels allows harmful AI narratives to take root. This has 2 compounding effects.

Firstly, in the absence of government or civil society literacy programmes, tech companies fill the gap, and their market dominance perpetuates the presentation of AI in an unambiguously positive light. Secondly, overlooking AI literacy strengthens the tech industry’s ability to wield narrative power, shaping the terrain of policy and limiting public input. Conversely, increasing public literacy increases public power and voice, particularly in relation to democratic and civic engagement.

Dan Stanley, Executive Director, Future Narratives Lab

Dan explores how AI is a technology deeply entwined with narrative. He begins by charting the landscape of current debates between AI supporters and opponents, before highlighting the worryingly extreme belief systems that underpin both views.

He then focuses on what we don’t talk about in relation to AI – namely, its hidden human and environmental costs. Deftly weaving between what is foregrounded and what, crucially, is hidden, he dives deep into the power behind AI. Dan writes:

Mass consumption of readily available public data… is fundamental to the extractive way that AI works, and our predominant narratives of data are vital to this. If we are going to engage the public meaningfully in the root causes of AI systems, we need to build a public concept of data, as a collective, common good.

Jeni Tennison, Founder and Executive Director and Tim Davies, Research and Practice Director, Connected by Data

Jeni and Tim argue that meaningful public engagement is an antidote to AI narratives that reinforce unequal power and decision-making hierarchies. They explain that:

AI is often seen as a global and cross-cutting phenomenon and therefore in need of global and cross-cutting rules and regulation. This can lead us to forms of decision-making that either ignore people altogether, or aim for shallow but broad, often technology-mediated, participation.

However, the People’s Panel on AI shows there is an alternative approach. This group attended AI Fringe sessions, spoke with experts and gained practical experience with AI tools. Through reflection and deliberation, they collectively produced recommendations for industry, government, academic and civil society stakeholders.

What have we learnt, and what questions are we left with?

On the surface, AI narratives draw from, and reinforce, shared cultural reference points such as science fiction imaginings of robots and other intelligent machines. Yet, in drawing attention to these familiar tropes, these narratives also obscure fundamental truths about how AI operates and the impact it has. Many of the writers highlighted how mainstream narratives hide the real environmental and human costs of AI, limiting the focus to the age-old balancing act of risk and regulation, alongside speculative existential concerns.

Deeper analysis, however, reveals a need to focus acutely on data as foundational to AI. Many AI systems are built on biased or incomplete data, with digitally excluded people’s perspectives missing altogether, leading to flawed policy, decision-making and outright discrimination.

So, to overcome these data gaps, should we collect more and better data to train AI tools, targeting those who are disproportionately marginalised? Or should we change how we perceive data – shifting narratives to treat data as a collective good rather than an individual asset? Or should we challenge the notion of transitioning to an increasingly ‘datafied’ world altogether?

A common theme across all the pieces was that shifting AI narratives to serve the public interest requires an empowered civil society leading change on multiple levels. For individuals and organisations, civil society can deliver AI skills and literacy programmes, connecting AI tools to people’s everyday realities and providing safe spaces for experimentation. Civil society organisations hold relational and community power, and are expertly situated to consider both the social and technical implications of AI for the communities they serve.

Civil society can also lead the charge to build new data and AI narratives that speak to people’s values, identities, concerns, and realities – narratives in which the experiences of those already impacted by AI, and those whose perspectives are currently excluded, are centred.

On a structural level, civil society organisations can model strong civic and democratic engagement – holding government and industry accountable, advocating for equity and justice, strengthening public voice and taking collective action. Yet the question remains: can they ever be equal collaborators with tech companies? Could the AI tech industry work in partnership with civil society – and, in such a relationship, could the scales of power ever tip in favour of the public over profit?

All our writers agreed that there is a need to scale solutions from the ground up – such as reshaping data as a common good, expanding public participation in AI, or building nationwide literacy programmes. This will require long-term resources, patience and imagination to yield results. A final question, then, is what roles government and funders can play in building an ecosystem ripe for scaling such solutions.

As always, we invite you to join the debate – if you have any thoughts or questions about our AI for public good programme, you can contact Yasmin at yasmin.ibison@jrf.org.uk or on X: @yasminibison.

Illustration: Yasmine Boudiaf & LOTI / Better Images of AI / Data Processing / Licensed by CC-BY 4.0


This reflection is part of the AI for public good topic.

Find out more about our work in this area.