
An update on AI at JRF

Reflections on JRF’s AI for public good programme - what we've learnt so far, and what's next for this topic.

Written by:
Yasmin Ibison

Introduction

In 2023, JRF launched a new workstream exploring the potential for AI to generate public good. This programme was an example of our newer work which highlights original thinking in response to deeper, cross-cutting issues related to our mission. While the work sits within our Insight and Policy Directorate, it was not designed as a traditional policy programme. Rather, we sought to position ourselves as an explorer, whose role was to pose interesting questions before convening and commissioning individuals to generate new ideas in response to them.

Instead of centring a singular perspective, we prioritised plurality. Instead of focusing solely on technical expertise, we embraced sociotechnical perspectives – viewing people, technology and society as interconnected parts of a whole system. Our aim was to broaden and deepen social commentary on AI, cutting through the hype and engaging new audiences. In particular, we wanted this work to speak to those who may be new to the topic, those who come from non-technical backgrounds and those feeling excited, nervous or overwhelmed by AI (or a combination of those!).

This blog wraps up the AI for public good programme, which has now come to an end. It recaps the work we have delivered and reflects on what we have learnt. Finally, it discusses how we will take forward AI as a topic at JRF.

What have we done?

Since launching, the AI for public good workstream has delivered activity across 4 main themes:

AI and the power of narratives

This theme comprised 4 essays and an accompanying webinar examining the impact of mainstream narratives on AI. Essay writers dissected AI narratives in relation to digital inclusion, AI literacy, data commons and public engagement.

“AI is a technology deeply entwined with ‘narrative’, not only through fictional stories and myths influencing its reception, but also the wider cultural structures, beliefs and ideologies that shape debate, and provide competing interpretations of the latest developments.”

- Dan Stanley, Executive Director at Future Narratives Lab

AI and civil society

This theme centred on a mixed-methods research project, which we commissioned We and AI to deliver. It engaged 51 organisations through a survey, discussion groups and interviews to explore:

  • Why and how do non-profit and grassroots organisations engage with generative AI tools?
  • What are the main drivers and key elements that shape this engagement?
  • How do these organisations see their role in shaping the broader AI debate?

"We are a grassroots organisation, we have no plan as to how we’re going to use [generative AI] at the moment. It’s just lending itself quite nicely, and we’ve got absolutely no thoughts as to how we need to control that."

- Discussion group participant

Research findings, outlined in both a report and a webinar, reveal that the promise of generative AI’s efficiency gains is causing much excitement, with high levels of adoption.

However, most organisations lack formal governance structures concerning generative AI usage, which may expose them to risks. Non-profit and grassroots organisations are also largely excluded from AI discourse, and there is limited consensus on the sector’s role in shaping debate or the levers available to influence decisions.

AI and the public sector

We asked 7 experts the question: 'To what extent can AI improve public policymaking?' The authors produced a range of reflections – from the cautiously optimistic to the fiercely critical.

“AI may have its uses in improving policymaking, but as part of a toolkit of approaches, rather than as a dominant paradigm by itself. We should welcome pilots testing this. And we should welcome the broader opportunity to rethink and reform how we do policy.”

- Gavin Freeguard, Associate at the Institute for Government and Connected by Data, and special advisor at the Open Data Institute

AI, power, relationships and values

This theme interrogated AI developments in relation to power structures and political economy. As part of this, JRF designed and delivered a workshop, convening diverse actors from across the AI ecosystem, including government, civil society, media, academia, and philanthropy. They were invited to reflect on the strengths and limitations of different countervailing power sources relating to AI, and to identify opportunities for shared agendas and action.

“The more we trust these companies to become the nervous systems of our governments and institutions, the more power they accrue, the harder it is to create alternatives that honour alternative missions.”

- Meredith Whittaker, President of Signal

What have we learnt?

JRF is not an organisation known for working on technology; a central aim of the AI for public good programme was therefore to prioritise learning as an explicit outcome of this work – both for ourselves as an organisation and for our broader ecosystem.

Learning from this programme generally falls into 2 categories:

  1. What are we learning about AI that relates to, informs or influences our own work at JRF?
  2. What are we learning about the value of delivering work in this way?

Below we outline some of our learning.

The challenge with language

AI suffers from a language problem. As the Ada Lovelace Institute’s Imogen Parker points out: ‘There is no single, agreed-upon definition of “AI”. AI can refer to a scientific field of study, a series of products and services, or features of a product or service.’ Conflicting definitions mean that it is not always clear what exactly is being discussed. The pressing need for precise language when talking about AI comes up repeatedly, from academic and policy circles to broader civil society; ensuring we were all on the same page was also a challenge when designing this workstream.

Such imprecision means that the scope of what is widely agreed to be ‘true’ or ‘factual’ about AI is remarkably small, with few claims going uncontested. It is hard to find a glossary of AI terms without a disclaimer about the evolving nature of language in this space – and such nuance is welcome. However, these contestations around language and definitions are more than linguistic hair-splitting or academic jousting: they can have real consequences.

AI’s indefinability does little to widen or deepen public engagement in the sector’s development and does even less to improve AI literacy levels. Public discussions about AI often rely on narrow, hyperbolic and even harmful narrative framings – anthropomorphising AI, elevating tools to human-like status, or drawing on familiar yet inaccurate sci-fi metaphors as descriptors. So while there is growing public awareness of AI, this does not equate to widespread understanding of how such tools operate, nor does it enable lay people to critically question developments or separate fact from opinion.

For our own work on AI, a crucial lesson has been to recognise and respond to the sensitivity and debate around language, particularly as much of the content produced for this workstream has been aimed at non-specialist audiences. Whether by providing a glossary of key terms and definitions ahead of webinars, commissioning a set of bespoke illustrations to reflect the themes of the programme, being open to answering questions from our audiences, or keeping commissioned content as jargon-free as possible, we have tried to demystify AI wherever possible and make the topic more tangible.

AI adoption - hitting the target, but missing the point?

Turbocharging AI adoption across the public and private sectors is being posited as ‘the ultimate force for change and national renewal’. Last month, Prime Minister Keir Starmer agreed to adopt all 50 recommendations from the AI Opportunities Action Plan produced by tech entrepreneur and venture capitalist Matt Clifford. The Plan emphasises that Government must push hard on ‘cross-economy AI adoption’ as core to ‘how we think about delivering services, transforming citizens’ experiences, and improving productivity’; the penultimate recommendation is to ‘drive AI adoption across the whole country’. While the Plan itself makes no mention of civil society, calls to adopt AI at all costs are taking hold here too.

One way that some are thinking about AI for public good is to support ‘for good’ organisations to embed AI products and services into their operations to enhance their impact. There is, therefore, much focus on understanding (and in many cases, promoting) AI adoption in the third sector. The Charity Digital Skills annual survey explores AI adoption, and CAST has also conducted an AI survey to examine trends in the sector. We too commissioned work to understand grassroots and non-profit perspectives on generative AI. In general, trends reveal high AI adoption rates within the sector, coupled with poor governance practices. There is also a sense of urgency to adapt to AI as quickly as possible or risk falling behind the curve. In response, increasing numbers of organisations are convening the sector around AI adoption, providing workshops, hackathons and training. The premise seems to rest on the assumption that using AI to enhance social and/or environmental missions is a valid means to an end: more AI in such settings equates to more ‘good’. But should more AI be the goal?

A key lesson from our own work on AI is that, whilst focusing on AI adoption may seem like low-hanging fruit on the route to ‘public good’, it is a myopic approach. AI is a system comprising many parts, and focusing on adopting the end products or services misses a much bigger, more complex picture.

Instead, we must regard AI as technologies situated within socioeconomic, political and power-based hierarchies, and look beyond AI as simply a collection of tools that support grant applications or summarise texts. Such tools are products of global systems entwined with the extraction of natural resources, the exploitation of labour and workers, and the production of extreme wealth. These systems bear directly on many social and environmental justice missions and should therefore be of inherent interest to third sector organisations and their work.

However, perspectives from the sector on AI’s impact on society, power and politics are largely missing. This gap grows more acute when we look for voices emanating from organisations that do not have data, tech or digital aims. As such, we adapted our own AI programme to explore AI’s relationship to power and wealth.

There is real opportunity for progressive tech-focused organisations to build coalitions with relevant socio-environmental organisations; the appetite is there, as are the ideas. Take Rachel Coldicutt’s essay ‘People Not Code: The Case for a Digital Civil Society Observatory’, which argues for an evidence-gathering body working alongside government to ‘draw on the empirical knowledge and expertise of the broad field of civil society to anticipate, understand, and mitigate the ongoing societal impacts of technologies and ensure that innovation delivers public benefit and a stronger society’. Or look to Connected by Data’s Data and AI Civil Society Network, a brilliant example of cross-sector convening.

The third sector needs to shift its focus from blind adoption as a marker of success to a more critical understanding of AI systems as politically, economically, environmentally and socially situated.

Building the plane while flying

Exploratory work at its best is also emergent in nature. It can involve ‘building the plane while flying’: developing a strategy or approach whilst delivering the work, rather than setting out plans in advance.

Indeed, we did not develop a pre-set project plan, a list of activities or a CRM of relevant contacts for our AI work; nor did we wish to present a set of concrete solutions, technical policy briefings or a blueprint for the future. Instead, we proposed some loose themes and questions, with ideas and activity emerging progressively. Working in this way, however, presents challenges.

As our external profile and relevant networks related to AI expanded, so too did the range of opportunities to commission interesting work. It was a classic snowball effect – new connections and new ideas flowing from their predecessors. Yet, as interest in the work grew, it became challenging to balance the needs and expectations of our audiences.

On reflection, there was perhaps an underlying tension in our approach: we aimed to speak to and engage the broader social sector, most of which was absent from AI discourse, yet we did so by commissioning experts who had been writing, researching and thinking about data, technology and AI for many years. While the former group of non-AI experts welcomed our exploratory, ideas-based commissioning and convening, some in the latter group were keen for us to go further.

For some, the expectation was that commissioning essays and convening events was the preliminary stage of a broader policymaking process, after which we would adopt a policy position on AI, alongside advocating for, and funding, policy solutions or programmes. Such assumptions are hardly surprising, as this is the type of work our organisation is known for. They were keen to understand the longer-term strategy for this workstream and offered helpful challenge to the view that connecting ideas to new audiences is an end in and of itself. Managing these expectations and clarifying our position was key.

Yet, while our work externally was in many ways taking off, internally our own organisational knowledge of and approach to AI was stalling. Initially, most of the time dedicated to AI was spent looking outward, and we failed to capitalise on the slightly meta realisation that, when it comes to AI, we are a key audience for our own work. While we deliberately chose AI themes that felt relevant to JRF – narratives, the non-profit sector, public policymaking and power – we could have done more to socialise this work internally from the outset, ensuring colleagues were strategically engaged with the ideas commissioned and intentionally reflecting on how these could inform their own work.

Developing an AI workstream within an organisation that initially had little internal AI expertise meant that, paradoxically, JRF itself embodied some of the challenges facing our primary audiences – mainly individuals and organisations within the social sector with little AI expertise. Looking internally, many of us were disengaged from AI debates, perhaps falling victim to narratives that AI was for IT or technology experts. Some likely found the hyperbole surrounding the topic overwhelming and feared saying ‘the wrong thing’; others may have struggled to see how AI connected to their work.

Better socialising our AI workstream within the organisation could have built interest and buy-in earlier, raising organisational literacy (beyond getting to grips with new tools) and challenging some of the harmful narratives and assumptions that take hold and discourage engagement.

What next for AI at JRF?

The learning from our AI work to date has been integrated across the organisation in various ways.

JRF’s AI strategy

The irony of delivering an external AI workstream without interrogating JRF’s own organisational approach to AI eventually became too glaring to ignore. In response, we commissioned the team at Careful Industries to support us in developing an organisational AI strategy. The scope of the work involves:

  • understanding the current internal use of AI tools by JRF colleagues through an organisation-wide anonymous survey
  • understanding and assessing potential uses for AI at JRF and how such options align with our mission, values and risk appetite
  • developing an organisational strategy, framework or set of guidelines on AI use at JRF
  • facilitating an organisation-wide AI learning session for all JRF staff.

It is important to us that this piece of work does not sit solely with the JRF Executive; nor do we want activity to be delivered in a top-down format. An internal working group, comprising staff with a range of roles, was therefore recruited to collaborate with the Careful Industries team.

This work is ongoing, and we plan to reflect on the process and outputs in due course.

Policy and ideas

As the Government doubles down on its rhetoric that supercharging AI will drive economic growth for the UK, it is unlikely that AI will stray far from policy debates – from the impact on workers, marginalised communities and public services, to the influence on democracy, social cohesion and environmental and physical infrastructure.

While technology, or AI specifically, does not feature as a standalone policy area at JRF, there may be future opportunities to integrate a sociotechnical lens in our policymaking work exploring the dynamics and drivers of poverty in the UK.

Insight infrastructure

The aim of JRF’s Insight Infrastructure programme is to build a better picture of socio-economic inequalities in the UK by democratising access to high-quality data and evidence through open collaboration and innovation. The Insight Infrastructure team are integrating the learning from the AI for public good programme in various ways.

They will soon convene a Data and Technology event series focused on peer-to-peer learning, insight sharing and building collaborations. Events will work towards democratising access to and use of data, improving and linking up existing data, supporting work that creates or improves novel datasets from non-standard sources, and building community among those working at the intersections of technology and social justice. Some events will speak specifically to AI: for example, DataKind will host an event series for the third sector, building on JRF’s research on generative AI in the non-profit sector.

In addition, Grounded Voices is JRF’s ongoing, in-house qualitative research programme, designed to ensure our work is informed by what matters to people struggling to afford what they need. As part of this, the team has been collaborating with a partner that shares our values to test how AI can support this research.

Finally, the Insight Infrastructure team have funded the development of the Access to Social Care chatbot, AccessAva, which uses AI to provide clear, expert legal information and guidance on navigating the health and social care system. It guides people through a series of steps to identify their needs, before providing template letters and other useful information to help people take the right next steps.
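
To make the shape of such a guided flow concrete, below is a minimal, hypothetical sketch in Python of a step-based triage chatbot. The questions, needs and template letters are invented for illustration – this is not AccessAva’s actual implementation, content or logic.

    # A minimal, hypothetical sketch of a guided, step-based triage flow.
    # The questions and templates below are invented for illustration and
    # do not reflect AccessAva's actual design or content.

    STEPS = [
        ("Are you asking for yourself, or for someone you care for?", "role"),
        ("Have you already had a care needs assessment? (yes/no)", "assessed"),
        ("Are you challenging a decision about your care? (yes/no)", "challenge"),
    ]

    # Map an identified need to a template letter or guidance (illustrative only).
    TEMPLATES = {
        "request_assessment": "Template letter: requesting a care needs assessment",
        "complaint": "Template letter: challenging a care decision",
        "general": "Guidance: navigating the health and social care system",
    }

    def identify_need(answers):
        """Simplified triage logic: pick an output from the user's answers."""
        # 'role' is collected but unused here; a real flow might tailor wording with it.
        if answers.get("assessed") == "no":
            return "request_assessment"
        if answers.get("challenge") == "yes":
            return "complaint"
        return "general"

    def run_flow():
        answers = {}
        for question, key in STEPS:
            answers[key] = input(question + " ").strip().lower()
        print(TEMPLATES[identify_need(answers)])

    if __name__ == "__main__":
        run_flow()

In practice, an AI-backed service would replace this hard-coded triage logic with a language model and carefully checked legal content, but the guided, step-by-step structure – identify needs, then route to the right template or guidance – is the same.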

Conclusion

The AI for public good workstream was, in essence, an experiment in delivering cross-cutting, emergent work. Moving forward, it will not continue as a stand-alone programme. However, technology and AI are becoming ever more embedded across JRF – in our ways of working, our understanding of poverty’s dynamics and drivers, and the communities we convene.

