Report
AI for public good

Grassroots and non-profit perspectives on generative AI

Why and how do non-profit and grassroots organisations engage with generative AI tools, and the broader AI debate? 

Executive summary

As AI technologies evolve, they must do so in ways that benefit the whole public. To achieve this, everyone must be given the platform to participate in conversations about the design, use, and governance of AI. Non-profit organisations, particularly those that are grassroots, have so far not been adequately represented in such discussions or decision-making.

Recognising this, JRF commissioned We and AI to conduct mixed-methods research exploring:

  • Why and how do non-profit and grassroots organisations engage with generative AI tools?
  • What are the main drivers and key elements that shape this engagement?
  • How do these organisations see their role in shaping the broader AI debate?

Promises of generative AI’s efficiency gains are causing excitement

Non-profit and grassroots organisations are excited about the potential for generative AI to quickly increase their productivity, particularly in response to increasing economic pressures and rising service demands. Many are rapidly experimenting with and adopting such tools, hoping for significant benefits:

  • 78% of non-profit and grassroots organisations use generative AI tools in some capacity
  • 71% of organisations using generative AI tools do so to work more efficiently
  • 63% of organisations apply generative AI tools in advertising, marketing, PR, and communications
  • 47% of organisations use generative AI tools to save labour costs.

Tools are described as cost-cutting solutions in financially uncertain environments, helping to lighten workloads, remove administrative barriers, and enhance service delivery.

Our economic predictions have been bang on so far, sadly... AI is possibly the biggest opportunity we’ve seen in decades.

Interviewee

They also help address accessibility needs, acting as personal assistants for staff and making services more accessible to beneficiaries. Uses include generating closed captions, creating accessible written summaries, providing automated translations, and generating alt-text for images.

However, these successes are not evenly distributed:

While most organisations lack formal policies concerning generative AI usage, larger organisations appear more prepared:

  • 73% of organisations do not have policies or guidelines about AI in place.

Of those organisations with AI policies or guidelines, two-thirds have annual incomes exceeding £1 million.

Smaller organisations, particularly those without prior exposure to new technologies or high levels of digital literacy, or whose values conflict with generative AI use (such as climate concerns), either avoid these tools or use them without proper governance structures.

A discussion group participant said:

"We are a grassroots organisation, we have no plan as to how we’re going to use this [generative AI] at the moment. It’s just lending itself quite nicely, and we’ve got absolutely no thoughts as to how we need to control that."

A lack of governance could become increasingly concerning, leaving organisations exposed to risk. Similarly, short-term efficiency gains may not be sustainable, given the (often unforeseen) time needed to monitor outputs, manage compliance, quality and trust issues, and adapt to new tools.

Generative AI tools may also undermine organisational cohesion, trust and internal values:

Despite high adoption rates, organisations share many concerns regarding generative AI tools:

  • 70% are concerned about data privacy and security
  • 63% worry about accuracy
  • 57% are concerned about representation and biases.

Navigating ethical dilemmas around trust, disclosure and organisational values is also challenging:

  • 70% of organisations using generative AI say they 'somewhat' trust outputs from these tools
  • 15% of organisations disclose their use of generative AI.

Even when organisations are aware of the risks and want to act ethically, there is a lack of clarity around what this means in practice. For some, truly following their organisational values might mean stopping using generative AI tools altogether. Yet, this may prove difficult given the financial incentives to use such tools and the ease with which they are embedded across common digital platforms.

And they may distract from developing more appropriate, affordable, or transformational solutions:

Generative AI may not be the ‘silver bullet’ many ‘techno-solutionist’ narratives claim. As more organisations adopt these tools, short-term competitive advantages for early adopters may be harder to maintain. Organisations must still compete in funding cycles – as generative AI does not lead to extra funding becoming available. Prices may also rise once tools become embedded in workflows.

Positioning generative AI as solving the economic and operational challenges facing many non-profit and grassroots organisations may distract from pursuing other, potentially more transformational, solutions that address the underlying causes of these pressures.

Early use of generative AI may, therefore, not lead to significant long-term improvements to non-profit and grassroots organisations’ ability to deliver services, remain financially sustainable and tackle social and environmental challenges at their root.

Non-profits and grassroots organisations are largely excluded from wider AI discourse

59% of organisations are not engaged in the broader AI debate, citing a lack of awareness of opportunities for involvement, not being asked to get involved, and resource limitations as the main factors.

Some recognise the sector’s unique value in being well placed to advocate for communities at a time of significant change. However, there is limited consensus on its role, and little mention of the mechanisms at its disposal to co-ordinate efforts, hold government and industry to account, or influence decisions.

How [do] we [as non-profits] then get involved with key players to make those changes and be the people that shape AI?

Discussion group participant

Building connections, through convening organisations with both tech and social/environmental missions, is seen as beneficial. Such collaborations could set agendas outside the commercial influence of Big Tech. Funders can play a pivotal role in resourcing the sector’s ability to build resilience, capacity, and capability to explore AI’s longer-term implications.

Through understanding the impact of AI technologies on society, the non-profit and grassroots sector can become an active stakeholder in shaping the socio-technical dimensions of AI systems, rather than a passive consumer of technologies dominated by private sector interests. Without sector engagement, initiatives aimed at developing AI for public good cannot have an adequate understanding of what that might entail.

1. Introduction

As AI technologies evolve, they must do so in ways that benefit the whole public. To achieve this, everyone must be given the platform to participate in conversations about the design, use and governance of AI. The voices of non-profit organisations1, particularly those that are grassroots2, have so far not been adequately represented in decision-making about AI technologies at various levels.

Recognising this, JRF commissioned We and AI to explore the extent to which non-profit and grassroots organisations in the UK practically use generative AI tools, alongside how such organisations engage in the broader AI debate.

We sought to explore the following questions:

  • Why and how do non-profit and grassroots organisations engage with generative AI tools?
  • What are the main drivers and key elements that shape this engagement?
  • How do these organisations see their role in shaping the broader AI debate?

2. About this research

Both We and AI and JRF have previously written about the role of civil society in shaping the future of AI (Duarte and Kherroubi Garcia, 2024; Ibison, 2023). In this research, however, we consider our findings the start of a wider conversation, one that connects non-profit and grassroots decision-makers to share the reality of their organisations’ experiences with generative AI and wider AI developments. By considering the broader context in which each organisation operates, the research also allowed participants to connect with and learn from each other.

Research on the adoption of digital and AI technologies within the sector3 has previously been conducted at a quantitative level, for example by Charity Digital Skills (2023) and CAST (2024a). Our approach combines a short survey with in-depth discussion and interview elements to delve deeper into the specific motivations and experiences surrounding organisational use of generative AI.

Highly publicised public releases of generative AI tools have recently provided new possibilities for automation to all types and sizes of organisations (OpenAI, 2024). It therefore seems timely to focus on generative AI as a starting point for exploring views and experiences from non-profit and grassroots organisations that represent and support the lived experiences of many marginalised groups and communities across the UK.

The perspectives of this part of civil society are often represented by think tanks and academia, by groups already focused on influencing national discourse around technology or by a small number of polled individual representatives who may not be close to the needs and challenges of particular communities. As such, narratives surrounding civil society’s views of AI are incomplete and may not reflect the concerns, hopes and realities of many mission-driven organisations.

Given the massive impact of AI technologies on society, we focus specifically on how non-profit and grassroots organisations with social and/or environmental missions are approaching generative AI, alongside their engagement with wider AI discourse. We explore the underlying factors driving such decisions and perspectives, including a consideration of the economic reality and key challenges such organisations face. This uncovers how existing power dynamics, values and social contexts affect the narratives around, perception of and use of generative AI within such organisations, as well as their engagement in wider AI debates.

3. Background

Within an uncertain UK and global economic climate, non-profit and grassroots organisations are facing economic pressures that are prevalent across the nation. These are compounded by rising demand for many organisations’ services and by the nature of their funding models. The recent proliferation of generative AI technologies may therefore be seen as a solution to the sector’s increasingly stretched budgets.

AI context

Artificial intelligence (AI) refers to a broad range of technologies that produce outputs by uncovering patterns in huge datasets. The term ‘AI’ has referred to the research into producing such technologies since the 1950s, when a group of researchers proposed simulating the human mind with machines (McCarthy et al., 1955). In the following decades, AI has evolved to encompass many tools that we use in our day-to-day lives, such as spellcheckers, recommendation algorithms and face or voice recognition tools.

A recent change in the AI landscape was prompted by the launch of ChatGPT in November 2022. ChatGPT is an AI chatbot developed by OpenAI. It is estimated to be the fastest-growing consumer application in history (Hu, 2023). The tool catalysed greater public awareness about AI. Some estimates show that 95% of the public has heard of AI, and 66% can provide a partial explanation of what AI is (CDEI, 2024). The proliferation of such technologies has also resulted in the growing popularity of the term ‘generative AI’.

‘Generative AI’ refers to AI-powered tools that take user inputs (usually text) and output some form of media (such as text, image or sound). OpenAI’s ChatGPT, Google’s Bard and Anthropic’s Claude are examples of text-to-text tools. Text-to-image generators include Stable Diffusion and Adobe’s Firefly. The versatility of generative AI tools means that they can be efficient workplace productivity boosters (Mittal et al., 2023). Claims of generative AI tools’ capabilities and performance, however, do not always live up to their promises, as generated information can be limited, biased and false (Hsu and Thompson, 2023).

The launch of ChatGPT marked a new era for generative AI tools. No longer were such tools restricted to internal industry use, nor were they too expensive for the wider public to use. Such tools are now publicly accessible, cheap or even free. They were also praised as having the potential to significantly boost productivity and add £31 billion to the UK economy per year (KPMG, 2023). The UK’s AI Strategy is thus heavily focused on ‘productivity, growth and innovation across the private and public sectors’ (GovUK, 2022).

Although the promises of increased productivity and economic growth are causing much excitement, including among governmental bodies (GovUK, 2024a) and industry players (McKinsey, 2023), there are many unanswered questions regarding the regulation and governance of these emerging tools. The UK Government published a framework on how to use generative AI tools ‘safely and responsibly’ in 2024 (GovUK, 2024b). However, this framework is not legally binding, and the lack of enforced regulation has raised concerns that we might see scandals similar to the Post Office Horizon case (Benett, 2024), or the use of a predictive standardisation algorithm to grade A-level exams in 2020 (Coughlan, 2020).

With this context in mind, in what follows, we suggest some specific factors influencing the role of AI across UK non-profits.

Generative AI tools are heavily marketed to the sector

A key marketing strategy for AI companies is to stress that such tools boost productivity (Nielsen, 2023). For non-profit and grassroots organisations, gaining an edge in their workforce productivity is crucial. Generative AI tools can both automate simple, time-consuming tasks, such as data entry, as well as support more research-intensive tasks, such as preparing fundraising bids (Thirdsector, 2024). 

Fundraising platforms have already begun to integrate AI to enhance their donation pages; JustGiving, for example, has recently added generative AI features to its platform (Blackbaud, 2023). Saving time in these areas could allow non-profit and grassroots organisations to focus on non-admin or operational activities, including on-the-ground delivery or beneficiary engagement.

The UK appears to have mixed opinions about AI

Awareness, opinions and expectations about AI amongst the UK public are mixed. Whilst people think that AI may be beneficial in certain applications, including improving access to healthcare, their shopping experiences, and their access to learning or education, over one-third of UK adults think that AI will not bring an overall positive benefit to their lives (Harris et al., 2023).

Some groups are resisting the roll-out of these tools on the basis of violations of human rights and civil liberties, abuse of power, lack of transparency, and weakened regulations for technology companies (Big Brother Watch, 2023; No Tech For Tyrants, 2022). Others are worried about the loss of ‘humanness’ and highlight the need for regulation in order to be able to trust AI systems (Harris et al., 2023). With these concerns in mind, governments are developing strategies to ensure AI technologies protect citizens, society and the environment (GovUK, 2019; Kremer et al., 2023).

Lack of digital and data infrastructure to make the most of AI

Many UK non-profit and grassroots organisations lack the digital and data infrastructure needed to support the adoption of AI, such as computers with up-to-date software and reliable IT systems. In a survey of over 500 charitable organisations, 20% said their IT provision was poor and 22% said it was a regular challenge (Amar and Ramset, 2023). Consequently, organisations that understand the benefits of generative AI may simply be unable to use these tools effectively or securely. Notably, larger charities do not face this issue to the same extent (CAF, 2022b).

Wider civil society currently lacks a voice in AI discourse

To date, wider civil society has struggled to find its voice in discourse regarding AI and has therefore had very little influence in the UK’s AI landscape and developments. The absence of non-profit and grassroots organisations from such discussions is best exemplified by their exclusion from the UK Government’s flagship AI safety summit in November 2023. Hosted at Bletchley Park, the summit featured representatives from across the AI industry and government, alongside a minority of civil society organisations whose work already focused on AI (Ada Lovelace Institute, 2023a). This distinct lack of representation from the wider non-profit and grassroots community led to an open letter co-signed by over 100 individuals arguing that ‘the communities most affected by AI have been marginalised by this summit’ (The Open Letter, 2023).

Meaningful engagement with wider non-profit and grassroots organisations, particularly those whose activity does not focus on technology, could widen current discourse to focus beyond the technicalities of AI systems and consider the socio-economic and political contexts within which AI tools are developed and used (Dignum, 2019).

Non-profit and grassroots organisations also have a role in holding government and industry to account. As such, their input into AI discourse could serve as an accountability mechanism that shifts AI developments from solely prioritising profit to a range of advancements that serve much wider communities and social goals. Combining both technical and social perspectives can support the development of a deeper critique of the responsible and fair use of such systems.

Economic context

Much like the rest of the economy, UK non-profits and grassroots organisations have been subject to inflation-related macroeconomic pressures (Newton, 2023). The legacies of austerity and the pandemic have affected organisations across the nation. In addition, non-profit and grassroots organisations are subject to unique challenges due to how they operate, and the role they play in society (Newton, 2023).

Donations to charities are slowing down

Owing partially to a UK-wide reallocation of resources during the pandemic, £10.7bn was given to charity in 2021 compared to £11.3bn in 2020 (BBC, 2022). This trend of decreased public donations to charitable organisations continued beyond the pandemic, with 4.9 million fewer donations made in 2022 (CAF, 2022a). Considering that food price inflation did not peak (at 19.1%) until March 2023, the situation is much starker in real terms.

The UK’s giving habits have not recovered since the pandemic and, where they have, donations have largely been directed towards the war in Ukraine (CAF, 2022a). Whilst not all mission-driven organisations receive public donations, this trend still highlights the negative economic impacts of inflation and squeezed incomes on the sector more broadly.

Demand for services is ramping up

Public demands on non-profits are growing. The ongoing cost of living crisis and cuts to local services have made beneficiaries reliant on non-profits for basic essentials. The number of low-income households unable to afford food and behind on their bills tripled between 2019 and 2022 (Earwaker and Johnson-Hunter, 2023). In response, the number of emergency food parcels distributed by the UK’s largest network of food banks grew by 37% between 1 April 2022 and 31 March 2023 (Trussell Trust, 2024).

Inflationary pressures and decreasing donations affect non-profit and grassroots organisations’ budgets, and the continuing cost of living crisis increases demand for charitable services, stretching such organisations even thinner.

Recruitment is a challenge

For organisations across the UK, staff recruitment has been a challenge for some time; non-profits have not been immune to this. In August 2022, 80% of small firms faced difficulties recruiting applicants with suitable skills (Russell, 2022). On the labour supply side, the proportion of people not in employment and not seeking work has not returned to pre-pandemic levels (ONS, 2024). Both the COVID-19 pandemic and Brexit have been used to explain the UK’s skills shortage (Francis-Devine and Buchanan, 2023).

For the non-profit sector in particular, this has meant an estimated 7 in 10 organisations struggling to recruit staff (Larkham and Mansoor, 2023). Recruiting and retaining volunteers has also become challenging. Fewer people volunteered than normal throughout 2021 and into 2022 (CAF, 2022c). This has been the trend for over a decade but was accelerated during the COVID-19 pandemic (DCMS, 2023). Without volunteers, over 90% of non-profit and grassroots organisations with incomes under £50,000 would cease to function (Kenley and Larkham, 2023).

Trade-offs between funding and mission

Facing economic uncertainties, non-profit and grassroots organisations have increasingly admitted to delivering services that do not directly align with, or benefit, their mission, prioritising their financial security over their charitable objectives (Clay et al., 2024; Young and Goodall, 2021). As a result, funding applications have become significantly more competitive, creating a climate of competition over collaboration (Young and Goodall, 2021).

In summary, a confluence of complex economic, social and technological factors serves as a backdrop and motivation for this report. Whilst economic pressures affect almost all sectors, non-profit and grassroots organisations face many unique challenges that make them particularly vulnerable. In a sector where cost-cutting may mean the difference between continuing to help people and closing for good, such organisations could be prime beneficiaries of the sort of automation generative AI tools can provide.

4. Collecting the data

This research followed a mixed methods approach over 3 main phases: survey, discussion groups and interviews. All data was collected between February and April 2024. For full details of the research methodology please see section 8 of this report.

Organisations with a broad range of social and environmental missions were invited to take part in the survey, through targeted approaches via social media, email, relevant third-party newsletters and phone. Organisations were targeted based on variations in size, annual income, geographical location and mission area to ensure voices from across the sector were present in the research. Organisations with missions solely related to the adoption of digital and data-driven technologies, or tackling digital inequities, were not included in the research. It was assumed that such organisations were more likely to be already engaged in issues relating to AI.

In total, 51 different organisations completed the survey. The survey included questions on formal and informal organisational use of generative AI, potential benefits and concerns regarding generative AI, and awareness and engagement in the broader debate around AI. See the appendix for the full survey results.

Three online discussion groups were subsequently carried out, bringing together decision-makers from organisations that had completed the survey. In total, 16 individuals across 15 different organisations attended. Discussion groups were guided by a series of questions to explore survey responses in more detail. To read the discussion group guide, please see section 8 of this report.

Finally, 5 individual online interviews were also conducted to explore and clarify findings from the survey and discussion groups, as well as to dig deeper into some of the themes that were identified from the initial findings. To read the interview topic guide, please see section 8 of this report.

The data presented in this report is a combination of the findings from each phase.

Research limitations

This research provides a snapshot of the activities and opinions of a sample of non-profit and grassroots organisations over 3 months in 2024. We identified some limitations to this work.

Generalisation

We intentionally invited a specific group of organisations with social and/or environmental missions. The overall sample size was small, limiting the generalisability of the findings, which cannot be taken as representative of the whole sector. Despite efforts to stress that this research was interested in both the use and non-use of AI, the survey attracted relatively few organisations that do not use generative AI.

Some organisations declined to participate, citing no generative AI use and uncertainty about how they could then contribute to the research. The lower engagement from non-users may have skewed the findings towards higher levels of use than exist across the sector.

Focus on generative AI

The term ‘AI’ is often broadly used to describe any digital technology in public discussions. This research explicitly focuses on generative AI, rather than any AI tools, in order to narrow the scope. Generative AI tools are also easily accessible to non-expert audiences. Yet, during the research, many participants used the terms AI and generative AI interchangeably.

We have still included data where participants talked generally about AI to highlight that people’s use and understanding of these terms are constantly changing. With the increasing integration of generative AI tools in existing software, the definitional boundaries surrounding such terminology will continue to evolve.

5. Key findings

This research finds that 78% of organisations surveyed report using generative AI in some capacity, although use cases varied considerably. Discussion groups and interviews explored the nuances regarding factors influencing the use or non-use of these tools. Organisations’ engagement with external AI discourse was also examined.

We analysed data from all research methods, and have grouped key insights related to the research questions into 4 thematic areas:

  1. key drivers of generative AI use
  2. generative AI governance issues
  3. generative AI readiness
  4. perspectives on the use and design of AI.

Key drivers of generative AI use

Survey results show a clear trend of organisations wanting to increase their efficiency and productivity. In discussion groups, the topic of accessibility also surfaced. Here, we explore these 2 key drivers of AI use in more detail.

Generative AI for efficiency and productivity

In the survey, 71% of organisations state that their main motivation for using generative AI tools is to work more efficiently; this is followed closely by the desire to save labour costs by delegating tasks to these tools. Generative AI is being deployed across a variety of different tasks, most notably in advertising, marketing, PR and communications, alongside research and development. Use cases ranged from producing text for reports and media content, and helping with admin and project management, to generating ideas on a new topic or creating images.

Many organisations are excited by the opportunity to use generative AI tools, which they frame as free, easily accessible cost-cutting devices in a highly uncertain financial environment. Some are using generative AI with enthusiasm, seeing these tools as able to lighten their workload and help deliver their services.

A local befriending organisation said:

“As you can imagine, there’s about 160 people we’ve got to call every week, and we’ve got a team of about 20 volunteers. So, we just use AI to generate prompts for chat, so that they’re not asking every single week, ‘Did you watch Coronation Street last night?’”

An interviewee said:

“We see AI as a massive opportunity, as our economic predictions have been bang on so far sadly and that we will not see any improvement in funding in the foreseeable future [...] AI is possibly the biggest opportunity we’ve seen in decades.”

Additionally, these tools are often seen as something novel to explore rather than being adopted with a clear rationale, used because ‘it’s there, rather than by design’ (discussion group participant); just under half (48%) of survey respondents are using these tools to experiment. While smaller organisations tend to use free and widely available tools, a couple of large organisations are using bespoke AI tools, either developed in-house or procured specifically based on their research and data analysis capacities.

Generative AI use is often driven by changes to default settings in existing software, as some generative AI tools are becoming embedded into existing platforms that organisations use. Embedding AI tools within existing platforms is not a new trend; such an approach aims to enhance users’ experiences of platforms or tools, with promises of saving time and sparking inspiration. This ‘there-by-design’ concept may enhance people’s curiosity to test out or experiment with these tools. Yet it also raises concerns about agency and informed decision-making.

Over time, it may become difficult for organisations to make critical decisions or opt out, as changing platforms might be time and resource intensive, made worse by the fear of losing existing work due to a lack of collaboration or interoperability between different software.

Generative AI and accessibility

Accessibility was a recurring theme in discussion groups; many organisations consider generative AI tools as helping to ‘level the playing field’ by removing barriers in administration and service delivery. The accessibility benefits of AI are outlined in various ways.

Organisational staff with accessibility needs are using generative AI tools to support them to work more effectively, in similar ways to a personal assistant.

A discussion group participant, director of a community organisation, with no income, said:

“We [the organisation] are very new, so I [as the founder] use AI every day. It’s my personal assistant. I also have disabilities, so I use AI to help me in regards to being more independent with running the organisation. So, for me, the benefit of using it is that I’m actually able to deliver for my members, I know I wouldn’t be able to deliver as well if I didn’t have the support of AI.”

Others can make their services more accessible to their beneficiaries by using generative AI tools. This includes generating closed captions, creating written summaries in accessible language, using automated translation for non-English speakers, and generating alt-text for images posted on social media accounts. Some organisations have even extended their services using generative AI tools. For example, one organisation has made its AI chatbot available 24/7, meaning beneficiaries, many of whom work multiple jobs and cannot reach the organisation during working hours, can access information about its services at any time.

A discussion group participant, head of AI/digital for a large charity, said:

“A lot of people who might be working two jobs or working one job aren’t free to get on the phone for hours and hang on hold or go to an office during working hours. So, if you have the ability to serve people digitally, you have the ability to serve people at the time that suits them, which may not be office hours.”

Yet across all the accessibility benefits that participants cite, wider economic constraints drive the reasons for use. Generative AI tools are seen as solutions to save costs, providing services that would normally be financially unobtainable for many organisations, such as translation services, proofreading or night staff. Organisations make efficiencies by saving time when delivering services and by not recruiting additional staff. Nevertheless, we can question whether such cost efficiencies and accessibility benefits are overshadowing the ethical considerations organisations might otherwise weigh when deciding whether to use generative AI, particularly whether the use of these tools aligns with their organisational mission.

Considerations

Our exploration of how and why non-profits and grassroots organisations are using generative AI has led to the following recommendations for consideration.

Non-profit and grassroots leadership

  • Evaluate how financial or budget resourcing and decision-making might change in the context of a growing reliance on generative AI tools, if prices are raised or access to such tools is restricted.
  • Consider the impact on longer-term service delivery and staff development of using generative AI to replace human interactions.

Funders and/or supporting bodies

  • Consider to what extent efforts to support non-profits to use generative AI to increase productivity might be better spent on finding solutions to the challenges which are driving the need for greater efficiencies.
  • Consider what support or mitigations are needed for organisations who are not using generative AI or are not able to use it as effectively as others.

Sector collaboration

  • Work together to model how the longer-term value of generative AI can be calculated, rather than focusing solely on instant time or cost savings. For example, if generative AI is being used to speed up both bid writing and bid assessing, does this result in time savings if more bids need to be written and evaluated to compete for or award funding?

Generative AI governance

In order for generative AI to be used responsibly within any organisation, governance structures, in the form of ethical principles, frameworks, policies and guidelines, are needed to safeguard stakeholders and protect the organisation (Mucci and Stryker, 2023). In 2021 the UK signed up to UNESCO’s international framework to shape the development and use of AI technologies, based on a human rights approach to AI (UNESCO, 2021). 

The UK takes a ‘pro-innovation approach’ (GovUK, 2023) to AI policy, which means that, unlike the EU, it favours relying on existing regulators to police organisations’ use of AI rather than passing new laws (European Parliament, 2024). It is therefore largely left to individual organisations to determine how to use AI responsibly. Against this background, we explored how non-profit and grassroots organisations are approaching ethical challenges related to the use and governance of AI.

Generative AI policies

To understand how generative AI is being used in organisations, we looked at the ways it is managed at an organisational level. There is a large difference within the sector concerning which organisations have generative AI policies and guidelines in place, and which do not.

Most organisations (73%) surveyed do not have AI policies or guidelines in place. Organisations with £1 million+ in income are more likely to have AI policies or guidelines already in place, compared to organisations who have smaller incomes.

Organisations with larger annual incomes can spend more time and effort on creating AI policies or dedicate additional resources to commission external support with this task. This in turn can lead to more strategic and defined organisational use of generative AI. 

A discussion group participant, head of AI/digital at a national charity with £1 million+ income, and generative AI policies in place, said:

“We have a statement on how we use it. We have some risk assessments on what use cases we might actually have.”

Larger organisations also have the resources to create specialist roles dedicated to AI and digitalisation. Such roles are essential in creating direct lines of leadership and accountability between strategy and implementation. As AI cuts across, and can be integrated into, many functions of an organisation, dedicating responsibility and resources to a specific AI role promotes and allows for a wider understanding, integration and adoption of relevant tools across an organisation. 

A discussion group participant, head of AI/digital at a national charity with £1 million+ income, and generative AI policies in place, said:

“I was at an event on AI in recruitment, where we were looking at how AI is being used in the systems that we’ve got for the front-end of our recruitment and how that ties into our EDI practice. So, we’re trying to kind of embed it a bit in all the discussions, rather than have it separately, because it is definitely the scary new thing that people are either really excited about or really terrified about.” 

Smaller organisations, however, tend not to have policies or guidelines in place, engaging with generative AI in experimental ways without much restriction. Smaller organisations also highlight how they lack the capacity, resources and knowledge needed to create relevant AI policies and guidelines. 

A discussion group participant, director at a local organisation with £500,001–£1 million income, and no generative AI policies, said:

“We are a grassroots organisation, we have no plan as to how we’re going to use this [generative AI] at the moment. It’s just lending itself quite nicely, and we’ve got absolutely no thoughts at the moment as to how we need to control that.”

A discussion group participant, senior leader of a specialised team at a national charity with £1 million+ income, and generative AI policies in place, said:

“I kind of need more capacity in my team. So, having someone who focuses on AI, and maybe that’s somebody just in general, in our organisations, about innovation. Having someone to look outwardly, to work with us, because it’s another thing that’s added to my team. And it [AI] is fine, it’s exciting but we have a tiny bit of time to be able to work with it, which feels not enough.” 

The divide between large and small organisations is clear and falls along economic lines. Larger organisations give greater scrutiny to inform organisational AI use, whilst smaller organisations tend to use AI without any organisational frameworks, often with personal judgement serving as a guide.

Balancing values with perceived imperatives

Many non-profits struggle to navigate the ethical dilemmas that surface when exploring generative AI use.

While there is a broad consensus among participants that the sector as a whole is ‘values and ethics driven, and any decision making should be flowing from this understanding’ (discussion group participant), many organisations are still using generative AI, despite being aware of the ethical concerns surrounding these tools.

The survey reveals that organisations share multiple concerns about generative AI. Interestingly, organisations that are already using generative AI still highlight an array of concerns about such tools, ranging from data privacy, security and accuracy to representation, bias in data and automated decision-making.

Rather than being unaware of the potential risks, it appears that, for some, the economic and efficiency advantages of using such tools may outweigh ethical concerns.

A discussion group participant in an organisation working with immigrant and refugee communities, using generative AI daily, with no generative AI policies, said:

“I feel like I still don’t have that trust yet with AI, maybe because I don’t know enough about it or we haven’t been told enough, just how secure our information is when using it. We’re quite apprehensive, but we still do want to implement it because it does help quite a lot with our admin and speeding everything up, you know, and helping as many people as we can because it cuts down a lot of time.”

Wider economic pressures appear to be so strong for some that generative AI is framed as a ‘necessity’ to support organisational survival. However, we can also question whether organisations are broadly concerned about generative AI in general, or whether they are worried that they lack the skills and resources to mitigate potential risks arising from these tools, meaning that their current usage feels uninformed or irresponsible.

A discussion group participant in a grassroots organisation, using generative AI daily, with generative AI policies in place, said:

“AI replaces at least 3 people... ethically and morally, I don’t feel great about it, but due to the lack of support and funding for charities, the choices I have are: there’s no organisation or to use what’s there.”

Some organisations are using generative AI, despite recognising how such tools are in direct opposition to their organisational mission. For some, this means limiting the use of such tools to certain tasks and avoiding some altogether.

An interviewee in an organisation working with immigrant and refugee communities, with no generative AI policies, said:

“We’re a very values-driven organisation. So whenever we’re designing something, whether that’s a piece of content or those translations it has to kind of encompass a lot of nuance... if I was to type ‘human trafficking’ into Canva, using the AI-generated image function that it has, I know it will come up with something that’s quite voyeuristic, quite explicit, and that’s not the kind of stuff that we use in our work. In fact, we’re very anti that and campaign against the use of that kind of imagery.”

Others are using generative AI tools frequently, despite acknowledging wider systemic damage that contradicts their organisational purpose; such tension is clearly a difficult trade-off. An environmental charity voiced serious concerns about generative AI tools’ environmental damage, resulting in increased water and energy usage and carbon release. Yet, this organisation still uses generative AI in their work daily.

A discussion group participant in an environmental charity, using generative AI daily, with generative AI policies in place, said:

“We are really concerned about the significant increases in water and energy usage that’s associated with the data centres that host these AI tools that we use.”

Whilst just under a quarter of organisations surveyed highlighted environmental concerns (24%), discussion groups suggested that this relatively low figure reflects many organisations not being aware of AI’s environmental footprint.

Five organisations surveyed stated that they are not using generative AI tools; the main reasons cited for this are that their service users would not like it and that they are unaware of the potential benefits.

Only one organisation, which took part in an interview, is actively avoiding generative AI altogether. They see an inherent contradiction between using such tools and their organisational values, expressing ethical concerns about how biases in the data used to train such tools may perpetuate existing biases in society.

An interviewee in a grassroots organisation working on community building and social justice, with generative AI policies in place, said:

“We have a sort of policy against the use of AI. Broadly speaking, we view AI as a potentially quite negative aspect of current cultural development, and so we plan to sort of policise [create a policy for] the lack of use of it.”

Overall, most organisations were aware of the many ethical issues and concerns relating to generative AI use. Despite these, most still engaged with such tools. This contradiction between organisational actions and values may reinforce the argument that, for many, the instant benefits these tools provide are too good to ignore, particularly when leading organisations in times of economic precarity.

The intention-action gap

The tension between the use of AI and an organisation’s missions, values or objectives is not too distinct from what has been dubbed the ‘intention-action gap’ in for-profit sectors (Skeet and Guszcza, 2020). This gap occurs when an individual’s values, attitudes or intentions do not match their actions (Aibana et al., 2017). The gap between organisations’ desire to act ethically and their understanding of how to follow through on their good intentions has been a popular point of discussion in various circles, from sustainability to innovation (Aibana et al., 2017).

In the AI field, this gap is striking. AI ethics has been gathering interest from governments, industry (McKay, 2023), academia (Lynch, 2023) and the media (Corrêa et al., 2023) in an attempt to ensure that AI tools are deployed safely and responsibly; yet when it comes to putting ethical practices into action, there is uncertainty about how to follow through on these principles (Munn, 2023). This may be because AI is a fast-moving, emerging technology whose full capabilities are not yet realised and whose societal harms are not fully understood, measured or mitigated. It is therefore interesting to explore the extent to which an intention-action gap regarding AI exists within non-profit and grassroots organisations.

Whilst there is an expressed sense of inevitability regarding AI developments, participants struggle to understand and reconcile the technical benefits of using generative AI tools with the moral dilemmas that also emerge.

A discussion group participant, head of AI/digital at a large charity with generative AI policies in place, said:

“I think there’s a lot of tensions... I do ethics assessments and algorithmic auditing, and I think what’s the real challenge is trying to weigh up different ethical concerns against each other. It’s actually really difficult because you could say, on the one hand, this might help more people. But if it helps more people but increases the outcome gap even slightly, is that okay? Like, how comfortable are we with that?”

A discussion group participant, senior manager at a charity with no generative AI policies, said:

“And this whole idea of arguing against something that’s coming and then it’s more a case of then having conversations on how best to use it, rather than trying to stop the inevitable... That’s a societal thing because you know it’s coming, but how do we best apply it? How do we best use it ethically and morally?”

Individuals find it difficult to have organisational conversations about AI ethics and guidelines when their teams have varying levels of digital and AI literacy, or when team members are particularly fearful of new technologies.

A discussion group participant, senior leader at a charity with no generative AI policies, said:

“There’s a lot of debate going on at the moment, not meaningfully because, ultimately, people are sort of brainstorming. It doesn’t feel like a debate... that’s going to have a meaningful action at the end of it, other than just the general, trying to raise understanding of what the hell that [AI] means, and what does that mean for us as an organisation?”

There has also been some criticism that the terms ‘responsible AI’ and ‘trustworthy AI’ are being used as buzzwords. Such terms are ill-defined or undefined, with little consideration as to how they are to be implemented and measured (also known as ‘ethics washing’), yet their usage persists, allowing individuals and organisations to be perceived as value-driven (Browne et al., 2024). Whilst this framing and subsequent critique is especially prominent in the technology sector (Browne et al., 2024), we see that it has also been carried over into non-profit and grassroots fields. Some organisations in this research voiced their concern that organisational AI guidelines are perceived as simply tick-box exercises that do not lead to meaningful action.

A discussion group participant, senior leader at a charity with no generative AI policies, said:

“There’s the whole ethical piece around that, that comes in. We don’t want it as a tick-box, we want it as something meaningful... [AI ethics] is being used as a buzzword, ultimately. And I think we’re going to hear it again and again over this year, just because of the political cycle.”

Even if organisations are concerned about the potential dangers of generative AI tools and want to act ethically, a lack of clarity around what this would mean in practice is a challenge. Using generative AI in line with organisational values or mission becomes even harder when the majority of organisations in this research do not have generative AI guidelines or policies in place. For some organisations, truly following their organisational values might mean stopping using generative AI tools altogether. However, this may prove challenging given the financial incentives to use such tools and the ease with which they are embedded across common digital platforms.

Important questions remain as to how non-profit and grassroots organisations can develop realistic approaches to engaging with AI ethics, given the lack of resources or internal expertise. Yet it is paramount that the sector critically engages with such questions, developing strategies that explore the extent to which these tools can be used in line with their organisational values or mission statements.

Generative AI and trust

The question of trust in the context of AI is contentious. What trust means to different AI stakeholders, and how this meaning is applied in different contexts, will vary significantly. For AI innovators, increased sales may be an indicator of people’s trust in their products or services. For users of AI tools, trusting AI may be deeply entwined with wider narratives on AI, data, and technology.

Trust is also a question of accuracy: how accurate or robust the outputs from AI tools are. Generative AI tools are often described as being prone to ‘hallucinate’, a term describing when a generative AI tool produces outputs that are false or inaccurate (Maleki et al., 2024).

Yet the term ‘hallucinate’ is somewhat misleading, as it implies that generative AI understands language or the prompts it is given but is occasionally inaccurate. In fact, any outputs that do not reflect an external, objective source of truth simply reflect the tool’s function: predicting ‘the likelihood of different strings of word forms’ based on the texts on which the system has been trained (O’Brien, 2023). This prediction is a feature of such tools that cannot be eliminated; as such, users cannot trust that generative AI tools will always produce accurate outputs.

However, non-profit organisations need to be trusted by their service users, beneficiaries, donors, funders and the wider public (Lalak and Harrison-Byrne, 2019). Therefore, the use of generative AI by the sector must not undermine its reputation. In what follows, we delve into 3 trust-related themes present throughout the research:

  • trust in generative AI outputs
  • trust in how data is processed by generative AI tools
  • disclosing the use of AI tools.

Trust in generative AI outputs

Of the survey participants using generative AI, 70% said they ‘somewhat’ trust the outputs of generative AI tools, 23% said they rarely trust them, and none completely trust them.

When exploring this further, we found that trust was heavily dependent on the context in which the tools were used. This reflects research findings from other organisations. The Public Attitudes to Data and AI survey of over 4,000 adults in the UK revealed that how trustworthy AI is considered depends on the organisational context within which such tools are deployed (DSIT, 2023).

For some common tasks, such as generating a funding bid, or drafting social media posts, many organisations are critical of the outputs produced and review the material. This is largely due to concerns around biases, accuracy or misinformation, as well as ensuring their organisational ‘ethos’ is prominent within the generated text.

A discussion group participant, senior leader of an environmental charity with generative AI policies in place, said:

“Misinformation is a big concern for us, in terms of the credibility of what we put out with generative AI tools.”

A discussion group participant, director of local organisation working on health equity with no generative AI policies, said:

“When you’re trying to apply for funding, you want the human, emotional aspect to try to come through a bit more in the applications... generative AI is going to be using, perhaps, fancier language which may not or may sway the person looking at your application.”

If an individual is confident in their knowledge and understanding of the context within which generative AI is being used, they can judge the accuracy and quality of the output, before deciding if they trust the tool.

An interviewee, senior leader at a national charity with generative AI policies in place, said:

“if I was to use AI as a partner in say quantum physics. This is a topic which I know very little about. I would not feel confident that I could trust the output that I could get out of the AI there because I would have more difficulty verifying it and I wouldn’t be able to... interpret what was coming back... I can only trust it where I’m supervising it to do tasks where I’m pretty confident what good looks like.”

Furthermore, fears of losing the trust of others due to not being seen as ‘authentic’ guide some organisations’ generative AI use.

A discussion group participant, director at a local community organisation with no generative AI policies, said:

“I would not actually want to use it for a website either, based on the level of trust that we actually want persons to have in us when they have a look at what it is that we are offering. We don’t necessarily want a situation where people look and all they can see is something that looks completely fake.”

Reviewing and amending the inputs and outputs of generative AI tools requires additional time and expertise. This process is often referred to as ‘human-in-the-loop’. However, as this phrase seems to ascribe responsibility to the machine, it can perhaps better be thought of as what American computer scientist Ben Shneiderman refers to as ‘humans in the group; computers in the loop’.

Trust in how data is processed by generative AI tools

There was a lack of trust in generative AI tools deployed within sensitive contexts, such as working with survivors of domestic abuse or appealing an immigration decision. The level of scepticism for use in such cases is partially connected to issues of consent around personal information and privacy, especially when external parties are involved.

A discussion group participant, head of AI/digital at a national charity with generative AI policies in place, said:

“People are really, really worried about disclosing data; where that data is being shared, what kind of services outside of [the organisation] that that information might end up with. I think it’s a common theme with AI.”

A discussion group participant from a local organisation with no generative AI policies said:

“I always say, ‘Anonymise it. Don’t use any names when you’re using the AI to help you write something. And then go back, as a person, put in the names and the specific identifiers later’.”

There is also an assumed relationship between trust and charitable giving (Chapman et al., 2021). Some non-profit organisations may seek to demonstrate trustworthiness by not using AI tools, protecting their reputation and credibility and, ultimately, attracting more donations or funding. Again, we see the influence of financial drivers on AI use. However, decisions on the safe and appropriate contexts for generative AI use often depend on the knowledge, skills and understanding of whoever is using it, particularly within organisations with no formal policies or guidelines in place.

An interviewee, a senior leader at a local community organisation with no generative AI policies, said:

“So things like funding bids we don’t use AI... I mean, it would be very helpful, it would save a lot of time, but at the moment I don’t think AI yet can reflect [the authenticity of the organisation]. So, I think the danger of fully embracing AI is we would potentially lose some people out of fear or out of misrepresentation. I think that’s the biggest concern from a trustee point of view in our organisation.”

Whilst many organisations do not blindly trust generative AI tools, and draw some boundaries around use based on trust, it is unclear how accurate and appropriate such decisions are. For organisations without organisational AI policies, these choices are dependent on an individual’s level of digital privilege or expertise, personal judgements or relative interest in AI.

Disclosing the use of AI tools

When examining whether organisations disclose when they use generative AI tools, there is a mixed picture. Organisational policies on disclosure (making clear when content or output has been artificially generated, manipulated or used for decision-making) were key discussion points, with diverse views and understandings of when and how to disclose generative AI use (Ali et al., 2024). Given that formal organisational policies on generative AI are largely lacking, this also opens a grey area with regard to disclosure.

Of the organisations surveyed, 15% disclose their use of generative AI tools. Many see disclosure as a key part of their organisational values, building trust through transparency.

An interviewee at a local organisation with no generative AI policies said:

“We value authenticity so much that it [not disclosing] would damage our relationship with them [our beneficiaries] if they felt we were not true to our ethics. So I think there is a very important part of authenticity when thinking about using AI and being strategic and how we use it.”

Another 38% feel that disclosure is not relevant to their organisation’s use of AI tools. Here, individuals often reported that they do not disclose use when the AI tool is already embedded into existing software, or if the tool is used for internally facing activity.

A discussion group participant from a community building organisation, with no generative AI policies, said:

“Do people disclose that they’ve used a spell-checker in a document? Do they disclose that they’ve had an alarm to remind them when there’s a call?... how many tools do we use during our day, that we never even think twice about needing to disclose to somebody that we’re using that? So, I think it is a good question and I think it depends on how people are using it.”

For some organisations, the decision on whether to disclose use also depends on the perceived risk of the context in which the tools are used. For perceived lower-risk tasks, such as generating ideas, many deem disclosure unnecessary.

An interviewee from an organisation working with immigrant and refugee communities, with no generative AI policies, said:

“I don’t think there’s anything on our platform that has been, had such an influence from AI, that we would need to disclose it… it would kind of be used to inspire something but even we would then go and change most of it, so the AI influence on it would be, maybe 1%. So no, we don’t.”

Some organisations actively reject the need to disclose AI use altogether, including when producing written content. Many see disclosure in these cases as undermining their efforts in overseeing and editing the content.

A discussion group participant from a local grassroots organisation with no generative AI policies said:

“No, I don’t disclose it because I work at it pretty hard. You know, it’s not a copy-and-paste job. It’s like having an assistant who’s doing a portion of the thinking. I’m still pretty much doing the thinking, I’m leading with the questions... I feel like it’s just helping me to sort out my thinking. Like, I’m working pretty hard, so I still think that’s just my work.”

Such divergent views on disclosure tend to be based on differences in organisational beneficiaries. If organisations believe that their beneficiaries are vulnerable and/or may be sceptical about technology, they tend not to disclose their use. Many focus on what AI enables them to do, rather than the fact that they use AI to do it.

A discussion group participant from a local organisation with no generative AI policies said:

“I have fears about [disclosure], mainly because... we currently have so many beneficiaries and members of the community that are really fearful of technological change. Many of our beneficiaries are older and potentially vulnerable and... I worry that they would see something like that and be frightened about the way that our charity is operating.”

Other organisations working on public campaigning, influencing or policy, which may engage with a broader range of audiences, are more concerned about the consequences of not disclosing their generative AI use, fearing that non-disclosure would discredit their organisations. The output such organisations create using generative AI tends to reach larger audiences, making disclosure even more critical to being seen as trusted organisations.

An interviewee from a campaigning organisation with generative AI policies said:

“We probably never want to be opaque about our use of AI, particularly in image generation... I could see how it would be tempting in some contexts... for example, we want to present ourselves as being a more diverse organisation than we actually are and we’re going to generate AI people [for organisational flyer]... I feel like that would be skating us right over the edge of what would be acceptable to beneficiaries and supporters.”

An interviewee from a community building organisation with generative AI policies said:

“Public trust in charities is absolutely fundamental to what we do... we can behave in a way that does not damage that trust. So when you’re using things like AI imagery, [other organisation] have already run into a little bit of a trouble where they used [AI-generated images]. Everyone looked at it and assumed it was real and then got a bit hissy when they realised that it wasn’t real.”

There is a strong relationship between disclosure and trust, and while some organisations are aware that disclosure connects to their values, this is not the case for all. Some may not have considered the connections between disclosure, trust, transparency and authenticity; others may simply see non-disclosure as the right thing to do. It is important to clarify, however, that these organisations do not see non-disclosure as an act of deception. Rather, non-disclosure is presented as an act of care, to avoid panicking beneficiaries who may be fearful of technologies.

It is not the purpose of this report to evaluate organisations and identify ‘correct’ practices, and we acknowledge the diverse circumstances that non-profit and grassroots organisations operate within. Many often face tough decisions in using generative AI, and disclosure in other sectors is not common practice. Yet the question remains of what the future of disclosure might look like for trusted organisations, as AI becomes more embedded in existing tools.

Considerations

Our exploration of non-profits and grassroots organisations’ governance structures regarding using generative AI, including policies, disclosure and trust, has led to the following recommendations for consideration.

Non-profit and grassroots leadership

  • Consider having someone with responsibility for AI governance across all elements of administration and service delivery.
  • Prioritise identifying with communities how the use of generative AI tools might affect trust and service delivery, particularly in relation to transparency.

Funders and/or supporting bodies

  • Consider whether a service could be provided to support smaller organisations in developing relevant generative AI policies without adding too much operational burden.
  • Build on existing resources for the sector specifically as it relates to generative AI, to ensure that even smaller organisations are aware of the potential risks and benefits and have the guidance they need.

Sector collaboration

  • Continue to discuss how to navigate the trade-offs between values and imperatives in the ways generative AI is used, not only as purely internal decisions, but also as ones on which the sector as a whole can take positions. Consider developing frameworks which do not preclude supporting organisations with different priorities and missions, such as organisations who do not wish to use generative AI, and those who have found genuinely transformational uses for tools.
  • Consider what can be learned (particularly from environmental justice organisations) to develop sector-wide approaches for responding to the environmental impacts of AI.

Generative AI readiness

AI readiness has been described as an organisation’s ability to use AI in ways that add value to the organisation (Holstrom, 2022). It can include areas such as digital and data infrastructure, skills, organisational culture and mindset. Previous research has surveyed charities’ use of AI and their capacity to adapt to and benefit from it (Amar and Ramset, 2023). We wanted to dig into perceptions of what is needed to adapt specifically to generative AI, and how organisations are managing this challenge. Specifically, we explore:

  • training on generative AI
  • leadership and frontline perspectives.

Training on generative AI

The pandemic catalysed changes in how non-profit and grassroots organisations used digital technology to deliver their services: 82% of organisations said they had to invest in new technology to adapt to the pandemic, which drove demand for digital skills among staff and volunteers (NCVO, 2021). Generative AI is now also being used to help address the increasing economic pressures that organisations are facing. A similar trend can therefore be predicted, in which organisations look to upskill their workforce to use generative AI more effectively. This also reflects the narrative that there is a race to use AI tools to secure one’s job: that ‘AI won’t replace humans, but humans with AI will replace humans without AI’ (Lakhani, 2023).

Previous research has also highlighted the lack of AI training; in CAST’s AI survey, 51% of respondents had not received any training or support around AI (CAST, 2024b). Through our engagement with charities, we uncovered 2 axes of discussion when it comes to training:

  • AI training content
  • AI training creators.

AI training content

The majority (69%) of organisations using generative AI tools have not received formal training. Despite this, most organisations expressed a need for such training. When asked about the content of such training, 2 perceived requirements emerged. Some organisations want operational training on how to use generative AI tools:

A discussion group participant, director at a disability justice organisation, said:

“I want a person talking me through the first steps of, ‘Okay, you sign up, and then these are the… this box does this for you... and this is the area where you do X, Y, Z’. It demystifies how scary and difficult it will be. So, I think, yeah, that practical 1–2–1 support, but also just having a general, like, okay, these are the things that it’s able to do on a broad umbrella, but then also examples.”

Many participants are conscious of the need for a more expansive understanding of AI as a socio-technical phenomenon, referring not just to technologies but also to the complex social interrelations with which they are imbued. Some organisational leaders also expressed the need for training on the technical foundations of AI and data, to enable critical thinking about how to respond to tools.

A discussion group participant, head of AI/digital at a national charity, said:

“Now suddenly, everybody is extremely excited about AI and wants to skip all these other steps. But the risk there is that some of the foundational understanding in what computing is, what data analysis is, without having that in place, there’s a lack of interrogation that might be happening in the output that you’re getting from a generative AI tool, or a lack of awareness of where you might be encountering disinformation.”

A discussion group participant, head of AI/digital at a large organisation, said:

“The training courses that are out there are either really basic or really technical, and I actually need something a bit in-between that’s going to take our context into consideration.”

AI training creators

Faced with stretched budgets and an expressed need for training on AI, freely accessible training resources are relevant to our research participants. However, free resources could be problematic for 3 reasons:

  • Discoverability: There is a lack of awareness around ‘training available that’s low cost because charities don’t have a lot of money’ (senior leadership, local organisation).
  • Tailoring: The free resources are not personalised and, therefore, not necessarily effective within an organisation’s specific context.
  • Motivation: The reason behind many training resources is the promotion of AI tools to potential users and, as such, they are not as balanced as they need to be.

Unpacking this first point, many organisations remain unsure as to where to go for training. The quality of the training available is also highly variable. In CAST’s 2024 AI survey, of those who received training on AI, only 6% felt that it was sufficient (CAST, 2024b).

A discussion group participant, director at a community building organisation, said:

“I might not just go to YouTube and look about it [learning about AI]. So, yeah, where’s that training going to come from and who are we going to trust and what are the stages of it?”

This concern shows a lack of confidence in auto-didactic (self-teaching) methods for learning new tools. It could be useful to explore the extent to which this response reflects an already overstretched workforce struggling to keep up with releases of new tools, or a feeling of disempowerment when it comes to grappling with AI technologies generally.

It also raises the question as to who and what defines a trusted training provider. Two opposing views emerge throughout the discussions. Some argue that the companies developing AI tools should also provide training.

A discussion group participant, director at a disability justice organisation, said:

“Do they [tech companies] have a representative that could come and do a session... how we can get improved access so that they’re giving direct support, whether that looks like having some kind of discount code or having those sessions where they can come and ask questions and support people to set it up.”

Others oppose the idea; they perceive technology companies as ‘the bad guys’ who may embed their companies’ profit motivations into training materials. Indeed, the UK lacks nationwide AI literacy initiatives, which means that there is a market gap for tech companies to provide free explainers and training resources as part of their content marketing strategies (Duarte, 2024). Research participants who oppose tech-company-led training place the responsibility on the non-profit sector to develop or procure resources for training, alongside providing spaces for sector-wide discussions to highlight shared challenges and case studies.

A discussion group participant, director at a local organisation, said:

“The charity sector should be... producing something with some credentials. But it’s certainly not our bag to do that. We rely on the bad guys to kind of be coming up with stuff like that, and it’d be good if they didn’t look at it as an income-generating opportunity.”

Questions remain around to whom the sector can turn for unbiased AI training resources. Interestingly, government agency support is not mentioned at all as a possibility. Whilst big tech firms may hold the technical knowledge, they may not be best placed to provide such training, and their business models and marketing strategies may not align with what non-profits would find most useful.

Leadership and frontline perspectives

Larger organisations point out a disconnect between leadership and frontline staff’s perspectives on generative AI. Leadership, who mainly decide on AI policies and strategies, are seen as ‘too cautious’ regarding such technology. In contrast, frontline or junior staff are pressed for resources and often use generative AI tools in response.

A discussion group participant, head of AI/digital at a large organisation, said:

“...wider awareness within our leadership teams, around risks of AI, are very strong. And I would say, at senior level, there is more trepidation than there is in some more, kind of, people on the ground using it who are definitely playing around with it, trying it, doing more things with it than at leadership level.”

Leadership’s relative caution around using generative AI may indicate that they place greater weight on ethical concerns or consider the potential risks of reputational damage and violation of legal frameworks (such as those around data privacy), the consequences of which would most likely fall on their shoulders.

In contrast, it is reported that more junior or frontline staff engage in more experimentation with generative AI tools. This unsanctioned or ad hoc generative AI use (also called ‘shadow use’) often frees up time from routine tasks, which can then be spent delivering in-person services. Shadow use of AI tools is not unique to the sector; it has been reported as a growing concern across the private and public sectors (Salesforce, 2023; GovUK, 2024c). Almost a quarter (24%) of survey respondents report using generative AI tools that are not formally approved by management.

However, as most organisations (73%) do not have formal AI policies or guidelines, defining generative AI tools that have been ‘formally approved’ is difficult. For smaller organisations, ‘formal approval’ may consist of mentioning using a tool in a conversation, rather than receiving formal sign-off as part of a traditional policy process that many larger organisations may follow. Moreover, many generative AI tools are embedded in already existing platforms non-profits use, making it difficult to distinguish or be aware of when a tool is being used.

A discussion group participant, senior leader at a large charity, said:

“We use Microsoft Edge, so everything is kind of embedded in there. How aware people are, AI is embedded in most of that, I would say, depends on people’s interest in AI.”

A discussion group participant, head of AI/digital at a large charity, said:

“There’s not complete insight into how people are using it day-to-day. In the offices, people may be using it in quite extensive ways. We’re not 100% sure.”

Despite organisations understanding the risks of using generative AI tools, cases of shadow AI use raise further questions about accountability. If harm were to occur from AI use within an organisation, who would be responsible? Organisations’ perspectives on accountability relating to generative AI use were not explicitly explored within the research. Yet there is a growing tension between the quick relief that generative AI tools offer, especially to frontline staff, and the often lacking consideration of ethical implications, monitoring and governance structures.

Considerations

Our exploration of non-profits and grassroots organisations’ generative AI readiness, including training needs and differences in staff perspectives, has led to the following recommendations for consideration.

Non-profit and grassroots leadership

  • Consider how to learn together as an organisation and bridge gaps in attitudes. The risk of generative AI causing divides within organisations based on views of new tools needs to be addressed collaboratively, and ideally with input from served communities.
  • An example of an opportunity for this is the Data and AI Civil Society Network.

Funders and/or supporting bodies

  • See what training can be approved or created for non-profit organisations which is not purely designed by private sector and technology companies and is based on a socio-technical perspective on AI.

Sector collaboration

  • Consider how organisations with more experience and resources could be enabled to help smaller or less confident organisations learn how to use and build generative AI solutions where appropriate, and to signpost useful resources.

Perspectives on the design and use of AI

Non-profits and grassroots organisations are often particularly well placed to observe and understand the impacts of social and technological change on the communities they serve and represent. Such organisations often have great expertise in some of the areas affected by generative AI, and AI in general, including labour and employment, mental health, education, migration and justice. As such, the perspectives of non-profits and grassroots organisations are critical to guiding decisions about how AI can be designed and used for the public good. Therefore, we examine the extent to which organisations can contribute to the public discourse on AI development.

Involvement in wider AI discourse

We explored whether and how non-profits and grassroots organisations engage in broader AI discourse and debates. Two key contexts help to frame how such organisations may see the current debates surrounding AI, alongside how they conceptualise their role in wider discourse.

  1. Media coverage and public perception

    The media play a pivotal role in shaping narratives around AI, by framing what is discussed publicly. News articles often sensationalise AI with a focus on warnings to readers. They can also prioritise amplifying private companies’ assertions of AI’s value and potential (Roe and Perkins, 2023; Brennen et al., 2018). When there is a lack of critical data and AI literacy skills, such coverage can affect organisations’ perceptions of AI, and how they see the parameters and framings of the broader debate.

  2. Techno-determinism

    Techno-determinism is the belief that technology (including AI) is an inevitable driving force for progress that has far-reaching and unstoppable consequences for society (Salsone et al., 2019). This belief positions emerging technologies as a powerful force, functioning independently from social considerations. Techno-determinism can be seen as the root cause of why sensational stories on AI receive so much attention. Some organisations that took part in this research displayed strong techno-deterministic attitudes towards AI.

A discussion group participant said:

“Well, the horse has already bolted... and I don’t think we want to be in that position with AI, although we almost kind of are, at the minute.”

A discussion group participant said:

“...if we looked at only the fears around AI, then nobody would use it and it would be put back in a box, but that box, I don’t think, can be closed.”

Engagement and non-engagement

More than a third (37%) of organisations stated that they are actively involved in wider discussions on AI. For these organisations, the survey revealed a variety of activities that they are engaged in, including internal organisational discussions on AI, and attending workshops and events. However, when exploring this engagement in more detail, very few organisations could articulate the nature of their input into these wider debates, and the impact they were having.

A senior leader at an environmental charity said:

“I’m on an international working group with... other NGOs to try and figure out how we also might steer the direction of AI for good, particularly from an environmental perspective, because that’s not a big enough conversation happening right now.”

Nearly 3 in 5 survey respondents (59%) stated that they are not engaged in wider debates about AI. The top 3 reasons for not being involved are not being aware of any opportunities to get involved (63%), not being asked to get involved (53%), and resource limitations (40%). While resource limitations are echoed amongst other findings, the fact that the sector is not being invited into existing spaces perhaps alludes to power dynamics between social and environmental organisations and the tech industry and/or tech non-profit sector.

Power dynamics

It is interesting to unpick why most organisations surveyed are not actively involved in wider AI discourse, despite many using generative AI tools. Discussion groups highlighted the power imbalances at play, with the non-profit sector portrayed as striving for resources and lacking the capacity to engage in these debates.

A director of a grassroots organisation said:

“I don’t think the third sector will be cutting into the big players’ movements. Those [AI tools by big players] are going to be the models which are used the most. I think that there may be a small subset of systems which are created or developed with the use of people from lots of different communities, but predominantly, they’re going to be created by a small group of people with a lot of mined data.”

A director of a large organisation said:

“How do we, as non-profits, then get involved with key players to make those changes and be the people that shape AI? And I just don’t think we have enough capacity in our sector to be able to do that, at the minute.”

The sector’s struggles and sense of disempowerment can be compared with the power and influence of private technology companies. Many organisations discussed the role of private technology companies, emphasising their expectations for such entities to act responsibly.

A director of a grassroots organisation said:

“...it’s very easy to say AI doesn’t represent us, but if we’re not informing the machines, if we’re not empowering our own communities and the diverse sectors by informing the AI when it’s not correct and when it is good,... then we’re leaving it to the big companies who are mainly middle-aged white men.”

A senior leader of a large organisation said:

“You’re going to have people who are tech bods who are pushing this stuff forward. They just want to make money, and probably, that interest in the stuff that they can do, rather than how it affects our ability for social justice.”

However, there was little discussion on the mechanisms that the non-profit sector could wield to hold the tech industry to account, perhaps alluding to a gap in understanding regarding the sector’s collective power or available levers for change.

Perception of the sector’s role

Despite the positioning of the sector as being disempowered, some organisations clearly understood why non-profits should be involved in AI debates and the value their involvement could bring in representing their beneficiaries’ interests.

A senior leader of an LGBTQI+ rights organisation said:

“In terms of where charities can come in is we can represent the lived experiences and lived voices of our communities and I believe that needs to be taken into consideration when building this type of work... We inherently come from a place of principles. And we inherently represent different communities and different groups.”

Despite many organisations wanting the sector to be involved in AI discourse, some do not see themselves as a part of that engagement, instead designating other organisations to take on the challenge. For some, this looks like larger organisations playing a more proactive role in shaping conversations and representing the sector’s values.

A director of a grassroots organisation said:

“And the challenge is scaling up the good guys to be able to engage in this forum, arena. Ensuring that the technology gets power and doesn’t hold power... [lists larger national non-profits], I trust that they are steering that conversation in arenas that we don’t have access to.”

There is limited consensus on the sector’s role in engaging in wider AI discourse. Outside of engaging in internal discussions and workshops, as highlighted in the survey, there was little mention of co-ordinated efforts to challenge power or steer discussions. This comes as little surprise, given the wider constraints and resource challenges many organisations are facing.

Cross-sectoral collaborations

Although many organisations talked about the sector as a unified entity, a few highlighted the need to keep in mind the diverse and complex organisational contexts when talking about how AI can be used for public good. The goal is not to unite around a singular vision but to embrace diverse perspectives.

A senior leader of a national charity said:

“So, because one of the things I think about is, what a vision for good looks like for, say, a … smaller grassroots organisation is going to be different than [larger organisations]. We don’t necessarily need to come down on a singular vision, but it is about asking those questions, you know.”

Both organisations that are engaged and those that are not agree on the need for more collaboration to support the sector in playing a more prominent role in shaping the direction of AI. Coming together with organisations from different mission areas is seen as beneficial, especially between organisations with tech-focused missions and those working on social or environmental goals, to build cross-sectoral strategies and connections.

A senior leader of an organisation working on immigrant and refugee rights said:

“I went to a round table discussion in autumn last year with a mix of groups, health worker advocacy groups, anti-arms groups, experts in digital rights, tech, so on. And that was one of the rare occasions that all these different people who were looking at different aspects of, AI surveillance, digitisation, were all in one room sharing those experiences, and that was incredibly useful.”

A senior leader of a national charity said:

“What may be [needed is] a systems map or what are the ways that I can intervene within the system?… Here’s the landscape and here’s points of intervention that civil society organisations might be able to have in that.”

Facilitating cross-sectoral connections can be seen as a way to increase the sector’s engagement in wider discourse. In practice, such connection opportunities can be vital in ensuring that the sector’s diverse perspectives are represented in these broader debates.

Through understanding the impact of AI technologies on society, and on their beneficiaries more specifically, the sector can become a critical, active stakeholder in addressing the socio-technicalities of AI systems, rather than a passive receiver of technologies dominated by the private sector’s interests. It also responds to the public’s demand and desire for inclusion in decision-making about AI, especially on topics with high stakes in their lives, such as public services and psychological and financial support (Ada Lovelace Institute, 2023b).

Considerations

Our exploration of non-profits and grassroots organisations’ perspectives on the design and use of AI, including their engagement with wider AI discourse, has led to the following recommendations for consideration.

Non-profit and grassroots leadership

  • Evaluate whether there are ways of ensuring that generative AI tools and solutions neither distract from other solutions and innovation (technical or otherwise), nor go unadvocated for simply because of a lack of confidence. Consider the impact on longer-term service delivery and staff development of using generative AI to replace human interactions.
  • Consider how to build the confidence to advocate for and specify how AI should be used for served communities, while acknowledging that for many organisations this may take time that could be more valuably spent elsewhere.

Funders and/or supporting bodies

  • Consider creating a project to build power and capability by mapping the system (including resources, points of intervention and sources of ongoing information) and uniting various organisations engaging with others on social and environmental issues and AI.
  • Consider building or funding spaces and forums for non-profits, especially the smallest and currently most excluded, to discuss topics relating to AI use. These convening spaces could set agendas and discussion topics outside of the influence of Big Tech and commercial motives.

Sector collaboration

  • Consider the feasibility of building a coalition to raise the voices of grassroots and non-profits representing important parts of civil society in the development of AI. Given the benefit of such input into AI design and decision-making, this should be something externally funded, to enable organisations to be properly resourced.
  • Consider whether such a coalition could also help organisations support their users in managing any risks related to AI impact or use within their mission areas.
  • Increase support and provision for building critical thinking skills and socio-technical AI literacy.

6. Key themes for wider discussion

Throughout all phases of the research, 2 key themes emerge, signalling underlying issues and insights to uncover. These themes begin to paint a picture of what may be important consequences of the sudden release of generative AI tools on the grassroots and non-profit sectors within the wider context of economic pressures. They need to be addressed both to provide and protect a critical layer of support for people and society, and to ensure that the sector’s voice is truly represented in how generative AI and AI in general are used and governed for public good.

Generative AI provides short-term gains over long-term solutions

Generative AI is providing many benefits to the majority of organisations involved in this research. Some of the micro-organisations were particularly excited, as these tools enable them to do things that they otherwise would not be able to. Some larger organisations are keen to use generative AI to tackle challenges related to resourcing and delivery. However, reflective discussion and some analysis of the trajectory of generative AI use indicate that:

  • Generative AI use is replacing investment in people and skills which might be necessary for the ongoing success of the organisation.
  • There is little consideration for how embedding AI tools could cause lock-ins and dependencies and expose organisations to likely price rises in the future.
  • Many tools and applications are still relatively untested in many environments and may mean more human time is spent checking or correcting errors, editing outputs, shelving failed trials, communicating with stakeholders about outputs and checking for compliance than is currently anticipated.
  • The effort required to keep up with new releases and upgraded models during a time of fast iterations of new tools may also take more organisational time than expected and potentially waste time in the long run.
  • As tools become more widely used, early gains may be counteracted by organisations still needing to compete and keep up – as generative AI tools do not lead to extra funding being available.
  • The use of generative AI tools distracts from finding other, more systemic solutions to the root causes of socio-economic or environmental challenges that many organisations are aiming to resolve.

This final point has particular implications for how longer-term, potentially more systemic interventions to ensure a healthy sector are missed if generative AI is seen as a lifeline or silver bullet (Haven and Boyd, 2020). This can be seen through the lens of the wider trend towards techno-solutionism: the mindset that a technical fix can and should be found for every problem, regardless of whether it is appropriate and effective, whilst ignoring any additional problems caused by the often short-term or inadequate technical fix.

The impact of techno-solutionist narratives

There is a developing narrative that non-profit and grassroots organisations can or must be supported by generative AI to ‘keep up’. This is evident from the tone of participant discussions, and through the amount of marketing communications and media messaging the sector is currently receiving. Such marketing is laced with hyperbolic claims about how AI can revolutionise areas such as operations and fundraising. This is symptomatic of a wider excitement or ‘techno-optimism’ about AI technologies’ capacity to solve problems, which can also restrict other forms of innovation (UNDP, 2024).

Positioning generative AI as a solution to the kinds of economic and operational pressures identified in section 2, without addressing the underlying causes of such challenges (for example those related to lack of funding and increasing demands for service provision) may distract from the pursuit of other, potentially more transformational solutions. Other solutions which are being overshadowed or hidden by the ‘hyping’ of generative AI might relate to interventions such as advising on better public welfare services, programmes to address poverty and economic disparities, increased funding for services such as public mental health programmes, building infrastructure to better support the sector or developing workforce capabilities and skills.

Director, community building organisation

“...we see this [AI] as it’s not going to fix the problems because the only way to fix the problems is to pour large amounts of money into the charity sector. But since that’s absolutely not going to happen it’s the second-best thing we can do.”

Another response to this frustration comes from an organisation that actively rejects the idea that AI is a ‘solution for all’.

Senior leadership, grassroots organisation

“Organisations in the third sector... don’t have enough money to pay the people they need to do the work. I think that’s the issue. And the answer isn’t, ‘Well, we don’t have enough money. Let’s get the AI to do it’. I think the answer is that these organisations should be funded in a more equitable way.”

Role of funders in influencing how generative AI is adopted

Many organisations mentioned the role of funders, what they fund and what they expect from funding applications, and how these play a role in how organisations are adopting generative AI.

Senior leadership, local community organisation

“Funders are asking so much of us in applications that we almost can’t go back because everybody would have to stop using [generative AI], to go back to how it was before. So, we are in a bit of a cycle now of needing to use it to get the amount of work out that we were doing previously, to be competitive.”

This indicates a concern that responding to funders’ requirements when seeking funding is leading to a ‘race to the bottom’, in which organisations increasingly compete with each other by using minimal labour and reducing standards. There is also a concern that funders now expect organisations to be using AI.

Senior manager, national charity

“Will funders start asking, ‘What are you doing with AI?’ just because they just want to tick that box.”

These concerns contribute to a picture of a disproportionate focus on the immediate adoption of new tools, without enough consideration from funders, supporting organisations and others of the longer-term implications of this focus.

Generative AI is increasing disparities between large and small organisations

The research findings indicate that there are already differences in the adoption of AI between different non-profit and grassroots organisations, and that size is one of the factors that influences usage. Smaller organisations are less likely to have policies or to have considered the risks of using generative AI. Although those who have found that generative AI lets them cover the ground they need to are currently feeling empowered, the lack of governance or a strategic approach leaves them more exposed to risk.

The National Council for Voluntary Organisations (NCVO) analysis of voluntary organisations, broken down into 6 categories by size, shows that gaps have already grown over the last few years – with micro and small organisations (up to £100,000 income per year) declining in number each year (Tabassum, 2023). This is significant as 80% of voluntary sector organisations in the UK in 2021/22 were small or micro-organisations, representing the greatest proportion of organisations supporting specific communities, but down from 88% in 2000/01 (Tabassum, 2023).

From the research findings, smaller organisations’ use of generative AI tools is largely led by key individuals within the organisations who have existing exposure to the technologies and existing levels of digital literacy. This personal exposure tends to come either from their educational background or from other roles held within larger organisations. This means that small and micro-organisations’ adoption of and experimentation with generative AI is driven by digital privilege. This could mean that organisations led and staffed by people without such privilege are less likely to be able to use AI strategically, or even to consider how generative AI might be useful at all.

Larger organisations have been investing in more technology infrastructure (Ferrell-Schweppenstedde, 2023), can invest more in training, and are likely to be better able to pay platform costs if tools that are currently free or low-cost start charging more. Such price rises are seen as necessary due to the high costs of provision (Bignell, 2023) and therefore inevitable for continued access to these tools (Barr, 2024).

Concerns about how these disparities increase during times of crisis were expressed by a director of a community building organisation, who could only see this gap widening. They referenced how, during the pandemic, larger organisations used online strategies that led their income to "massively shoot up", and continued:

“... because they have very large, extremely capable digital teams, they had the finances and resources to invest in their digital capabilities and they started doing really clever things.”

7. Conclusion: Shaping the potential impact of generative AI

This research finds that the majority of non-profit and grassroots organisations with social missions which we surveyed are rapidly adopting and experimenting with generative AI, with high hopes for its benefits. Some are already finding significant gains in productivity. However, these successes are:

  • Not evenly distributed. Smaller organisations, particularly those which do not have existing digital privilege, or which have identified that their use of generative AI would not be in line with their or their communities’ values, are either less likely to be using it or are doing so without the assurance and governance of larger organisations.
  • Gained at a potential cost to organisational cohesion, beneficiary trust, internal values and, in some cases, investment in and development of solutions which might be more appropriate, affordable or transformational in the longer term.

In addition, non-profits and grassroots organisations largely feel unable or not invited to input into decisions related to the design or use of generative AI, and wider AI discourse in general. This is despite being uniquely well-placed to speak on behalf of their communities and beneficiaries at a time of significant change around the future of AI in the UK, currently driven by commercial interests and innovation, rather than social or environmental considerations.

We therefore consider that non-profits, particularly those at the grassroots, need to be better supported to ensure that the drive to use generative AI does not result in undermining their long-term capacity to deliver what are increasingly essential services. This support is required in several areas:

  • The development of and/or signposting to independent training and socio-technical AI literacy resources, particularly concerning areas such as environmental impact, which are not well known.
  • The creation of independent spaces for discussion and inter-organisational learning between non-profits and grassroots organisations with both tech and non-tech missions.
  • Facilitating more pathways and mechanisms for feeding into how and what decisions are made about the design and use of AI in the UK.
  • Further support for existing initiatives addressing this challenge such as Community Campaigns on Data and the Digital Good Network.

Without increased support for the non-profit and grassroots sector, the combination of techno-solutionist narratives around the use of generative AI and increased economic pressure looks likely to be detrimental to the sector’s capacity to serve the needs of the most vulnerable in society on an ongoing basis.

Method

Phase 1: Survey

The We and AI research team conducted an online survey between February and April 2024 to generate initial insights into the sector’s use of generative AI and engagement with wider AI debates. The survey was hosted using Google Forms.

Sample

This research focused on non-profit and grassroots organisations that are working on issues around social and environmental missions. To capture the diversity of organisations in this space, we specifically recruited organisations to ensure there was representation across different geographic regions in the UK, different organisation mission areas and different sizes of organisations in terms of funding and staff. We only invited individuals with decision-making power in their organisations. Organisations were identified through our existing networks, as well as cold-contacted via email outreach. Organisations did not need to use generative AI tools in order to participate. Organisations that exclusively focused on digital technologies or were considered to be technology experts were not invited to participate as the research team felt that these organisations may be more involved in generative AI and societal debates on these technologies compared to the wider sector.

We identified a total of 144 eligible organisations. Of those 144, 115 were contacted according to our goals for an equitable sample size. Nineteen of these organisations were identified as ‘grassroots’ in the outreach process.

Out of 76 organisations invited to participate in the survey, 51 completed it. Organisations were offered £50 reimbursement for their time, either through donation to the organisation or payment into an organisational account. There was a spread of organisations in terms of income.

The majority of organisations that completed the survey had between 10 and 49 people at the organisation, including employees and volunteers, whilst only 2% had more than 500 individuals.

Data analysis

Before analysis, all data was anonymised. We analysed the data in April 2024 using descriptive statistics and summarised the findings from each question. For a full breakdown of survey questions, scales and responses, see the appendix.

The survey allowed us to gather a broad range of perspectives and include organisations that were unable to join the discussion groups or interviews. The findings also went on to shape the topic guides for the later stages of the research.

Phase 2: Discussion group

To explore and understand the sector’s engagement with generative AI, and to dig into the results of the survey, we then hosted 3 discussion groups. These were also intended to foster broader dialogue between organisations in the sector. Based on the survey findings and a broader literature review, we created a topic guide, which consisted of a list of open-ended questions to generate discussion during the groups. Two researchers facilitated each discussion group.

Sample

Participants who completed the survey were also invited to attend a discussion group. Discussion groups were held online via Zoom for an hour and a half. Organisations were offered £90 reimbursement for their time. A total of 16 participants were split into 3 discussion groups. All groups were conducted in March 2024.

Discussion group topic guide

Opening questions
  • What words do you associate with AI?
  • What words do you associate with generative AI?
  • What are some generative AI tools that you may have heard about or use personally?
Section 1: Generative AI use or non-use
  • If you or anyone in your organisation uses generative AI, can you briefly give an example of how it is used?
  • If you or your organisation do not use it, can you explain whether your organisation has considered using it?
  • What led your organisation to use generative AI, can you walk us through the decision-making process?
  • Could you explain if this was an active decision to not use generative AI?
Section 2: Impact of generative AI
  • What issues are your beneficiaries facing where generative AI is having – or could have – an impact?
  • To what extent do you think generative AI can support your organisation to achieve its mission?
Section 3: Broader debate
  • From your organisation’s point of view, what do you see as the impacts of generative AI on wider society?
  • As an organisation, are you involved in any of the wider debates about these impacts?
  • Does your organisation have any concerns around generative AI? If yes, can you explain these?

Data analysis

All discussion groups were recorded and then transcribed. The data was analysed thematically using Taguette (www.taguette.org). Two researchers independently coded 2 transcripts and, over multiple discussions, agreed on a code book. The remaining transcripts were then divided and coded independently; additional codes were discussed and added when needed.

Phase 3: Interview

To complete our qualitative analysis, we also conducted interviews. Interviews allowed us to explore concepts that did not surface during discussion groups and provided an opportunity to answer questions and clarify themes that had emerged during the research. A topic guide was developed based on findings from the survey and discussion groups, which consisted of semi-structured open-ended questions.

Sample

A total of 5 participants were invited to one-to-one interviews. Participants needed to have completed the survey and were intentionally recruited from organisations where we had missed particular representation in the discussion groups, for example, organisations that did not use generative AI tools. Among the 5 organisations interviewed:

  • One organisation did not use generative AI at all and was planning to draft a policy for non-use.
  • One organisation had no formal organisational policy for generative AI use but used it on a limited basis, and only for initial idea generation.
  • One organisation was using generative AI and helping other organisations use it as well.
  • Three organisations were working UK-wide, while 2 others were local organisations.

Interviews were conducted by one researcher and were held online via Zoom for 45 minutes. Organisations were offered £67.50 reimbursement for their time.

Interview topic guide

Section 1: Trust
  • Can you describe the level of trust you have in these tools?
  • What affects this trust?
  • What would make your organisation feel more confident about AI/generative AI?
  • How do you think that using generative AI will affect the trust of your beneficiaries?
Section 2: Disclosure
  • Can you describe your organisation’s approach to disclosing the use of generative AI?
Section 3: Public good
  • In general, do you think AI and generative AI can be used to improve public good?
  • How can the third sector collectively shape AI for the greater good?
  • What other actors can also play a role?
  • What are your thoughts on AI regulation?

Data analysis

All interviews were recorded and then transcribed. The data was analysed using Taguette, and 2 researchers coded the transcripts based on the code book previously developed. Additional codes were discussed and added when needed.

Narrative analysis

We brought together quantitative findings from the survey, as well as key themes identified in qualitative data through the discussion groups and interviews to form this report.

Appendix

Notes

  1. A non-profit organisation is one that is created and operated for charitable or socially beneficial purposes rather than to make a profit.
  2. Grassroots organisations are a subset of non-profit organisations. There is no agreed definition of grassroots organisation, and many grassroots organisations (including those in this report) might describe themselves slightly differently. For the purposes of this report, we distinguish grassroots organisations from other non-profits by the fact they need not have any legal structure and that their members are drawn from the areas they aim to serve. Originating from within the communities they serve, they often rely on local volunteers, empowering them to contribute directly to causes that affect their community.
  3. In the context of this report ‘the sector’ refers specifically to the non-profit and grassroots organisations with social and/or environmental missions who were engaged with in this research. Although some findings are likely to be relevant to the rest of the third sector, which includes all types of non-governmental and non-profit-making organisations, associations, charities, co-operatives, voluntary and community groups, we only speak for what we define as being non-profit organisations and/or grassroots organisations engaged in working for a fair and equal society. When we say ‘organisations’ this is also the group we refer to.

References

Ada Lovelace Institute (2023a) Post-summit civil society communique

Ada Lovelace Institute (2023b) What do the public think about AI?

Aibana, K. Kimmel, J. and Welch, S. (2017) Consuming differently, consuming sustainably: Behavioural insights for policymaking

Ali, A.E. Venkatraj, K. P. Morosoli, S. Naudts, L. Helberger, N. Cesar, P. (2024) Transparent AI disclosure obligations: Who, what, when, where, why, how

Amar, Z. and Ramset, N. (2023) The charity digital skills report

Barr, A. (2024) The generative AI future will not be free

BBC (2022) Nearly five million fewer donating, says charity sector

Benett, F. (2024) Unregulated AI could cause the next Horizon scandal

Big Brother Watch (2023) Big Brother Watch’s response to the Government’s consultation on the ‘A pro-innovation approach to AI regulation’ White Paper

Bignell, F. (2023) Generative AI must break out of freemium based model to capitalise on supply chain opportunity

Blackbaud (2023) JustGiving to revolutionize fundraising with the integration of generative AI

Brennen, J. S. Howard, P. N. Nielsen, R. K. (2018) An industry-led debate: How UK media cover artificial intelligence

Browne, J. Drage, E. and McInerney, K. (2024) Tech workers’ perspectives on ethical issues in AI development: Foregrounding feminist approaches

CAST (2024a) CAST’s AI survey: What we’ve learned, what we’re doing — and how you can get involved

CAST (2024b) CAST’s AI survey

Centre for Data, Ethics and Innovation (CDEI) (2024) Public attitudes to data and AI: Tracker survey (Wave 3)

Chapman, C.M. Hornsey, M.J. Gillespie, N. (2021) To what extent is trust a prerequisite for charitable giving? A systematic review and meta-analysis

Charities Aid Foundation (CAF) (2022a) 4.9m fewer people donating to charity

Charities Aid Foundation (CAF) (2022b) Charity landscape 2022

Charities Aid Foundation (CAF) (2022c) UK giving report 2022

Charity Digital Skills (2023) Report 2023

Clay, T. Davis, L. Neild, A. Noble, J. Thomas, W. (2024) State of the sector 2024: Ready for a reset

Corrêa, N.K. Galvão, C. Santos, J. W. Del Pino, C. Pinto, E. P. Barbosa, C. Massmann, D. Mambrini, R. Galvão, L. Terem, E. de Oliviera, N. (2023) Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance: Patterns

Coughlan, S. (2020) Why did the A-Level algorithm say no?

Department for Culture, Media and Sport (DCMS) (2023) Community life survey 2021/22: Volunteering and charitable giving

Dignum, V. (2019) Responsible artificial intelligence: How to develop and use AI in a responsible way

Duarte, T. (2024) Leaving AI explainers up to tech companies: What could go wrong?

Duarte, T. and Kherroubi Garcia, I. (2024) We must act on AI literacy to protect public power

Earwaker, R. and Johnson-Hunter, M. (2023) Unable to escape persistent hardship: JRF's cost of living tracker, summer 2023

European Parliament (2024) Artificial Intelligence Act: MEPs adopt landmark law

Ferrell-Schweppenstedde, D. (2023) Key challenges and opportunities facing the charity sector

Francis-Devine, B. and Buchanan, I. (2023) Skills and labour shortages

GovUK (2019) Regulation for the fourth industrial revolution

GovUK (2022) National AI strategy

GovUK (2023) A pro-innovation approach to AI regulation

GovUK (2024a) Business and tech heavyweights to boost productivity through AI

GovUK (2024b) Generative AI framework for HMG

GovUK (2024c) Guidance to civil servants on use of generative AI

Harris, B. Fairey, L. Leach, C. Zalaquett, A.W. and Vizard, T. (2023) Public awareness, opinions and expectations about artificial intelligence: July to October 2023

Haven, J. and Boyd, D. (2020) Philanthropy’s techno-solutionism problem

Holmström, J. (2022) From AI to digital transformation: The AI readiness framework

Hsu, T. and Thompson, S. (2023) Disinformation researchers raise alarms about AI chatbots

Hu, K. (2023) ChatGPT sets record for fastest-growing user base - analyst note

Ibison, Y. (2023) Artificial intelligence for public good

Kenley, A. and Larkham, J. (2023) Shifting out of reverse: An analysis of the VCSE Sector Barometer, in partnership with NTU VCSE Observatory

KPMG (2023) Productivity boost from generative AI could add £31 billion of GDP to the UK economy

Kremer, A. Luget, A. Mikkelsen, D. Soller, H. Strandell-Jansson, M. and Zingg, S. (2023) As gen AI advances, regulators – and risk functions – rush to keep pace

Lakhani, K. (2023) AI won’t replace humans — but humans with AI will replace humans without AI

Lalak, A. and Harrison-Byrne, T. (2019) Trust in charities, and why it matters

Larkham, L. and Mansoor, M. (2023) Running hot, burning out: An analysis of the VCSE Sector Barometer, in partnership with Nottingham Trent University National VCSE Data and Insights Observatory

Lynch, S. (2023) 2023 State of AI in 14 charts

Maleki, N. Padmanabhan, B. and Dutta, K. (2024) AI hallucinations: A misnomer worth clarifying

McCarthy, J. Minsky, M.L. Rochester, N. and Shannon, C.E. (1955) A proposal for the Dartmouth summer research project on artificial intelligence

McKay, C. (2023) 2023 wrapped: The year in AI ethics, safety, policy and laws

McKinsey (2023) The economic potential of generative AI: The next productivity frontier

Mittal, N. Perricos, C. Sterrett, L. and Dutt, D. (2023) The generative AI dossier

Mucci, T. and Stryker, C. (2023) What is AI governance?

Munn, L. (2023) The uselessness of AI ethics

Newton (2023) The 2023 Newton charity investment survey

Nielsen, J. (2023) AI improves employee productivity by 66%

NCVO (2021) New survey shows 81% of charities changed how they use digital technology during the pandemic

No Tech For Tyrants (2022) Surveillance tech perpetuates police abuse of power

O'Brien, M. (2023) Chatbots sometimes make things up. Not everyone thinks AI's hallucination problem is fixable

Office for National Statistics (ONS) (2024) Employment in the UK: May 2024

OpenAI (2024) Introducing GPT-4o and more tools to ChatGPT free users

Roe, J. and Perkins, M. (2023) ‘What they’re not telling you about ChatGPT’: Exploring the discourse of AI in UK news media headlines

Russell, C. (2022) Scaling up skills: Developing education and training to help small businesses and the economy

Salesforce (2023) New AI usage data shows who’s using AI — and uncovers a population of ‘super-users’

Salsone, B. Stein, S.S. Parsons, K.G. Kent, T. Neilsen, K. and Nitz, D.T. (2019) Technological determinisms

Skeet, A. and Guszcza, J. (2020) How businesses can create an ethical culture in the age of tech

Tabassum, N. (2023) UK civil society almanac 2023

Third Sector (2024) What can generative AI do for the third sector?

The Open Letter Signatories (2023) The open letter

Trussell Trust (2024) End of year stats

UNESCO (2021) Recommendations on the ethics of artificial intelligence

UNDP (2024) Will techno-optimism make us complacent?

Young, R. and Goodall, C. (2021) Rebalancing the relationship: Final report

Acknowledgements

Research, investigation, analysis and initial draft: Gulsen Guler, Elizabeth (Lizzie) Remfry, Ismael Kherroubi Garcia, Nicholas Barrow and Tania Duarte from We and AI. Yasmin Ibison and Abby Jitendra from JRF. Jaki King from If Everyone Cares.

Writing – review and editing: JRF Insight and Policy, and Communication and Public Engagement teams.

We would like to thank all the organisations and volunteers who took part in this research project and trusted us to share their thoughts and realities.

Illustration by Emily Rand

About the authors

We and AI is a UK non-profit volunteer-led organisation working to increase the diversity of people able to contribute to decision-making about AI, through facilitating critical thinking and public AI literacy. It works with charities, educators, communicators and research institutes, and develops community workshops and accessible content about AI. It runs the global project Better Images of AI.

How to cite this report

If you are using this document in your own writing, our preferred citation is:

Guler, G., Remfry, E., Kherroubi Garcia, I., Barrow, N., Duarte, T. and Ibison, Y. (2024) Grassroots and non-profit perspectives on generative AI. York: Joseph Rowntree Foundation.

This report is part of the AI for public good topic.
