Report
AI for public good

Grassroots and non-profit perspectives on generative AI

Why and how do non-profit and grassroots organisations engage with generative AI tools, and the broader AI debate? 

Many organisations are excited by the opportunity to use generative AI tools, which they frame as free cost-cutting devices easily accessible in a highly uncertain financial environment. Some are using generative AI with enthusiasm, seeing these tools as able to lighten their workload and help deliver their services.

A local befriending organisation said:

“As you can imagine, there’s about 160 people we’ve got to call every week, and we’ve got a team of about 20 volunteers. So, we just use AI to generate prompts for chat, so that they’re not asking every single week, ‘Did you watch Coronation Street last night?’”

An interviewee said:

“We see AI as a massive opportunity, as our economic predictions have been bang on so far sadly and that we will not see any improvement in funding in the foreseeable future [...] AI is possibly the biggest opportunity we’ve seen in decades.”

Additionally, these tools are often seen as something novel to explore rather than being adopted with a clear rationale: ‘it’s there, rather than by design’ (discussion group participant). Just under half (48%) of survey respondents are using these tools to experiment. While smaller organisations tend to use free and widely available tools, a couple of large organisations are using bespoke AI tools, either developed in-house or procured specifically for their research and data analysis capabilities.

Generative AI use is often driven by changes to default settings in existing software, as generative AI features become embedded in the platforms organisations already use. Embedding AI tools within existing platforms is not a new trend; the approach aims to enhance users’ experience of a platform or tool, with promises of saving time and sparking inspiration. This quality of tools being ‘there by default’ may spark people’s curiosity to test or experiment with them. Yet it also raises concerns about agency and informed decision-making.

Over time, it may become difficult for organisations to make critical decisions or opt out, as changing platforms can be time and resource intensive, a difficulty made worse by the fear of losing existing work due to a lack of interoperability between different software tools.

Generative AI and accessibility

Accessibility was a recurring theme in discussion groups; many organisations consider generative AI tools as helping to ‘level the playing field’ by removing barriers in administration and service delivery. The accessibility benefits of AI are outlined in various ways.

Organisational staff with accessibility needs are using generative AI tools to support them to work more effectively, in similar ways to a personal assistant.

A discussion group participant, director of a community organisation with no income, said:

“We [the organisation] are very new, so I [as the founder] use AI every day. It’s my personal assistant. I also have disabilities, so I use AI to help me in regards to being more independent with running the organisation. So, for me, the benefit of using it is that I’m actually able to deliver for my members, I know I wouldn’t be able to deliver as well if I didn’t have the support of AI.”

Others can make their services more accessible to their beneficiaries by using generative AI tools. This includes generating closed captions, creating written summaries in accessible language, using automated translation for non-English speakers, and generating alt-text for images posted on social media accounts. Some organisations have even extended their services using generative AI tools. For example, one organisation has made its AI chatbot available 24/7, meaning beneficiaries, many of whom work multiple jobs and cannot reach the organisation during working hours, can access information about its services at any time.

A discussion group participant, head of AI/digital for a large charity said:

“A lot of people who might be working two jobs or working one job aren’t free to get on the phone for hours and hang on hold or go to an office during working hours. So, if you have the ability to serve people digitally, you have the ability to serve people at the time that suits them, which may not be office hours.”

Yet across all the accessibility benefits that participants attribute to generative AI, wider economic constraints drive the decision to use it. Generative AI tools are seen as cost-saving solutions, providing services that would normally be financially out of reach for many organisations, such as translation, proofreading or night staff. Organisations make efficiencies by saving time in service delivery and by not recruiting additional staff. Nevertheless, we can question whether such cost efficiencies and accessibility benefits are overshadowing the ethical considerations organisations might weigh when deciding whether to use generative AI, particularly whether the use of these tools aligns with their organisational mission.

Considerations

Our exploration of how and why non-profits and grassroots organisations are using generative AI has led to the following recommendations for consideration.

Non-profit and grassroots leadership

  • Evaluate how financial or budget resourcing and decision-making might change in the context of a growing reliance on generative AI tools, if prices are raised or access to such tools is restricted.
  • Consider the impact on longer-term service delivery and staff development of using generative AI to replace human interactions.

Funders and/or supporting bodies

  • Consider to what extent efforts to support non-profits to use generative AI to increase productivity might be better spent on finding solutions to the challenges which are driving the need for greater efficiencies.
  • Consider what support or mitigations are needed for organisations who are not using generative AI or are not able to use it as effectively as others.

Sector collaboration

  • Work together to model how the longer-term value of generative AI can be calculated, rather than focusing solely on instant time or cost savings. For example, if generative AI is being used to speed up both bid writing and bid assessment, does this result in time savings if more bids need to be written and evaluated to compete for or award funding?

Generative AI governance

In order for generative AI to be used responsibly within any organisation, governance structures, in the form of ethical principles, frameworks, policies and guidelines, are needed to safeguard stakeholders and protect the organisation (Mucci and Stryker, 2023). In 2021 the UK signed up to UNESCO’s international framework to shape the development and use of AI technologies, based on a human rights approach to AI (UNESCO, 2021). 

The UK takes a ‘pro-innovation approach’ (GovUK, 2023) to AI policy, which means that, unlike the EU, it favours relying on existing regulators to police organisations’ use of AI rather than passing new laws (European Parliament, 2024). It is therefore largely left to individual organisations to determine how to use AI responsibly. Against this background, we explored how non-profit and grassroots organisations are approaching ethical challenges related to the use and governance of AI.

Generative AI policies

To understand how generative AI is being used in organisations, we looked at the ways it is managed at an organisational level. There is a clear divide within the sector between organisations that have generative AI policies and guidelines in place and those that do not.

Most organisations surveyed (73%) do not have AI policies or guidelines in place. Organisations with £1 million+ in income are more likely to have AI policies or guidelines already in place than organisations with smaller incomes.

Organisations with larger annual incomes can spend more time and effort on creating AI policies or dedicate additional resources to commission external support with this task. This in turn can lead to more strategic and defined organisational use of generative AI. 

A discussion group participant, head of AI/digital at a national charity with £1 million+ income, and generative AI policies in place, said:

“We have a statement on how we use it. We have some risk assessments on what use cases we might actually have.”

Larger organisations also have the resources to create specialist roles dedicated to AI and digitalisation. Such roles are essential in creating direct lines of leadership and accountability between strategy and implementation. As AI cuts across, and can be integrated into, many functions of an organisation, dedicating responsibility and resources to a specific AI role supports a wider understanding, integration and adoption of relevant tools across the organisation.

A discussion group participant, head of AI/digital at a national charity with £1 million+ income, and generative AI policies in place, said:

“I was at an event on AI in recruitment, where we were looking at how AI is being used in the systems that we’ve got for the front-end of our recruitment and how that ties into our EDI practice. So, we’re trying to kind of embed it a bit in all the discussions, rather than have it separately, because it is definitely the scary new thing that people are either really excited about or really terrified about.” 

Smaller organisations, however, tend not to have policies or guidelines in place, engaging with generative AI in experimental ways without much restriction. Smaller organisations also highlight how they lack the capacity, resources and knowledge needed to create relevant AI policies and guidelines. 

A discussion group participant, director at a local organisation with £500,001–£1 million income, and no generative AI policies, said:

“We are a grassroots organisation, we have no plan as to how we’re going to use this [generative AI] at the moment. It’s just lending itself quite nicely, and we’ve got absolutely no thoughts at the moment as to how we need to control that.”

A discussion group participant, senior leader of a specialised team at a national charity with £1 million+ income, and generative AI policies in place, said:

“I kind of need more capacity in my team. So, having someone who focuses on AI, and maybe that’s somebody just in general, in our organisations, about innovation. Having someone to look outwardly, to work with us, because it’s another thing that’s added to my team. And it [AI] is fine, it’s exciting but we have a tiny bit of time to be able to work with it, which feels not enough.” 

The divide between large and small organisations is clear and falls along economic lines. Larger organisations apply greater scrutiny to inform organisational AI use, whilst smaller organisations tend to use AI without any organisational frameworks, with personal judgement often serving as a guide.

Balancing values with perceived imperatives

Many non-profits struggle to navigate the ethical dilemmas that surface when exploring generative AI use.

While there is a broad consensus among participants that the sector as a whole is ‘values and ethics driven, and any decision making should be flowing from this understanding’ (discussion group participant), many organisations are still using generative AI, despite being aware of the ethical concerns surrounding these tools.

The survey reveals that organisations share multiple concerns about generative AI. Interestingly, organisations that are already using generative AI still highlight an array of concerns about such tools, from data privacy, security and accuracy to representation and bias in data and automated decision-making.

Rather than being unaware of the potential risks, it appears that, for some, the economic and efficiency advantages of using such tools may outweigh ethical concerns.

A discussion group participant in an organisation working with immigrant and refugee communities, using generative AI daily, with no generative AI policies, said:

“I feel like I still don’t have that trust yet with AI, maybe because I don’t know enough about it or we haven’t been told enough, just how secure our information is when using it. We’re quite apprehensive, but we still do want to implement it because it does help quite a lot with our admin and speeding everything up, you know, and helping as many people as we can because it cuts down a lot of time.”

Wider economic pressures appear to be so strong for some that generative AI is framed as a ‘necessity’ to support organisational survival. However, we can also question whether organisations are broadly concerned about generative AI in general, or whether they are worried that they lack the skills and resources to mitigate potential risks arising from these tools, meaning that their current usage feels uninformed or irresponsible.

A discussion group participant in a grassroots organisation, using generative AI daily, with generative AI policies in place, said:

“AI replaces at least 3 people... ethically and morally, I don’t feel great about it, but due to the lack of support and funding for charities, the choices I have are: there’s no organisation or to use what’s there.”

Some organisations are using generative AI despite recognising that such tools sit in direct opposition to their organisational mission. For some, this means limiting the use of such tools to certain tasks and avoiding others altogether.

An interviewee in an organisation working with immigrant and refugee communities, with no generative AI policies, said:

“We’re a very values-driven organisation. So whenever we’re designing something, whether that’s a piece of content or those translations it has to kind of encompass a lot of nuance... if I was to type ‘human trafficking’ into Canva, using the AI-generated image function that it has, I know it will come up with something that’s quite voyeuristic, quite explicit, and that’s not the kind of stuff that we use in our work. In fact, we’re very anti that and campaign against the use of that kind of imagery.”

Others are using generative AI tools frequently despite acknowledging wider systemic damage that contradicts their organisational purpose; such tension is clearly a difficult trade-off. An environmental charity voiced serious concerns about the environmental damage caused by generative AI tools, including increased water and energy usage and carbon emissions. Yet this organisation still uses generative AI in its work daily.

A discussion group participant in an environmental charity, using generative AI daily, with generative AI policies in place, said:

“We are really concerned about the significant increases in water and energy usage that’s associated with the data centres that host these AI tools that we use.”

Whilst just under a quarter (24%) of organisations surveyed highlighted environmental concerns, discussion groups suggested that this relatively low figure reflects many organisations not being aware of AI’s environmental footprint.

Five organisations surveyed stated that they are not using generative AI tools; the main reasons cited for this are that their service users would not like it and that they are unaware of the potential benefits.

Only one organisation, interviewed for this research, is actively choosing not to use any generative AI. They see an inherent contradiction between using such tools and their organisational values, expressing ethical concerns about how biases in the data used to train such tools may perpetuate existing biases in society.

An interviewee in a grassroots organisation working on community building and social justice, with generative AI policies in place, said:

“We have a sort of policy against the use of AI. Broadly speaking, we view AI as a potentially quite negative aspect of current cultural development, and so we plan to sort of policise [create a policy for] the lack of use of it.”

Overall, most organisations were aware of the many ethical issues and concerns relating to generative AI use. Despite these, most still engaged with such tools. This contradiction between organisational actions and values may reinforce the argument that, for many, the instant benefits these tools provide are too good to ignore, particularly when leading organisations in times of economic precarity.

The intention-action gap

The tension between the use of AI and an organisation’s missions, values or objectives is not too distinct from what has been dubbed the ‘intention-action gap’ in for-profit sectors (Skeet and Guszcza, 2020). This gap occurs when an individual’s values, attitudes or intentions do not match their actions (Aibana et al., 2017). The gap between organisations’ desire to act ethically and their understanding of how to follow through on their good intentions has been a popular point of discussion in various circles, from sustainability to innovation (Aibana et al., 2017).

In the AI field, this gap is striking. While AI ethics has been gathering interest from governments, industry (McKay, 2023), academia (Lynch, 2023) and the media (Corrêa et al., 2023), in an attempt to ensure that AI tools are deployed safely and responsibly, there seems to be uncertainty about how to follow through on these principles in practice (Munn, 2023). This may be because AI is a fast-moving and emerging technology whose full capabilities are not yet realised and whose societal harms are not fully understood, measured or mitigated. It is therefore interesting to explore the extent to which an intention-action gap regarding AI exists within non-profit and grassroots organisations.

Whilst there is an expressed sense of inevitability regarding AI developments, participants struggle to understand and reconcile the technical benefits of using generative AI tools with the moral dilemmas that also emerge.

A discussion group participant, head of AI/digital at a large charity with generative AI policies in place, said:

“I think there’s a lot of tensions... I do ethics assessments and algorithmic auditing, and I think what’s the real challenge is trying to weigh up different ethical concerns against each other. It’s actually really difficult because you could say, on the one hand, this might help more people. But if it helps more people but increases the outcome gap even slightly, is that okay? Like, how comfortable are we with that?”

A discussion group participant, senior manager at a charity with no generative AI policies, said:

“And this whole idea of arguing against something that’s coming and then it’s more a case of then having conversations on how best to use it, rather than trying to stop the inevitable... That’s a societal thing because you know it’s coming, but how do we best apply it? How do we best use it ethically and morally?”

Individuals found it difficult to have organisational conversations about AI ethics and guidelines when there are varying levels of digital and AI literacy present in their teams, or when team members are particularly fearful of new technologies.

A discussion group participant, senior leader at a charity with no generative AI policies, said:

“There’s a lot of debate going on at the moment, not meaningfully because, ultimately, people are sort of brainstorming. It doesn’t feel like a debate... that’s going to have a meaningful action at the end of it, other than just the general, trying to raise understanding of what the hell that [AI] means, and what does that mean for us as an organisation?”

There has also been some criticism that the terms ‘responsible AI’ and ‘trustworthy AI’ are being used as buzzwords. Such terms are ill-defined or undefined, with little consideration of how they are to be implemented and measured (a practice also known as ‘ethics washing’), yet their usage persists, allowing individuals and organisations to be perceived as values-driven (Browne et al., 2024). Whilst this framing and the subsequent critique are especially prominent in the technology sector (Browne et al., 2024), we see that they have also been carried over into non-profit and grassroots fields. Some organisations in this research voiced concern that organisational AI guidelines are perceived as simply tick-box exercises that do not lead to meaningful action.

A discussion group participant, senior leader at a charity with no generative AI policies, said:

“There’s the whole ethical piece around that, that comes in. We don’t want it as a tick-box, we want it as something meaningful... [AI ethics] is being used as a buzzword, ultimately. And I think we’re going to hear it again and again over this year, just because of the political cycle.”

Even if organisations are concerned about the potential dangers of generative AI tools and want to act ethically, a lack of clarity around what this would mean in practice is a challenge. Using generative AI in line with organisational values or mission becomes even harder when the majority of organisations in this research do not have generative AI guidelines or policies in place. For some organisations, truly following their organisational values might mean stopping the use of generative AI tools altogether. However, this may prove challenging given the financial incentives to use such tools and the ease with which they are embedded across common digital platforms.

Important questions remain as to how non-profit and grassroots organisations can develop realistic approaches to engaging with AI ethics, given the lack of resources or internal expertise. Yet it is paramount that the sector critically engages with such questions, developing strategies that explore the extent to which these tools can be used in line with their organisational values or mission statements.

Generative AI and trust

The question of trust in the context of AI is contentious. What trust means to different AI stakeholders, and how this meaning is applied in different contexts, will vary significantly. For AI innovators, increased sales may be an indicator of people’s trust in their products or services. For users of AI tools, trusting AI may be deeply entwined with wider narratives on AI, data, and technology.

Trust is also a question of accuracy: how accurate or robust the outputs from AI tools are. Generative AI tools are often described as being prone to ‘hallucinate’, a term describing when a generative AI tool produces outputs that are false or inaccurate (Maleki et al., 2024).

Yet the term ‘hallucinate’ is somewhat misleading as it implies that generative AI understands language or the prompts it is given, but sometimes is inaccurate; whereas any results from generative AI tools which do not reflect a source of external objective truth simply reflect the tool’s function to predict ‘the likelihood of different strings of word forms’ based on the texts on which the system has been trained (O’Brien, 2023). This prediction is a feature of such tools which cannot be eliminated; as such, users cannot trust that generative AI tools will always produce accurate outputs.
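As a simplified sketch of how such tools work (an illustration of the general principle, not a description of any specific product), a generative language model estimates how likely each possible next word is given the words so far, and produces text by repeating that prediction. The probability assigned to a whole sequence of words can be written as

P(w_1, \dots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_1, \dots, w_{i-1})

Nothing in this calculation refers to whether a statement is true; it only reflects how plausible the word sequence is given the training data.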

However, non-profit organisations need to be trusted by their service users, beneficiaries, donors, funders and the wider public (Lalak and Harrison-Byrne, 2019). Therefore, the use of generative AI by the sector must not undermine its reputation. In what follows, we delve into 3 trust-related themes present throughout the research:

  • trust in generative AI outputs
  • trust in how data is processed by generative AI tools
  • disclosing the use of AI tools.

Trust in generative AI outputs

Of the survey participants who are using generative AI, 70% said they ‘somewhat’ trust the outputs of generative AI tools, while 23% said they rarely trust them; none completely trust the outputs.

When exploring this further, we found that trust was heavily dependent on the context in which the tools were used. This reflects research findings from other organisations. The Public Attitudes to Data and AI survey of over 4,000 adults in the UK revealed that how trustworthy AI is considered to be depends on the organisational context within which such tools are deployed (DSIT, 2023).

For some common tasks, such as generating a funding bid, or drafting social media posts, many organisations are critical of the outputs produced and review the material. This is largely due to concerns around biases, accuracy or misinformation, as well as ensuring their organisational ‘ethos’ is prominent within the generated text.

A discussion group participant, senior leader of an environmental charity with generative AI policies in place, said:

“Misinformation is a big concern for us, in terms of the credibility of what we put out with generative AI tools.”

A discussion group participant, director of a local organisation working on health equity with no generative AI policies, said:

“When you’re trying to apply for funding, you want the human, emotional aspect to try to come through a bit more in the applications... generative AI is going to be using, perhaps, fancier language which may not or may sway the person looking at your application.”

If an individual is confident in their knowledge and understanding of the context within which generative AI is being used, they can judge the accuracy and quality of the output, before deciding if they trust the tool.

An interviewee, senior leader at a national charity with generative AI policies in place, said:

“if I was to use AI as a partner in say quantum physics. This is a topic which I know very little about. I would not feel confident that I could trust the output that I could get out of the AI there because I would have more difficulty verifying it and I wouldn’t be able to... interpret what was coming back... I can only trust it where I’m supervising it to do tasks where I’m pretty confident what good looks like.”

Furthermore, fears of losing the trust of others due to not being seen as ‘authentic’ guide some organisations’ generative AI use.

A discussion group participant, director at a local community organisation with no generative AI policies, said:

“I would not actually want to use it for a website either, based on the level of trust that we actually want persons to have in us when they have a look at what it is that we are offering. We don’t necessarily want a situation where people look and all they can see is something that looks completely fake.”

When organisations spend time reviewing and amending the input and output of generative AI tools, additional time and expertise are needed to do so. This process is often referred to as ‘human-in-the-loop’. However, as this phrase seems to ascribe responsibility to the machine, it can perhaps better be thought of as what American computer scientist Ben Shneiderman refers to as ‘humans in the group; computers in the loop’.

Trust in how data is processed by generative AI tools

There was a lack of trust in generative AI tools deployed within sensitive contexts, such as working with survivors of domestic abuse or appealing an immigration decision. The level of scepticism for use in such cases is partially connected to issues of consent around personal information and privacy, especially when external parties are involved.

A discussion group participant, head of AI/digital at a national charity with generative AI policies in place, said:

“People are really, really worried about disclosing data; where that data is being shared, what kind of services outside of [the organisation] that that information might end up with. I think it’s a common theme with AI.”

A discussion group participant from a local organisation with no generative AI policies said:

“I always say, ‘Anonymise it. Don’t use any names when you’re using the AI to help you write something. And then go back, as a person, put in the names and the specific identifiers later’.”

There is also an assumed relationship between trust and charitable giving (Chapman et al., 2021). Some non-profit organisations may seek to be trustworthy by not using AI tools to protect their reputation and credibility, and ultimately attract more donations or funding. Again, we see the influence of financial drivers on AI use. However, decisions on the safe and appropriate contexts for generative AI use often depend on the knowledge, skills and understanding of whoever is using it, particularly within organisations with no formal policies or guidelines in place.

An interviewee, a senior leader at a local community organisation with no generative AI policies, said:

“So things like funding bids we don’t use AI... I mean, it would be very helpful, it would save a lot of time, but at the moment I don’t think AI yet can reflect [the authenticity of the organisation]. So, I think the danger of fully embracing AI is we would potentially lose some people out of fear or out of misrepresentation. I think that’s the biggest concern from a trustee point of view in our organisation.”

Whilst many organisations do not blindly trust generative AI tools, and draw some boundaries around use based on trust, it is unclear how accurate and appropriate such decisions are. For organisations without organisational AI policies, these choices are dependent on an individual’s level of digital privilege or expertise, personal judgements or relative interest in AI.

Disclosing the use of AI tools

When examining whether organisations disclose when they use generative AI tools, there is a mixed picture. Organisational policies relating to disclosing generative AI use (making clear when content or output has been artificially generated, manipulated or used for decision-making) were key discussion points, with diverse views and understandings on when and how to disclose generative AI use (Ali et al., 2024). Given that formal organisational policies on generative AI use are largely lacking, this also opens a grey area with regard to disclosure.

Of the organisations surveyed, 15% disclose their use of generative AI tools. Many see disclosure as a key part of their organisational values, building trust through transparency.

An interviewee at a local organisation with no generative AI policies said:

“We value authenticity so much that it [not disclosing] would damage our relationship with them [our beneficiaries] if they felt we were not true to our ethics. So I think there is a very important part of authenticity when thinking about using AI and being strategic and how we use it.”

Another 38% feel that disclosure is not relevant to their organisation’s use of AI tools. Here, individuals often reported that they do not disclose use when the AI tool is already embedded into existing software, or if the tool is used for internally facing activity.

A discussion group participant from a community building organisation, with no generative AI policies, said:

“Do people disclose that they’ve used a spell-checker in a document? Do they disclose that they’ve had an alarm to remind them when there’s a call?... how many tools do we use during our day, that we never even think twice about needing to disclose to somebody that we’re using that? So, I think it is a good question and I think it depends on how people are using it.”

For some organisations, the decision on whether to disclose use also depends on the perceived risk of the context in which the tools are used. For perceived lower-risk tasks, such as generating ideas, many deem disclosure unnecessary.

An interviewee from an organisation working with immigrant and refugee communities, with no generative AI policies, said:

“I don’t think there’s anything on our platform that has been, had such an influence from AI, that we would need to disclose it… it would kind of be used to inspire something but even we would then go and change most of it, so the AI influence on it would be, maybe 1%. So no, we don’t.”

Some organisations actively reject the need to disclose AI use altogether, including when producing written content. Many saw disclosure in these cases as undermining their efforts in overseeing and editing the content.

A discussion group participant from a local grassroots organisation with no generative AI policies said:

“No, I don’t disclose it because I work at it pretty hard. You know, it’s not a copy-and-paste job. It’s like having an assistant who’s doing a portion of the thinking. I’m still pretty much doing the thinking, I’m leading with the questions... I feel like it’s just helping me to sort out my thinking. Like, I’m working pretty hard, so I still think that’s just my work.”

Such divergent views on disclosure tend to be based on differences in organisational beneficiaries. If organisations believe that their beneficiaries are vulnerable and/or may be sceptical about technology, they tend not to disclose their use. Many focus on what AI enables them to do, rather than the fact that they use AI to do it.

A discussion group participant from a local organisation with no generative AI policies said:

“I have fears about [disclosure], mainly because... we currently have so many beneficiaries and members of the community that are really fearful of technological change. Many of our beneficiaries are older and potentially vulnerable and... I worry that they would see something like that and be frightened about the way that our charity is operating.”

Other organisations, which work on public campaigning, influencing or policy and may engage with a broader range of audiences, are more concerned about not disclosing their generative AI use, fearing that non-disclosure would discredit them. The output such organisations create using generative AI tends to reach larger audiences, making disclosure even more critical to their being seen as trusted organisations.

An interviewee from a campaigning organisation with generative AI policies said:

“We probably never want to be opaque about our use of AI, particularly in image generation... I could see how it would be tempting in some contexts... for example, we want to present ourselves as being a more diverse organisation than we actually are and we’re going to generate AI people [for organisational flyer]... I feel like that would be skating us right over the edge of what would be acceptable to beneficiaries and supporters.”

An interviewee from a community building organisation with generative AI policies said:

“Public trust in charities is absolutely fundamental to what we do... we can behave in a way that does not damage that trust. So when you’re using things like AI imagery, [other organisation] have already run into a little bit of a trouble where they used [AI-generated images]. Everyone looked at it and assumed it was real and then got a bit hissy when they realised that it wasn’t real.”

There is a strong relationship between disclosure and trust, and while some organisations are aware that disclosure connects to their values, this is not the case for all. Some may not have considered the connections between disclosure, trust, transparency and authenticity; others may simply see non-disclosure as the right thing to do. It is important to clarify, however, that these organisations do not see non-disclosure as an act of deception. Rather, non-disclosure is presented as an act of care, to avoid panicking beneficiaries who may be fearful of technologies.

It is not the purpose of this report to evaluate organisations and identify ‘correct’ practices, and we acknowledge the diverse circumstances that non-profit and grassroots organisations operate within. Many often face tough decisions in using generative AI, and disclosure in other sectors is not common practice. Yet the question remains of what the future of disclosure might look like for trusted organisations, as AI becomes more embedded in existing tools.

Considerations

Our exploration of non-profits and grassroots organisations’ governance structures regarding using generative AI, including policies, disclosure and trust, has led to the following recommendations for consideration.

Non-profit and grassroots leadership

  • Consider having someone with responsibility for AI governance across all elements of administration and service delivery.
  • Prioritise identifying with communities how the use of generative AI tools might affect trust and service delivery, particularly in relation to transparency.

Funders and/or supporting bodies

  • Consider if there is a service which could be provided to support smaller organisations with relevant generative AI policies without adding too much operational burden.
  • Build on existing resources for the sector specifically as it relates to generative AI, to ensure that even smaller organisations are aware of the potential risks and benefits and have the guidance they need.

Sector collaboration

  • Continue to discuss how to navigate the trade-offs between values and imperatives in the ways generative AI is used, not only as purely internal decisions, but also as ones on which the sector as a whole can take positions. Consider developing frameworks which do not preclude supporting organisations with different priorities and missions, such as organisations who do not wish to use generative AI, and those who have found genuinely transformational uses for tools.
  • Consider what can be learned (particularly from environmental justice organisations) to develop approaches across the sector for knowing how to respond to the environmental impacts of AI.

Generative AI readiness

AI readiness has been described as an organisation’s ability to use AI in ways that add value to the organisation (Holstrom, 2022). It can include areas such as digital and data infrastructure, skills, organisational culture and mindset. Previous research has surveyed charities’ use of AI and the capacity of organisations to adapt to and benefit from it (Amar and Ramset, 2023). We wanted to dig into perceptions of what is needed to adapt specifically to generative AI, and how organisations are managing this challenge. Specifically, we explore:

  • training on generative AI
  • leadership and frontline perspectives.

Training on generative AI

The pandemic catalysed changes in how non-profit and grassroots organisations used digital technology to deliver their services: 82% of organisations said they had to invest in new technology to adapt to the pandemic, driving demand for digital skills among staff and volunteers (NCVO, 2021). Generative AI is now also being used to help solve problems related to the increasing economic pressures that organisations are facing. A similar trend can therefore be predicted, in which organisations look to upskill their workforce to use generative AI more effectively. This also reflects the narrative that there is a race to use AI tools to secure one’s job: that ‘AI won’t replace humans, but humans with AI will replace humans without AI’ (Lakhani, 2023).

Previous research has also highlighted the lack of AI training; in CAST’s AI survey, 51% of respondents had not received any training or support around AI (CAST, 2024b). Through our engagement with charities, we uncovered 2 axes of discussion when it comes to training:

  • AI training content
  • AI training creators.

AI training content

The majority (69%) of organisations using generative AI tools have not received formal training. Despite this, most organisations expressed a need for such training. When asked about the content of such training, 2 perceived requirements emerged. Some organisations want operational training on how to use generative AI tools:

A discussion group participant, director at a disability justice organisation, said:

“I want a person talking me through the first steps of, ‘Okay, you sign up, and then these are the… this box does this for you... and this is the area where you do X, Y, Z’. It demystifies how scary and difficult it will be. So, I think, yeah, that practical 1–2–1 support, but also just having a general, like, okay, these are the things that it’s able to do on a broad umbrella, but then also examples.”

Many participants are conscious of the need for a more expansive understanding of AI as a socio-technical phenomenon, referring not just to technologies but also to the complex social interrelations with which they are imbued. Some organisational leaders also expressed the need for training on the technical foundations of AI and data, to enable critical thinking about how to respond to such tools.

A discussion group participant, head of AI/digital at a national charity, said:

“Now suddenly, everybody is extremely excited about AI and wants to skip all these other steps. But the risk there is that some of the foundational understanding in what computing is, what data analysis is, without having that in place, there’s a lack of interrogation that might be happening in the output that you’re getting from a generative AI tool, or a lack of awareness of where you might be encountering disinformation.”

A discussion group participant, head of AI/digital at a large organisation, said:

“The training courses that are out there are either really basic or really technical, and I actually need something a bit in-between that’s going to take our context into consideration.”

AI training creators

Faced with stretched budgets and an expressed need for training on AI, freely accessible training resources are relevant to our research participants. However, free resources could be problematic for 3 reasons:

  • Discoverability: there is a lack of awareness around ‘training available that’s low cost because charities don’t have a lot of money’ (senior leadership, local organisation).
  • Tailoring: the free resources are not personalised and, therefore, not necessarily effective within an organisation’s specific context.
  • Motivation: the reason behind many training resources is the promotion of AI tools to potential users and, as such, they are not as balanced as they need to be.

Unpacking the first point, many organisations remain unsure as to where to go for training. The quality of the training available is also highly variable. In CAST’s 2024 AI survey, of those who received training on AI, only 6% felt that it was sufficient (CAST, 2024b).

A discussion group participant, director at a community building organisation, said:

“I might not just go to YouTube and look about it [learning about AI]. So, yeah, where’s that training going to come from and who are we going to trust and what are the stages of it?”

This concern shows a lack of confidence in auto-didactic (self-teaching) methods for learning new tools. It could be useful to explore the extent to which this response reflects an already overstretched workforce struggling to keep up with releases of new tools, or a feeling of disempowerment when it comes to grappling with AI technologies generally.

It also raises the question as to who and what defines a trusted training provider. Two opposing views emerge throughout the discussions. Some argue that the companies developing AI tools should also provide training.

A discussion group participant, director at a disability justice organisation, said:

“Do they [tech companies] have a representative that could come and do a session... how we can get improved access so that they’re giving direct support, whether that looks like having some kind of discount code or having those sessions where they can come and ask questions and support people to set it up.”

Others oppose the idea; they perceive technology companies as ‘the bad guys’ who may embed their companies’ profit motivations into training materials. Indeed, the UK lacks nationwide AI literacy initiatives, which means that there is a market gap for tech companies to provide free explainers and training resources as part of their content marketing strategies (Duarte, 2024). Research participants who oppose tech-company-led training place the responsibility on the non-profit sector to develop or procure resources for training, alongside providing spaces for sector-wide discussions to highlight shared challenges and case studies.

A discussion group participant, director at a local organisation, said:

“The charity sector should be... producing something with some credentials. But it’s certainly not our bag to do that. We rely on the bad guys to kind of be coming up with stuff like that, and it’d be good if they didn’t look at it as an income-generating opportunity.”

Questions remain as to whom the sector can turn for unbiased AI training resources. Interestingly, government agency support is not mentioned at all as a possibility. Whilst big tech firms may hold the technical knowledge, they may not be best placed to provide training, and their business models or marketing strategies may not align with what non-profits would find most useful.

Leadership and frontline perspectives

Larger organisations point out a disconnect between leadership and frontline staff’s perspectives on generative AI. Leadership, who mainly decide on AI policies and strategies, are described as ‘too cautious’ regarding such technology. In contrast, frontline or junior staff are pressed for resources and often turn to generative AI tools in response.

A discussion group participant, head of AI/digital at a large organisation, said:

“...wider awareness within our leadership teams, around risks of AI, are very strong. And I would say, at senior level, there is more trepidation than there is in some more, kind of, people on the ground using it who are definitely playing around with it, trying it, doing more things with it than at leadership level.”

Leadership’s relative caution around using generative AI may indicate that they place greater weight on ethical concerns or consider the potential risks of reputational damage and violation of legal frameworks (such as those around data privacy), the consequences of which would most likely fall on their shoulders.

In contrast, it is reported that more junior or frontline staff engage in more experimentation with generative AI tools. This unsanctioned or ad hoc generative AI use (also called ‘shadow use’) often aims to free up time from routine tasks so that it can be spent delivering in-person services. Shadow use of AI tools is not unique to the sector; it has been reported as a growing concern across the private and public sectors (Salesforce, 2023; GovUK, 2024c). Almost a quarter (24%) of survey respondents report using generative AI tools that are not formally approved by management.

However, as most organisations (73%) do not have formal AI policies or guidelines, defining generative AI tools that have been ‘formally approved’ is difficult. For smaller organisations, ‘formal approval’ may consist of mentioning using a tool in a conversation, rather than receiving formal sign-off as part of a traditional policy process that many larger organisations may follow. Moreover, many generative AI tools are embedded in already existing platforms non-profits use, making it difficult to distinguish or be aware of when a tool is being used.

A discussion group participant, senior leader at a large charity, said:

“We use Microsoft Edge, so everything is kind of embedded in there. How aware people are, AI is embedded in most of that, I would say, depends on people’s interest in AI.”

A discussion group participant, head of AI/digital at a large charity, said:

“There’s not complete insight into how people are using it day-to-day. In the offices, people may be using it in quite extensive ways. We’re not 100% sure.”

Despite organisations understanding the risks of using generative AI tools, cases of shadow AI use raise further questions about accountability. If harm were to occur from AI use within an organisation, who would be responsible? Organisations’ perspectives on accountability relating to generative AI use were not explicitly explored within the research. Yet there is a growing tension between the quick relief found in using generative AI tools to ease burdens, especially by frontline staff, and the often absent consideration of ethical implications, monitoring and governance structures.

Considerations

Our exploration of non-profits and grassroots organisations’ generative AI readiness, including training needs and differences in staff perspectives, has led to the following recommendations for consideration.

Non-profit and grassroots leadership

  • Consider how to learn together as an organisation and bridge gaps in attitudes. The risk of generative AI causing divides within organisations based on views of new tools needs to be addressed collaboratively, and ideally with input from served communities.
  • An example of an opportunity for this is the Data and AI Civil Society Network.

Funders and/or supporting bodies

  • See what training can be approved or created for non-profit organisations which is not purely designed by the private sector and technology companies, and which is based on a socio-technical perspective on AI.

Sector collaboration

  • Consider how organisations with more experience and resources could be supported to help smaller or less confident organisations learn how to use and build generative AI solutions where appropriate, and to signpost useful resources.

Perspectives on the design and use of AI

Non-profits and grassroots organisations are often particularly well placed to observe and understand the impacts of social and technological change on the communities they serve and represent. Such organisations often have great expertise in some of the areas affected by generative AI, and AI in general, including labour and employment, mental health, education, migration and justice. As such, the perspectives of non-profits and grassroots organisations are critical to guiding decisions about how AI can be designed and used for the public good. Therefore, we examine the extent to which organisations can contribute to the public discourse on AI development.

Involvement in wider AI discourse

We explored whether and how non-profits and grassroots organisations engage in broader AI discourse and debates. Two key contexts help to frame how such organisations may see the current debates surrounding AI, alongside how they conceptualise their role in wider discourse.

  1. Media coverage and public perception

    The media play a pivotal role in shaping narratives around AI, by framing what is discussed publicly. News articles often sensationalise AI with a focus on warnings to readers. They can also prioritise amplifying private companies’ assertions of AI’s value and potential (Roe and Perkins, 2023; Brennen et al., 2018). When there is a lack of critical data and AI literacy skills, such coverage can affect organisations’ perceptions of AI, and how they see the parameters and framings of the broader debate.

  2. Techno-determinism

    Techno-determinism is the belief that technology (including AI) is an inevitable driving force for progress that has far-reaching and unstoppable consequences for society (Salsone et al., 2019). This belief positions emerging technologies as a powerful force, functioning independently from social considerations. Techno-determinism can be seen as the root cause of why sensational stories on AI receive so much attention. Some organisations that took part in this research displayed strong techno-deterministic attitudes towards AI.

A discussion group participant said:

“Well, the horse has already bolted... and I don’t think we want to be in that position with AI, although we almost kind of are, at the minute.”

A discussion group participant said:

“...if we looked at only the fears around AI, then nobody would use it and it would be put back in a box, but that box, I don’t think, can be closed.”

Engagement and non-engagement

More than a third (37%) of organisations stated that they are actively involved in wider discussions on AI. For these organisations, the survey revealed a variety of activities that they are engaged in, including internal organisational discussions on AI, and attending workshops and events. However, when exploring this engagement in more detail, very few organisations could articulate the nature of their input into these wider debates, and the impact they were having.

A senior leader at an environmental charity said:

“I’m on an international working group with... other NGOs to try and figure out how we also might steer the direction of AI for good, particularly from an environmental perspective, because that’s not a big enough conversation happening right now.”

Over half of all survey respondents (59%) stated that they are not engaged in wider debates about AI. The top 3 reasons for not being involved are that they are not aware of any opportunities to get involved (63%), that they are not being asked to get involved (53%), and resource limitations (40%). While resource limitations echo other findings, the fact that the sector is not being invited into existing spaces perhaps points to power dynamics between social and environmental organisations and the tech industry and/or tech non-profit sector.

Power dynamics

It is interesting to unpick why most organisations surveyed are not actively involved in wider AI discourse, despite many using generative AI tools. Discussion groups highlighted the power imbalances at play, with the non-profit sector portrayed as striving for resources and lacking the capacity to engage in these debates.

A director of a grassroots organisation said:

“I don’t think the third sector will be cutting into the big players’ movements. Those [AI tools by big players] are going to be the models which are used the most. I think that there may be a small subset of systems which are created or developed with the use of people from lots of different communities, but predominantly, they’re going to be created by a small group of people with a lot of mined data.”

A director of a large organisation said:

“How do we, as non-profits, then get involved with key players to make those changes and be the people that shape AI? And I just don’t think we have enough capacity in our sector to be able to do that, at the minute.”

The sector’s struggles and sense of disempowerment can be compared with the power and influence of private technology companies. Many organisations discussed the role of private technology companies, emphasising their expectations for such entities to act responsibly.

A director of a grassroots organisation said:

“...it’s very easy to say AI doesn’t represent us, but if we’re not informing the machines, if we’re not empowering our own communities and the diverse sectors by informing the AI when it’s not correct and when it is good,... then we’re leaving it to the big companies who are mainly middle-aged white men.”

A senior leader of a large organisation said:

“You’re going to have people who are tech bods who are pushing this stuff forward. They just want to make money, and probably, that interest in the stuff that they can do, rather than how it affects our ability for social justice.”

However, there was little discussion on the mechanisms that the non-profit sector could wield to hold the tech industry to account, perhaps alluding to a gap in understanding regarding the sector’s collective power or available levers for change.

Perception of the sector’s role

Despite the positioning of the sector as being disempowered, some organisations clearly understood why non-profits should be involved in AI debates and the value their involvement could bring in representing their beneficiaries’ interests.

A senior leader of an LGBTQI+ rights organisation said:

“In terms of where charities can come in is we can represent the lived experiences and lived voices of our communities and I believe that needs to be taken into consideration when building this type of work... We inherently come from a place of principles. And we inherently represent different communities and different groups.”

Despite many organisations wanting the sector to be involved in AI discourse, some do not see themselves as a part of that engagement, instead designating other organisations to take on the challenge. For some, this looks like larger organisations playing a more proactive role in shaping conversations and representing the sector’s values.

A director of a grassroots organisation said:

“And the challenge is scaling up the good guys to be able to engage in this forum, arena. Ensuring that the technology gets power and doesn’t hold power... [lists larger national non-profits], I trust that they are steering that conversation in arenas that we don’t have access to.”

There is limited consensus on the sector’s role in wider AI discourse. Beyond the internal discussions and workshops highlighted in the survey, there was little mention of co-ordinated efforts to challenge power or steer discussions. This comes as little surprise, given the wider constraints and resource challenges many organisations are facing.

Cross-sectoral collaborations

Although many organisations talked about the sector as a unified entity, a few highlighted the need to keep in mind its diverse and complex organisational contexts when discussing how AI can be used for public good. The goal is not to unite around a singular vision but to embrace diverse perspectives.

A senior leader of a national charity said:

“So, because one of the things I think about is, what a vision for good looks like for, say, a … smaller grassroots organisation is going to be different than [larger organisations]. We don’t necessarily need to come down on a singular vision, but it is about asking those questions, you know.”

Organisations that are engaged and those that are not both agree on the need for more collaboration to support the sector to play a more prominent role in shaping the direction of AI. Coming together across mission areas is seen as beneficial, especially between organisations with tech-focused missions and those working towards social or environmental goals, to build cross-sectoral strategies and connections.

A senior leader of an organisation working on immigrant and refugee rights said:

“I went to a round table discussion in autumn last year with a mix of groups, health worker advocacy groups, anti-arms groups, experts in digital rights, tech, so on. And that was one of the rare occasions that all these different people who were looking at different aspects of, AI surveillance, digitisation, were all in one room sharing those experiences, and that was incredibly useful.”

A senior leader of a national charity said:

“What may be [needed is] a systems map or what are the ways that I can intervene within the system?… Here’s the landscape and here’s points of intervention that civil society organisations might be able to have in that.”

Facilitating cross-sectoral connections can be seen as a way to increase the sector’s engagement in wider discourse. In practice, such opportunities can be vital to ensuring that the sector’s diverse perspectives are represented in these broader debates.

By understanding the impact of AI technologies on society, and on their beneficiaries more specifically, the sector can become a critical, active stakeholder in addressing the socio-technical dimensions of AI systems, rather than a passive recipient of technologies shaped by the private sector’s interests. It also responds to the public’s demand for inclusion in decision-making about AI, especially on topics with high stakes for their lives, such as public services and psychological and financial support (Ada Lovelace Institute, 2023b).

Considerations

Our exploration of non-profits and grassroots organisations’ perspectives on the design and use of AI, including their engagement with wider AI discourse, has led to the following recommendations for consideration.

Non-profit and grassroots leadership

  • Evaluate whether there are ways of ensuring that generative AI tools and solutions neither distract from other solutions and innovation (technical or otherwise), nor are overlooked due to a lack of confidence in advocating for them. Consider the impact on longer-term service delivery and staff development of using generative AI to replace human interactions.
  • Consider how to build the confidence to advocate for and specify how AI should be used for the communities they serve, while acknowledging that for many organisations this may take time more valuably spent elsewhere.

Funders and/or supporting bodies

  • Consider creating a project to build power and capability by mapping the system (including resources, points of intervention and sources of ongoing information) and by bringing together the various organisations engaging on social and environmental issues and AI.
  • Consider building or funding spaces and forums for non-profits, especially the smallest and currently most excluded, to discuss topics relating to AI use. These convening spaces could set agendas and discussion topics outside of the influence of Big Tech and commercial motives.

Sector collaboration

  • Consider the feasibility of building a coalition to raise the voices of grassroots and non-profit organisations, which represent important parts of civil society, in the development of AI. Given the benefit of such input into AI design and decision-making, this should be externally funded so that organisations can be properly resourced.
  • Consider whether such a coalition could also help organisations support their users in managing risks related to AI use or impact where these relate to their mission area.
  • Increase support and provision for building critical thinking skills and socio-technical AI literacy.

Phase 1: Survey

Sample

Out of 76 organisations invited to participate in the survey, 51 completed it. Organisations were offered £50 reimbursement for their time, either through a donation to the organisation or payment into an organisational account. There was a spread of organisations in terms of income.

The majority of organisations that completed the survey had between 10 and 49 people, including employees and volunteers, whilst only 2% had more than 500.

Data analysis

Before analysis, all data was anonymised. We analysed the data in April 2024 using descriptive statistics and summarised the findings from each question. For a full breakdown of survey questions, scales and responses, see the appendix.
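For readers who want a sense of what this kind of descriptive summary looks like in practice, the sketch below tallies response counts and percentages per question. It assumes the anonymised responses sit in a hypothetical CSV file with one column per question; the file name and column names are illustrative only, not the actual study data.

```python
# Minimal sketch of a descriptive summary of survey responses.
# "survey_responses.csv" and its columns are assumed/illustrative.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

# Summarise each question as counts and percentages of respondents.
for question in responses.columns:
    counts = responses[question].value_counts(dropna=False)
    percentages = (counts / len(responses) * 100).round(1)
    summary = pd.DataFrame({"count": counts, "percent": percentages})
    print(f"\n{question}")
    print(summary)
```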

The survey allowed us to gather a broad range of perspectives and include organisations that were unable to join the discussion groups or interviews. The findings also went on to shape the topic guides for the later stages of the research.

Phase 2: Discussion group

To explore and understand the sector’s engagement with generative AI, and to dig into the results of the survey, we then hosted 3 discussion groups. These were also intended to foster broader dialogue between organisations in the sector. Based on the survey findings and a broader literature review, we created a topic guide consisting of a list of open-ended questions to generate discussion during the groups. Two researchers facilitated each discussion group.

Sample

Participants who completed the survey were also invited to attend a discussion group. Discussion groups were held online via Zoom for an hour and a half. Organisations were offered £90 reimbursement for their time. A total of 16 participants were split into 3 discussion groups. All groups were conducted in March 2024.

Discussion group topic guide

Opening questions
  • What words do you associate with AI?
  • What words do you associate with generative AI?
  • What are some generative AI tools that you may have heard about or use personally?
Section 1: Generative AI use or non-use
  • If you or anyone in your organisation uses generative AI, can you briefly give an example of how it is used?
  • If you or your organisation do not use it, can you explain whether your organisation has considered using it?
  • What led your organisation to use generative AI, can you walk us through the decision-making process?
  • Could you explain if this was an active decision to not use generative AI?
Section 2: Impact of generative AI
  • What issues are your beneficiaries facing where generative AI is having – or could have – an impact?
  • To what extent do you think generative AI can support your organisation to achieve its mission?
Section 3: Broader debate
  • From your organisation’s point of view, what do you see as the impacts of generative AI on wider society?
  • As an organisation, are you involved in any of the wider debates about these impacts?
  • Does your organisation have any concerns around generative AI? If yes, can you explain these?

Data analysis

All discussion groups were recorded and then transcribed. The data was analysed thematically using Taguette (www.taguette.org). Two researchers independently coded 2 transcripts and, over multiple discussions, agreed on a code book. The remaining transcripts were then divided and coded independently; additional codes were discussed and added when needed.
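As an illustration of how coded qualitative data of this kind can be summarised once coding is complete, the sketch below counts how often each code appears per transcript and overall. It assumes the coded highlights have been exported from Taguette to a CSV with "document" and "tag" columns; the file name and column names are assumptions, not the tool’s guaranteed export format.

```python
# Illustrative sketch: tallying code frequencies from exported highlights.
# "highlights.csv" and its "document"/"tag" columns are assumptions.
import pandas as pd

highlights = pd.read_csv("highlights.csv")

# Count coded excerpts per tag, broken down by transcript and overall.
per_transcript = highlights.groupby(["document", "tag"]).size().unstack(fill_value=0)
overall = highlights["tag"].value_counts()

print("Codes per transcript:")
print(per_transcript)
print("\nOverall code frequency:")
print(overall)
```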

Phase 3: Interview

To complete our qualitative analysis, we also conducted interviews. Interviews allowed us to explore concepts that did not surface during discussion groups and provided an opportunity to answer questions and clarify themes that had emerged during the research. A topic guide was developed based on findings from the survey and discussion groups, which consisted of semi-structured open-ended questions.

Sample

A total of 5 participants were invited to one-to-one interviews. Participants needed to have completed the survey and were intentionally recruited from organisations whose perspectives were under-represented in the discussion groups, for example organisations that did not use generative AI tools. Among the 5 organisations interviewed:

  • One organisation did not use generative AI at all and was planning to draft a policy for non-use.
  • One organisation had no formal organisational policy for generative AI use but used it on a limited basis, and only for initial idea generation.
  • One organisation was using generative AI and helping other organisations use it as well.
  • 3 organisations were working UK-wide, while 2 others were local organisations.

Interviews were conducted by one researcher and were held online via Zoom for 45 minutes. Organisations were offered £67.50 reimbursement for their time.

Interview topic guide

Section 1: Trust
  • Can you describe the level of trust you have in these tools?
  • What affects this trust?
  • What would make your organisation feel more confident about AI/generative AI?
  • How do you think that using generative AI will affect the trust of your beneficiaries?
Section 2: Disclosure
  • Can you describe your organisation’s approach to disclosing the use of generative AI?
Section 3: Public good
  • In general, do you think AI and generative AI can be used to improve public good?
  • How can the third sector collectively shape AI for the greater good?
  • What other actors can also play a role?
  • What are your thoughts on AI regulation?

Data analysis

All interviews were recorded and then transcribed. The data was analysed using Taguette, and 2 researchers coded the transcripts based on the code book previously developed. Additional codes were discussed and added when needed.

Narrative analysis

We brought together quantitative findings from the survey, as well as key themes identified in qualitative data through the discussion groups and interviews to form this report.
