Most charities are already using AI in some form. The question isn’t whether to engage with it; it’s how to do it well, safely, and in a way that actually serves your organisation and the people it exists to help.
Our job isn’t to sell you AI – it’s to help you make good, informed decisions about using it.
AI is no longer a future consideration for the charity sector. According to the Charity Digital Skills Report 2025, 76% of charities are now using AI tools in some aspect of their work — up from 61% the year before. That’s a significant shift in a single year.
But using AI and using it well are different things. Most charities are still at the experimenting stage — testing tools informally, finding what sticks, figuring it out as they go. That’s not a criticism. It’s where most organisations are. The problem is that experimenting without a clear sense of the risks, the governance requirements, or the strategic fit means charities are building on uncertain ground.
And the stakes in the charity sector are higher than in most. Your data includes information about vulnerable people. Your beneficiaries may be in crisis. Your reputation for trustworthiness is an income driver. Getting AI wrong isn’t just a technology problem — it’s a mission problem.
The questions most charities are actually asking
The sector data tells a consistent story. Charities aren’t primarily asking “which AI tool should we use?” They’re asking more fundamental things.
Is AI relevant to us, and if so where? Adoption is now the norm across the sector — but relevance looks very different depending on your size, your mission, your data maturity, and your team’s capacity. What makes sense for a national health charity is not the same as what makes sense for a community CIC with two members of staff.
How do we use it responsibly? 60% of charities are worried about the implications of using AI — data privacy, service quality, bias, safeguarding. Those concerns are legitimate and, increasingly, they have regulatory weight. The Fundraising Regulator’s guidance accompanying the updated Code of Fundraising Practice — which came into force in November 2025 — requires charities to develop and agree an AI policy before using any AI tools to support fundraising, and recommends publishing that policy on their website to maintain trust and transparency with donors. This isn’t just good practice. For any charity that fundraises — which is most of them — it’s now a compliance consideration.
Trustees are responsible and accountable for their charity’s use of AI, including when carried out by third-party fundraisers — and the guidance warns against trustee knowledge of AI being concentrated among a minority of individuals. Only a fifth of charities are currently reviewing their governance to give trustees proper oversight. That gap is where risk lives.
Where do we start? A third of charities say they don’t know how to get started — and that number rises sharply among smaller organisations. The Fundraising Regulator is clear that even charities that decide not to use AI for fundraising purposes may still need to consider how the use of AI by others could affect their fundraising activities. Opting out isn’t the same as being unaffected.
What should our leaders and trustees understand? Over a third of CEOs and nearly half of all charity boards rate their AI skills and confidence as poor. And yet AI decisions — what tools to adopt, what policies to set, what risks to accept — now require leadership sign-off in a way they didn’t twelve months ago.
These are the conversations we’re having with charities. Not “here’s a tool you should buy” — but “here’s how to think about this for your organisation, given what you do, who you serve, and what the regulator now expects.”
What we bring to this
We’ve been working in digital long enough to have lived through several cycles of “this changes everything.” Some did. Some didn’t. What we’ve learned is that the organisations that navigate change well are the ones that stay grounded in purpose — using new technology to do what they already care about, rather than chasing it for its own sake.
That’s the lens we bring to AI work with charities.
We’re not aligned with any AI platform or tool vendor. We don’t have a product to sell. That means our starting point is always your organisation — what you’re trying to achieve, what you already have, what your team can realistically adopt, and what the risks look like in your specific context.
We’ve worked across a wide range of digital environments — from complex data integrations and content systems to fundraising platforms and beneficiary-facing services. AI touches all of those areas in different ways, and understanding how it intersects with the systems and processes you already have is as important as understanding the technology itself.
What this might look like in practice
Because every organisation is different, we don’t offer a standard AI package. What we do offer is a practical, honest engagement that starts with your situation and works outward from there.
That might mean helping you think through whether and how AI could support your fundraising, your communications, your content, your data analysis, or your service delivery — and what the implications are for each.
It might mean working with your team or your leadership to build enough shared understanding that you can make informed decisions, set sensible policies, and govern AI use in a way your trustees and funders can support.
It might mean a focused piece of work on a specific question — how to make your content more visible in an AI-shaped search landscape, for example, or how to assess the data privacy implications of a tool you’re already using.
Or it might mean sitting alongside you as you navigate a decision that’s already on the table, and helping you ask the right questions before you commit.
We’re deliberately not prescriptive about what this looks like before we’ve spoken to you — because the right answer depends entirely on where you are.
A note on honesty
There’s a lot of AI hype in the market right now. A lot of vendors promising transformation. A lot of consultancies packaging up the same generic frameworks with a charity logo on top.
We’d rather be useful than impressive. That means telling you when something isn’t worth your time or budget. It means being clear about what AI can and can’t do — including the risks the Fundraising Regulator has been explicit about: AI tools can generate content that is plausible yet inaccurate, replicate bias, or have unintended consequences. It means acknowledging uncertainty rather than projecting false confidence.
The regulator puts it plainly: it is for each charitable fundraising institution to decide if using AI is right for them. We take the same position. Our job isn’t to sell you AI — it’s to help you make a good decision about it.