
Getting started with AI as a non-profit

Nov 29th, 2025

“We know we should be doing something with AI… but we have no idea where to start.”
I’ve heard this sentence quite a few times now, from executive directors, donor relations managers and other stakeholders in non-profits who have seen the hype in the for-profit world and wonder how AI can actually help them do more with less. Most have already dipped a toe in: they’re experimenting with ChatGPT, Claude or Grok, and it’s fun at first, but then they get stuck. They don’t know how to turn playful prompts into something that actually saves the team hours every week.
The reality most non-profits operate in is one of trying to achieve as much as possible on a shoestring budget, and investing both time and money in new technology can feel reckless when they’re already stretched thin delivering on their core mission.

I’m biased, of course, but my answer to the question of whether a non-profit can benefit from AI is an emphatic yes. And the good news is that getting value from AI tools doesn’t require a PhD. Below are my recommendations for organizations looking to start this journey and do more with less, using AI.

First, pick your AI champion

A good place to start is to form a simple AI strategy: what you intend to do with AI, how you intend to use it and what you hope to achieve with it. But how do you do that if you don’t have an AI expert on staff and are starting from zero?

My advice is pretty simple: appoint an internal champion. Find the person on your team who is naturally enthusiastic about trying new things and perhaps already an eager learner when it comes to technology. This person could sit anywhere in your organization; they don’t need to be part of your leadership team or work in areas like operations where change management usually happens. Make this person your “AI lead” and give them dedicated time every week to read, learn and, most importantly, experiment, then share what they learn with the rest of the staff. They should also use this time to capture the AI strategy and revisit it quarterly.

Don’t fall in love with the hammer…

…might seem like a strange headline, but I’m sure you know the saying that “to someone with only a hammer, every problem looks like a nail.”

I have spent many years building startups and helping others build them, and one of the first things I tell budding entrepreneurs is: start with the problem, not the solution. The startup and technology-integration graveyard is full of stories of people who started with the solution and then went looking for a problem to fit it. I’ve done it myself and learned the lesson.

With AI, this often happens when organizations say “we know we should be doing something with AI, but we don’t know where to start.” They start with AI, then look for places to integrate it, and more often than not this approach fails. Instead, ask your appointed AI lead to start talking to people in the organization and identify where time is spent on repetitive, manual tasks every week. You can probably already picture a few, but common areas include grant writing, donor management and communication, content creation and data analysis. These are all areas where AI can take over tasks that cost staff hours every week. Examples include:
* Donor communication: personalizing donor updates instead of sending a generic blast
* Grant writing: drafting a first version of a grant proposal based on previous successful grants
* Data analysis: automating data collation from different systems and doing analysis on it
* Content repurposing: turning your annual report PDF into bite-sized tweets and a newsletter summary
Only once you have figured out what your really “expensive” problems are, in terms of time and energy, should the AI lead move on to the tools and approaches AI offers to solve them. Not the other way around.

Solve one thing really well, then repeat

There are new AI solutions hitting the market every single week; being in the AI space, half the battle is staying on top of new research, models, platforms and tools. It’s important to do some research on which tools might be right for your organization and your particular challenges. Pick one problem from your list and find the tool that solves it best. Most offer free trials, so you can test what actually works before committing.

You might ask whether adopting a range of new tools is really necessary, or cost efficient at all. Not necessarily. While it pays to spend time researching tools and solving one problem at a time, you don’t need to replace your entire technology stack. You might look at AI solutions for fundraising, donor management or grant writing, but a lot of ground can be won by integrating a single AI automation solution that connects your existing tools and using it to solve the problem you decided to tackle. One good win pays for itself quickly, and the time you get back lets you tackle the next expensive problem.

One thing to keep in mind is that AI lives on data, and if your organization is like most others, your data is probably siloed in different places and formats and needs consolidating before an AI tool can make proper use of it. Cleaning and moving that data is usually the first thing an automation tool can fix for you. Using your proprietary data with AI tools and models raises one final and very important point.

AI, privacy and security

You might already know that AI models like ChatGPT are trained on billions of data points, and while AI providers have guardrails on what information you can get out of them, your AI strategy needs to be safe first and foremost. This means everyone in your organization must understand what they can and can’t do with your proprietary data. The number one rule is to always safeguard PII (personally identifiable information) and never feed it into tools like ChatGPT. Not only because you don’t want tech companies to have your data, but also because you may well be breaking data protection laws.

I won’t use this post to go deep into what PII is. There are AI tools that are perfectly safe to use, but you need to know how your data is handled, just as with any data your organization uses or stores on internet-enabled services. As your organization starts experimenting with AI, a key step is to develop an “AI usage policy” that makes very clear to your staff what they can and can’t do. Responsible data hygiene isn’t a burden; it’s the reason donors will still trust you tomorrow.

Your next move

AI isn’t a magic wand, but when you start with real problems, a lightweight plan and basic safeguards, it becomes the best force multiplier your mission has ever had. At Daneeca we help non-profits build exactly this: lightweight strategy, safe automations, real results. If you need help getting started with your AI journey, please reach out and book a free consultation. No sales pitch, just honest advice.

© Daneeca Growth Consulting 2025. All rights reserved.