AI (Artificial Intelligence) Hub

How to use AI in your service delivery

In this article, Ioan from Charity Digital explores how charities can enhance services with AI by auditing their existing services, seeking qualitative feedback from their service users, and finding the right AI system to meet their needs. 

The charity sector has embraced artificial intelligence in various ways. But charities are struggling to use it to improve services. We provide some tips and tricks to help you get started.

More than 60% of charities use artificial intelligence (AI), according to the Skills Report. The growth stems from increased awareness, especially around simpler uses, mirroring the landscape across other sectors. A Charity Job report highlights that, across the economy, two-fifths (40%) of workers directly use AI in some form, a figure that rises to 43% in our sector.

Charities use AI. But they use AI in predictable and limited ways, as shown in the Skills Report: the top three use cases are admin and project management, grant funding, and comms. Use cases have remained broadly the same since the arrival of ChatGPT in 2022. Our Reimagining Services report echoed the findings, showing that only 12% of charities use AI in service delivery. Nearly three-fifths (58%) said they do not use, nor do they intend to use, AI in service delivery.

That’s a problem. AI provides huge opportunities in service delivery. In this article, we explore how charities can enhance services with AI by auditing their existing services, seeking qualitative feedback from their service users, and finding the right AI system to meet their needs. 

Audit services with quantitative data  

Let’s start at the start. You do not want to use AI for the sake of AI – that’s a recipe for disaster. The best AI solutions meet a real need or help to reach an established goal. Charities should start by identifying the strengths and weaknesses in their services, with the aim of understanding how AI could potentially help. Start by gaining a quantitative understanding of your service delivery.

That means gathering data wherever it is available, exploring financial records, customer relationship management (CRM) systems, and analytics that might convey the efficacy of services. Anything that shows how frequently services are used, when they are used, why they are not used, who is using them, and so on. Combine internal data with any other useful data you might have from third-party platforms and conduct market research if it relates to your services.  

Data always tells a story. What does that story tell you? Where are the gaps, inefficiencies, and strengths in your services? People in your organisation are not always the best people to read the data. Internal people often make the data tell the story they want rather than the true story. If you know someone outside the charity, whether a paid evaluator or even a volunteer, ask them to define the story the data tells. And trust their judgement, even if it runs counter to your instinct.

Gain feedback from service users 

Charities need to ensure the quantitative is matched with the qualitative, and that means gaining feedback directly from service users. Research suggests fewer than 5% of non-profits have feedback systems to incorporate service user views into decisions. On top of that, the Skills Report shows that fewer than two in five charities (37%) co-design any services with users.

But if charities do not seek the views of beneficiaries, especially when using risky tech like AI, they risk falling into familiar paternalistic patterns, offering solutions to service users that are unwelcome or not fit-for-purpose. That’s why beneficiary feedback mechanisms (BFMs) are invaluable, helping charities to collect, manage, and respond to feedback from service users.

BFMs don’t have to be all-singing and all-dancing. They could take the form of a suggestion box, a hotline, a focus group, or a survey. Perhaps the easiest route is through Microsoft or Google Forms, sending a short and succinct survey to your users to find out the main issues. Tech can improve the reach of a BFM, but charities must consider the needs of service users who are not online. Indeed, the only essential requirement of a BFM is that all service users are able to give feedback equally, whether via SMS, email, or in person, in line with their varied needs.

On top of BFMs, charities should explore more indirect ways of getting feedback. In fact, you can use AI to gain feedback. Parkinson’s UK, reacting to the challenges of the pandemic, used AI to track the topics and themes that communities living with Parkinson’s were discussing online. It revealed key concerns among service users, one of which was keeping fit. The charity started to produce fitness sessions via its YouTube and other digital channels, led by physiotherapists. They went out to get feedback, indirectly, and tailored a solution in response. 

Consider the various AI options available 

Using AI will not necessarily help your charity or your service users. Using the right AI will. Now that you have audited existing services, understood strengths and weaknesses, and gained insight into the challenges facing service users, you can use that knowledge to find the right AI solution.

Consider the type of AI that might suit your services. If, for example, you want to signpost resources to meet service user needs all day, every day, generative AI will likely be the best option, perhaps in the form of a chatbot. But if you want to help people with disabilities to navigate the online world by anticipating and resolving accessibility issues, agentic AI might prove the more useful option.

Then turn to AI providers. Remember to cut through the AI hype and find a valuable solution that works for you. You will want to consider the risks of each platform: Is the provider transparent? How might you mitigate risks? How does the provider itself approach risk and ethics? The final question is very important because any AI platform that dismisses risk should be avoided. AI, particularly generative AI, is a risky tool and acknowledging risk is the first step to mitigation. 

Then you’ll want to test, test, test. Even after you’ve tested, even after you’re convinced by the tool, consider running a small-scale pilot. That will allow you to spot any issues early and provide visibility around potential risks. You can refine the tool at that stage, perhaps sharpening the service user experience and improving the service ahead of the wider release.

In an ideal world, you’ll ask service users for feedback during the trial stage and make further refinements. Once happy with the service, you can launch with a roadmap for implementation – always planning for the future. And then, once completed, you’ll want to go straight back to the top of the article: start tracking the performance of your AI solution and inviting more feedback. 

To explore some use cases for AI, check out our article.

Find out more 

If you’d like to learn more about AI in charity service delivery, join us at the DSC AI Conference, where Laura Stanley, Senior Content Writer at Charity Digital, will break down how to use AI to enhance and improve services, exploring a variety of case studies and ethical considerations.