Our Fundraising Fair speaker Charlie Hulme reveals the mistakes made when trying to understand your supporters:
No matter how much time, energy and money we invest, our new supporter journey always ends up in the same place the old one did. Not surprising when you consider this year’s target is simply to hit last year’s number.
But if that was having little impact when things were ‘easy’, how much less will it have now?
There’s a growing recognition that our sector has no meaningful insight into why people support us. We often hear people support “because they were asked” and/or “because it feels good”. Which is as insightful as saying water is wet!
‘Why’ doesn’t exist on anyone’s CRM. All we ever measured and managed was who gave (socio-demographics), when (recency/frequency), how much (value) and how (channel). But none of these tell us why someone supports. Which makes all our well-intentioned jargon about ‘journeys’, ‘donor-centric’, ‘engagement’ etc. sound at best naive, at worst presumptuous.
So now the sector’s on a journey to uncover the elusive ‘why’, looking for new and better ways to profile and segment supporters. But (with a handful of exceptions we’ll share in our session) no one’s been successful.
The mindset is 100% correct, but the methodology isn’t. So, it’s worth looking at where so much profiling and segmentation goes wrong.
The first mis-step is usually some kind of supporter survey or focus group. Yet behavioural science has repeatedly shown that people can’t reliably answer these questions. Not because they purposely lie, but because they’re poor at predicting their own future behaviour, or because the question itself can’t be answered reliably.
There is always some set of variables used to create segments, chosen either by winging it or by a statistical grouping method (the latter just lends a false sense of confidence). The question that really matters is which variables get used. Why?
Whatever variables you use to create segments, you will, by definition, see differences on those exact same variables when you profile them. This is circular and tautological. And it matters, because profiling the segments on the very variables used to create them is precisely what makes people feel confident the segments are genuinely different.
If you hold more data on your CRM and add it to the profile, you will see even more differences. The more data you use to describe the segments, the more different they’ll look. These differences are as inevitable as they are irrelevant.
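This circularity is easy to demonstrate. Here is a minimal sketch (in Python with NumPy and a toy k-means; the data and variable names are entirely hypothetical, not anyone’s real CRM): it clusters data that is pure random noise, then ‘profiles’ the clusters on the same variables used to create them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure noise: 1,000 hypothetical "supporters" scored on two attitude variables.
# By construction there are NO real segments in this data.
X = rng.normal(size=(1000, 2))

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means, just enough to illustrate the point."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centre, then recompute centres.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=3)

# "Profile" the segments on the same variables used to create them:
for j in range(3):
    print(f"Segment {j}: mean attitudes = {X[labels == j].mean(axis=0).round(2)}")
```

The printed segment means look clearly separated, yet there is no real structure in the data at all; the ‘differences’ are purely an artefact of how the segments were built.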
When giving-behaviour variables are used, in whole or in part, to create segments, you’ve reduced your segmentation to a selection tool (in other words, it describes what supporters did, not why they did it or will do it again). Why?
Because giving behaviour is being used to explain giving behaviour! It is all effect, zero cause.
This is no different from regular RFV selection, since the behaviour variables drown out everything else. But those other variables (e.g. attitudes about your cause and wanting to help those in need) will make it look like something different, something tied to motivation. It isn’t.
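A sketch of why behaviour drowns everything else out: if a large-scale behaviour variable (here a hypothetical lifetime giving value) is clustered alongside small-scale 1–5 attitude scores without standardising them, the result is almost identical to clustering on the behaviour variable alone. Again the data, scales and names are illustrative assumptions, not any real segmentation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical supporter data (illustrative only):
value = rng.gamma(2.0, 50.0, size=n)           # lifetime giving value, roughly 0-500
attitudes = rng.integers(1, 6, size=(n, 2))    # two attitude statements on a 1-5 scale

X = np.column_stack([value, attitudes]).astype(float)

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means, just enough to illustrate the point."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels_all, _ = kmeans(X, 2)                   # behaviour + attitudes, unscaled
labels_value, _ = kmeans(value[:, None], 2)    # behaviour alone

# Agreement between the two clusterings (allowing for swapped cluster labels):
agree = max((labels_all == labels_value).mean(),
            (labels_all != labels_value).mean())
print(f"Agreement with value-only clustering: {agree:.0%}")
```

In runs like this the two clusterings agree on nearly every supporter: the ‘attitudinal’ segmentation is really just a split on giving value, i.e. RFV selection in disguise.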
The reason behaviour variables get thrown into the mix is that attitudinal segmentation, more often than not, has little bearing on behaviour. Why?
Because there was no theoretical basis for the attitudes used to create the segments. It just intuitively feels right to ask a series of global questions about how people feel (e.g. about your cause, about helping people) and then group respondents based on their answers.
When you do this you merely wind up with 3 or 4 segments showing varying degrees of feeling across the battery of statements. Which is neither interesting nor useful.
When the descriptive profiling is done on these attitude-only segments, there is often little difference in behaviour. So some of those behaviours get thrown into the set of variables used to create the segments. And there you have it: instant ‘differences’ on the behaviours you care about across your segmentation. But in reality, it’s just a weak form of RFV selection.
Many add demographics to the attitudes to guarantee differences in the segments (to fit some preconceived, false notion that demographics matter). Adding age to the attitude variables will, by definition, create segments that think a bit differently (on random but alluring, irrelevant information) and that have different average ages.
This approach always yields an impossible-to-deliver-on number of ‘strategic’ segments, putting the ‘less’ in useless.
Again, the mindset of re-segmenting based on a deeper understanding of supporters is 100% right. But it’s not hard to see why this popular methodology is wrong.