The Charity Commission’s research into public trust remains a quagmire

Andrew Purkis, DSC's Policy Trustee, critiques the Charity Commission's latest public trust research

The original intentions behind the Charity Commission’s research into public trust and confidence were understandable. Politicians wanting to cull quangos needed to hear how important the public thought the work of the Commission was. And if one of its duties specified by Parliament was to improve public trust and confidence, surely it needed a baseline and research to demonstrate how it was doing and what the drivers were? But the actual result has proved unconvincing, year after year. Here are some of the difficulties.

Is it the patient or the thermometer?

The latest in the series, published in July, shows that public trust in charities has risen a bit. But since the researchers (now called Yonder, formerly Populus) discovered that almost every other institution or profession also registered a rise in trust, in some cases more than our sector’s, it is very hard to assign much significance to any of it.

Or look at the public’s view on whether charities play an ‘essential’ or ‘very important’ role. That is supposed to have gone down 5 percentage points from 2008 to 2010, then up 9 per cent by 2012, then down until 2020, and up 5 per cent again this year. Really? What can explain this yo-yo pattern on such a broad question – could it be the thermometer again?

Not comparing like with like

Trust doesn’t exist in isolation from expectations, as Onora O’Neill has repeatedly shown. What I expect from doctors is reliable medical opinion and a cure. I judge them on that, and they come out well. That doesn’t mean I trust them to use charitable donations well, or to run the country, or to tell me what’s going on in the world beyond medicine. And anyone who wants something from me arouses potential distrust – a bank with its charges, a politician wanting my vote, a charity wanting my money – whereas doctors in a state health service generally don’t want anything from me, nor (in general) do policemen. On the other hand, my trust in my MP or local councillor might be influenced by whether he or she is someone I voted for or someone from a political party that I dislike. So we are not comparing like with like.

The public doesn’t know what is and is not a charity

There is a deeper problem. At least the public know what MPs, newspapers, social workers, policemen and doctors are. But they don’t know what is, and is not, a charity. That is one of the clearest lessons from Charity Commission research over the years, though it doesn’t suit the Commission to admit it. In previous years, but not this year, Populus gave an explicit health warning that the severe limitations of public understanding of what charities are must always be borne in mind when interpreting the results.

What this means is that the researchers are having an earnest conversation about charities with a group who in general have in mind a narrow segment of about nine national household names and have no idea about the vastness and variety of the sector they think they are talking about.

Commenting on the research, Helen Stephenson, the Charity Commission’s Chief Executive, concludes that “More than ever, people need evidence that charities are not ends in themselves, but vehicles for making the world a better place, both through what they achieve, and the values they live along the way”. We hardly need the Aunt Sally that charities might be ends in themselves when every charity must be for a charitable purpose and for the public benefit. Beyond that, it simply doesn’t follow from the research that the provision of more or better evidence by charities in general will make any difference whatever to future research results based on people’s attitudes to about nine household-name charities.

A Monty Python-esque situation

As if the public’s hazy understanding of what is and isn’t a charity were not a big enough obstacle to robust insights, we then find ourselves in a Monty Python-esque situation in which our regulator asks the public for their views on the Charity Commission itself, when roughly half of the sample have never heard of it at all and 81 per cent say they don’t know it even “fairly” well.

Further, there is no comparative perspective on public expectations of other sectors. The report and the Commission itself keep insisting that public expectations of charities are “high”, but these are never set against expectations of other kinds of organisation or profession, so the research does not establish whether expectations of charities are high or not-so-high relative to other sectors.

It’s also well known that what people say and what they actually do don’t always tally, yet this research and its conclusions are based solely on what people say. Take the question of whether diminished trust, as recorded in this kind of research, actually affects the level of donations from the public. The Commission has often stated or implied that it does. Yet even in the worst year for trust in charities, 2016, only 13 per cent of the total sample said this had led them to reduce their donations. But is that true? Were they donating anything anyway? We don’t know.

Indeed, a recent major study by the University of Queensland, discussed by Ian MacQuillan in Third Sector on 3 August, analysed 42 different studies of this topic and concluded that rises and falls in overall trust account for only 5 per cent of the variation in giving. That study also finds that a scandal in an individual charity does not generally harm trust in charities as a whole, contradicting another finding of the Charity Commission and Yonder, and that charity effectiveness is not a particularly strong determinant of giving either. This meta-study highlights the precarious nature of this kind of research.

Imposing a narrative on hard-to-interpret research

Given research with all those major limitations, what happens year after year is that the Commission and its researchers are tempted into selective questioning and interpretation to support the leadership’s preferred narrative. Some loaded questions which have raised eyebrows in the past have been repeated this year. For example, one of these shockers asks the sample to choose between “If you are a registered charity and enjoy the benefits of that status, you have a collective responsibility to uphold the reputation of charity (sic) more generally” (sounds fair enough, surely?) and “If you are a registered charity, your only responsibility is to uphold the reputation of your own charity” (how narrow and self-centred that sounds!). Unsurprisingly, a chunky majority vote against the latter straw man.

Enough is enough

Despite these ambiguities and flaws, the Commission continues to insist that the research reveals “a set of shared expectations about how charities should behave”, which it claims will help charities increase public trust in future years, and it goes on to enumerate a set of ‘public expectations’ in conclusion.

The research is entirely uncritical of the beliefs and opinions it reports. It is as if, as Baroness Stowell came very near to saying, the public must always be right: you must try to translate what they say into something sensible and useful, even when it is ill-informed. This vast gap between the often poorly informed views of ‘the public’ and universal, useful lessons for a sector as diverse as charities is the quagmire into which the Commission and its researchers fall together, year after year.

This research series is not accomplishing the good intentions with which it began. It has proved too difficult. Perhaps it is time to climb out of the quagmire. Enough is enough.

A more detailed version of this piece was published in Civil Society on 19 August 2021.