The aim of the Cabinet Office’s £30m Community First Neighbourhood Matched Fund was to get small grants to the under-the-radar community projects working at the heart of the most deprived areas in the UK. Did it succeed?
Until this month, it would have been very hard to tell, without extensive research, where the grants went, how much was distributed, or whether the money ended up in the most deprived areas.
But now that the Cabinet Office has published the Community First Neighbourhood data to the 360Giving standard, it has taken me just a few hours to find out how much was distributed, where the grants went, what kinds of organisations received them and if they hit their target.
I’ve produced a short report which gives some basic stats on the grants made by the programme.
It shows the largest grants were for £2,500, which was also the most frequently awarded amount – around one in five grants (3,278) were for this amount, with a further 1,171 grants of £1,000. The grants given by the fund were generally very small, with an average award of £1,500. Interestingly, the data also tells us something about the life cycle of the fund: the average grant size grew slightly over its course, from a mean of £1,274 in 2012 to £1,514 in 2014.
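These headline figures fall out of a couple of lines of pandas. Here is a minimal sketch with made-up sample rows standing in for the real file – the column names ("Amount Awarded", "Award Date") follow the 360Giving standard, but the values are purely illustrative:

```python
import pandas as pd

# Hypothetical sample standing in for the Community First grants data
grants = pd.DataFrame({
    "Amount Awarded": [2500, 2500, 1000, 2500, 1200, 1000],
    "Award Date": pd.to_datetime(
        ["2012-06-01", "2013-03-15", "2012-11-20",
         "2014-02-10", "2014-07-04", "2013-09-30"]),
})

# Most frequently awarded amounts
print(grants["Amount Awarded"].value_counts().head())

# Mean grant size per calendar year, to see change over the fund's life
mean_by_year = (grants
                .groupby(grants["Award Date"].dt.year)["Amount Awarded"]
                .mean())
print(mean_by_year)
```

The same `value_counts` / `groupby` pattern applied to the full dataset gives the distribution of award sizes and the year-on-year averages quoted above.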
And by using Local Authority codes, I was able to link to data on the Index of Multiple Deprivation (sourced from the brilliant opendatacommunities site). This showed that the grants were targeted at the most deprived areas – around 35% of grants went to organisations working in the most deprived 10% of local authorities.
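The deprivation link is a straightforward join on the GSS code. A sketch of the idea, with invented rows – the grant-side column name follows the 360Giving beneficiary-location fields, and the IMD side assumes a simple lookup table of local authority deciles:

```python
import pandas as pd

# Invented grant rows keyed on GSS local authority codes
grants = pd.DataFrame({
    "Identifier": ["CF-1", "CF-2", "CF-3"],
    "Beneficiary Location:Geographic Code": ["E08000012", "E07000223", "E08000012"],
})

# Hypothetical IMD lookup; 1 = most deprived 10% of local authorities
imd = pd.DataFrame({
    "GSS Code": ["E08000012", "E07000223"],
    "IMD Decile": [1, 7],
})

linked = grants.merge(
    imd,
    left_on="Beneficiary Location:Geographic Code",
    right_on="GSS Code",
    how="left",
)

# Share of grants going to the most deprived decile
share_most_deprived = (linked["IMD Decile"] == 1).mean()
print(f"{share_most_deprived:.0%} of grants in the most deprived 10%")
```

Run against the real data, this is the calculation behind the 35% figure.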
Then by matching to registered charity data, I brought in data on the charities that have received funding. It showed that nearly three-quarters of charities funded say they work with children and young people, and that economic and community development is another key area for them.
The value of the 360Giving Open Data Standard is that it allows for data comparison. How did the Community First Neighbourhood Matched Fund compare with the similar, though much bigger, Big Lottery Fund ‘Awards for All’ programme? Analysis showed there was a greater focus on deprived areas in Community First than Awards for All, which funded a higher number of registered charities. It was interesting to see where funding overlapped – around 500 organisations (1% of total recipients) received funding from both Community First and Awards for All.
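Once both funds' recipients are resolved to a common identifier (here, charity numbers), the overlap is just a set intersection. A toy version, with made-up identifiers:

```python
import pandas as pd

# Hypothetical recipient charity numbers for each fund
community_first = pd.Series(["1100001", "1100002", "1100003"])
awards_for_all = pd.Series(["1100002", "1100003", "1100004", "1100005"])

overlap = set(community_first) & set(awards_for_all)
total_recipients = len(set(community_first) | set(awards_for_all))

print(f"{len(overlap)} organisations "
      f"({len(overlap) / total_recipients:.0%} of all recipients) "
      f"funded by both")
```

This is the kind of cross-funder question that only becomes a one-liner because both datasets share the 360Giving standard's identifiers.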
The 360Giving Standard is built in a way that encourages making these links. By using common identifiers – “GSS” codes for local authorities and other areas, charity numbers for registered charities, company numbers – you remove the ambiguity that comes from recording only the name of an area or organisation, and linking becomes easy.
The Community First data wasn’t perfect in this regard. There were no charity numbers, so I had to add them by matching names against the charity register – but the release is still a great step forward, particularly coming from a central government grant fund.
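Matching on names rather than identifiers means normalising the text first. A crude sketch of the approach, with invented names and a hypothetical register extract – a real match would need fuzzier logic and manual checking:

```python
import pandas as pd

def norm(name: str) -> str:
    """Crude name normalisation: lowercase, drop punctuation and a few
    common tokens. Good enough for a sketch, not a production matcher."""
    name = name.lower().strip()
    for token in ("limited", "ltd", "the ", "."):
        name = name.replace(token, "")
    return " ".join(name.split())

# Made-up recipient names and register rows, purely for illustration
recipients = pd.DataFrame(
    {"Recipient Org:Name": ["The Example Trust Ltd.", "Sample CIC"]})
register = pd.DataFrame(
    {"name": ["Example Trust", "Another Charity"],
     "charity_number": ["1100001", "1100009"]})

recipients["key"] = recipients["Recipient Org:Name"].map(norm)
register["key"] = register["name"].map(norm)

# Left join so unmatched recipients survive with a missing charity number
matched = recipients.merge(register, on="key", how="left")
print(matched[["Recipient Org:Name", "charity_number"]])
```

The unmatched rows are exactly why publishing charity numbers in the first place matters: every name-based match is a judgement call that an identifier would make unnecessary.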
This pretty quick analysis shows that data published to the 360Giving Open Data Standard can quickly be used to give real insight into a grant programme’s performance against its aims. It’s worth also taking a look at the official evaluation of the programme.
As well as the report, I’ve made the data analysis available on GitHub as a Jupyter notebook. This format lets you see the Python code I’m running, some commentary on what it’s doing, and the output of the code, such as charts and tables.
It’s a great resource that will help with future research efforts – for example, by building on previous work like this NCVO report on below-the-radar organisations.