Two major reports on the effectiveness of the government’s central education policy – turning schools into academies, preferably in chains – have been published in the past two weeks. But do they get to the truth of the policy? Not remotely, I think. I say that even though the reports serve a useful public interest function in holding ministers to account.
The central problem with these reports is that they see the success or otherwise of the academies scheme entirely through the lens of test and exam results, either of individual institutions or of institutions grouped together in chains or, more loosely, in local authorities. Although this approach purports to offer an ‘objective’ insight into the quality of academies, and by extension the success of the policy itself, in fact it has some serious problems.
The methodology
The two studies I highlight here are, first, one for the Sutton Trust charity, called Chain Effects: the impact of academy chains on low-income students. This is the third in a series which seeks to gauge the success of multi-academy trusts (MATs) by the exam results of disadvantaged pupils on their books. The second, School Performance in multi-academy trusts and local authorities – 2015, is an analysis of results in academy and local authority schools published by a newly named think tank, the Education Policy Institute (EPI).
The Sutton Trust study produces five exam result measures for 39 MATs, all using the results of each of ‘their’ disadvantaged pupils to pronounce on how well each chain does for these pupils. The EPI paper offers a verdict on the overall performance of academy chains, this time using two exam result measures for pupils who count in official DfE statistics as being educated in these chains.
Both studies, which are statistically much more impressive, say, than a DfE press release – though that may be setting the bar very low indeed – found that the chains varied considerably in terms of their ‘performance’. They therefore garnered media attention for some findings which will not have been welcomed by ministers.
The reports may also be invaluable in another sense. Ministers – and this seems likely to remain the case even with Justine Greening replacing Nicky Morgan as Secretary of State – tend to justify their academies programme largely in terms of institutional exam results. If research considers the academies project on ministers’ own terms and raises serious questions, then that is an important finding.
Problems: teaching to the test and inclusion
However, there are two main problems. The first is well-known. It is simply that focusing on exam results as the sole arbiter of success may tell us how effective the institution is at concentrating on performance metrics, but not much about other aspects of education. It may encourage narrow teaching to tests.
Despite the multiple measures used, both of these reports seem to encourage one-dimensional verdicts on which are the ‘best’ academy trusts: the ones in which the pupils counted by the indicators – disadvantaged pupils in the Sutton Trust research, pupils as a whole in the EPI study – achieve the best results.
Yet the reality, it seems to me, is much more complex. A prominent academy chain, which runs schools near where I live, has been known to do well in statistical assessments of its results. Yet some parents I speak to seem not to want to go near it, because of a hard-line approach to pupil discipline and a reportedly test-obsessed outlook. This may generate the results prized in studies such as these, but are these schools unequivocally better than others? I think researchers should at least acknowledge that their results may not be the final word on what counts as quality. My hunch is that these studies may be picking up on academy trusts which are more successful in managing the process of getting good results for their institutions. But is that the same as providing a generally good, all-round education for all those they might educate? The reports offer no answers because they are purely statistical exercises which do not investigate what might be driving changes in results. So we need at least to be cautious with interpretation.
This is especially the case when we move on to perhaps the less obvious concern about these studies. It is that both investigations focus entirely on results at institutional level, counting the success of schools in getting good results out of those pupils who are on their books at the time the statistical indicators are compiled. However, this ignores a potentially serious perverse incentive of England’s results-based, increasingly deregulated education system.
The studies seem entirely uncurious about what is often put to me, by observers of its effects on the ground, as a very serious risk inherent in the academies scheme as currently understood. This is that giving each academy trust a degree of autonomy, coupled with the pressure on each trust to improve its results, creates a perverse incentive for trusts to become less inclusive.
In other words, they either use admissions to take on more pupils who are likely to help their results, or they try to push out students who are already on their books but less likely to help their results. This concern is referenced in the research review I carried out for CPRT, which quotes a finding from the Pearson/RSA 2013 review of academies: ‘Numerous submissions to the Commission suggest some academies are finding methods to select covertly’. The commission’s director was Professor Becky Francis, a co-author of the Sutton Trust study, so it is surprising that the latter paper did not look at changing student composition in MATs.
A statistical approach that sums up the effectiveness of academy chains entirely through their results, with no way of checking whether they are becoming more selective, does not address this issue.
I admit, here, that I have more reason to be concerned at the secondary than at the primary level. Since 2014, I have carried out simple statistical research showing how a small minority of secondary schools have seen the number of pupils in particular year groups drop sharply between the time they arrive in year 7 and the time they complete their GCSEs in year 11.
Indeed, one of the top-performing chains in both these reports – the Harris Federation – has recently seen secondary cohort numbers drop markedly. Harris’s 2013 GCSE year group was 12 per cent smaller than the same cohort in year 8. The 2015 Harris GCSE cohort was 8 per cent smaller than when the same cohort was in year 7. This data is publicly available, yet neither report investigates shrinking cohort sizes. That is not to say anything untoward has gone on – Harris is also very successful in Ofsted inspections, and has said in the past that pupils have left to go to new schools, to leave the UK or to be home-educated – but it would certainly seem worth looking into.
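To make the arithmetic behind such figures concrete, here is a minimal sketch in Python of the cohort-shrinkage calculation described above. The function name and the headcounts are illustrative assumptions of mine, not figures drawn from the DfE census files themselves.

```python
# Minimal sketch of the cohort-shrinkage arithmetic, using hypothetical
# headcounts; real figures would come from the DfE's published census data.

def cohort_shrinkage(entry_count: int, exit_count: int) -> float:
    """Percentage fall in a year group between entry and the GCSE year."""
    return (entry_count - exit_count) / entry_count * 100

# Illustrative example: a cohort of 1,000 pupils that sits GCSEs with only
# 880 pupils has shrunk by 12 per cent, the scale of fall cited above.
print(cohort_shrinkage(1000, 880))  # -> 12.0
```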
When the Sutton Trust study mentions ‘[academy] chains that are providing transformational outcomes to their disadvantaged pupils’, its figures are based only on those actually in the chains in the immediate run-up to taking exams. Would the analysis change if it included all those who started out at the schools? We don’t know. It is remarkable that DfE data suggesting major changes in pupil cohorts is available, yet seems not to have been looked at.
In addition, the fact that high-profile research studies purporting to show the success of organisations are not considering alternative readings of their statistics may incentivise those organisations not to think about students they consider harder to educate. Results measures currently provide an incentive to push such students out.
The lack of curiosity is all the more surprising given that the issue of ‘attrition rates’ – schools losing students – has been live in the debate over the success of one of the largest charter school operators in the US, KIPP schools.
As I’ve said, I don’t think this is just a secondary school issue. It is also a potential problem for any research which seeks to judge the success of primary academies solely with reference to the test results of pupils who remain in the schools at the time ‘performance’ indicators are calculated.
For, with reference to the academies scheme in general, as a journalist delving into goings-on at ground level, I frequently come across claims of schools being reluctant, for example, to portray themselves as focusing on special needs pupils – so as not to attract such youngsters in the first place – or even trying to ease out children who might present behavioural challenges.
These two reports paint a simple picture of ‘more effective’ and ‘less effective’ academy chains. But the reality I see, based on both published evidence and many conversations on the ground, is rather different. I see a system which incentivises leaders to generate results that are good for the school. But is that always in the best interests of pupils? Should a school whose results are rising, but which also seems to be trying to make itself less attractive to what might be termed harder-to-educate pupils, be seen as a success?
These are very important questions. Sadly, the reports provide no answers.
This is the latest in a series of CPRT blogs in which Warwick Mansell, Henry Stewart and others have tested the government’s academies policy, and the claims by which it is so vigorously pursued, against the evidence. Read them here, and download Warwick’s more detailed CPRT research report Academies: autonomy, accountability, quality and evidence.
Warwick has also written extensively about the side-effects of results pressures in schools, most notably in his book ‘Education by Numbers: the tyranny of testing’ (Methuen, 2007).