For me, this is the question of the moment, with the Education and Adoption Bill, whose first section is on the charmingly-worded but as-yet-undefined term ‘coasting schools’, having started its passage through Parliament.
The bill promises to sweep a new category of these schools into the reach of the ‘intervention’ powers of the Secretary of State, which include issuing academy orders forcing schools into the arms of new sponsors.
In a blunt exchange at Education Questions in the House of Commons this week, Nicky Morgan, the Education Secretary, reminded her Labour shadow, Tristram Hunt, that the definition of coasting schools would not come until part-way through the passage of the bill, at its committee stage. But I’ve already had a steer on its likely content. The Department for Education’s press office has told me that the ‘coasting’ definition will focus on data, and specifically the school’s performance over time. The idea, I was told – these are not my words – was to home in on schools which have failed to fulfil their pupils’ potential.
If ‘coasting schools’ are to be defined entirely in terms of results data, I think this will be the first time that formal intervention powers by central government will have been triggered completely by assessment statistics. This already appears to be a departure both from a promise reportedly made by David Cameron before the election and from the contents of the Conservatives’ election manifesto.
In March, the Daily Mail warned that ‘coasting schools’ would be targeted under a new Conservative government, with the Prime Minister quoted – depressingly, though predictably given our experience of the past 20 years of education policy-making – as ‘waging all-out war on mediocrity’.
However, the definition of ‘coasting’ suggested in that piece was an Ofsted judgement. Schools falling in the inspectorate’s ‘requires improvement’ category would ‘automatically be considered’ for turning into academies. Only if they could demonstrate clear plans for improvement, as judged by the Regional Schools Commissioners – England’s new cadre of officials appointed by the Secretary of State, taking decisions in private – would they avoid a change of leadership. The manifesto backed this up, saying: ‘Any school judged by Ofsted to be requiring improvement will be taken over…unless it can demonstrate that it has a plan to improve rapidly.’
But there were indications post-election that the definition was changing. Now it appears that results statistics are going to be the key driver. And that, of course, has big implications.
First, I think it has repercussions for the very controversial language used. A little diversion might be in order here, into the perhaps simpler realm of football.
Imagine, say, a football team, without any great history of success, which gets promoted to the Premier League one season. In its first year in the top tier, it finishes, say, 12th. This is seen as a big achievement, as the club beats many longer-established, richer outfits and comfortably avoids the relegation that comes with finishing 18th or lower. The following season, results are even better, with a 10th place finish the reward. The next two seasons, consistency seems to have been achieved, with 11th and 13th places secured.
However, any outsider looking only at the club’s end-of-season position over the years might conclude that it has been drifting. Someone could even call it a ‘coasting’ club in its last two seasons, based on data alone. But while the possible reaction to the club’s statistical direction of travel – sack the manager – may or may not be right, any implication that it was ‘coasting’, and therefore not trying, would be to over-interpret the results. Faced with that ‘coasting’ slur, the club’s manager and his coaching staff and players would be incensed. The manager arrives at his desk at 6.30 every morning, hardly takes a holiday in the summer, and the attention to detail on the training ground is phenomenal.
But the manager does not have total control over the performances of his players and is up against other teams who may be trying similarly hard. He argues that, in a league where he will never have the budgets of the big clubs, survival in the Premier League is success. While results, then, might suggest non-progress, this is based on anything but a sense that the manager is just taking things easy: it is a real triumph.
In contrast, there was the case of a real Premier League football club recently, again having established itself comfortably in mid-table following promotion, where the manager was said by the club’s board to be too laid-back about training. He was replaced by a former player, who has marginally improved its overall position. ‘Coasting’ might have been a more appropriate word in that case, if the characterisation of the former manager’s attitude was right.
My point is that data alone will never tell you whether a football club, or indeed a school, is ‘coasting’ or not. ‘Coasting’ suggests a lack of effort but all we have, with results data, is a statistical end product: the output numbers. Teachers could be working phenomenally hard, and yet failing to improve results as much as outsiders might wish, because schools, in reality, do not have full control over results. These are, inevitably, subject to unpredictability, from the motivation and ability of pupils to ‘perform’ on the big day to the vagaries of marking. And there may be a sense of a zero-sum game: ‘below-average’ schools will always be penalised, even if all schools are working very hard, if the indicators used are based on comparing one school’s results to others’.
Are we really happy, I wonder, to bandy around a word, with all its dismissive implications for the professionals whom our system has spent years training and paying, and with whom we entrust our children, when we are unsure of its accuracy in individual cases?
So to use results as indicators of underlying effort is as lazy – is this ‘coasting’ policy itself merely coasting along in its accustomed comfort zone? – as it is potentially misleading. And, by appearing brazenly unconcerned about who gets labelled in this way, policy-makers seem to compound the insult that the word ‘coasting’ undoubtedly provokes in many in the profession.
I have to say, surveying the school accountability regime as I do, that I find it very hard to believe that many, if any, schools can truly be said to be ‘coasting’. There are, surely, already too many penalties for those schools which fail to improve their pupils’ results, starting with the head losing his or her job following a failing Ofsted, for any of them to take it easy, I reason. And surveys of teacher workload surely make unarguable the case that most professionals are putting in very long hours in term-time – and often adding to them in the holidays – often under considerable pressure.
And yet here we have the phrase ‘coasting schools’ backed not only by the Prime Minister and his Education Secretary, but written on the face of an education bill, in its first clause.
Individuals whom I respect, working more closely with schools than I do, have countered that there are some institutions which are not working as hard as they could to provide the best possible education for their pupils. Fair enough. But my point remains: data alone will not tell us which ones they are, because there is no straight read-across from outcome data to teacher commitment and motivation. This seems to me to be another example of policy-makers making heroic assumptions about what can be read into results statistics alone.
We will have to wait until we have a definition in full – if, indeed during this bill’s passage, we get all the details which will be used in reality by those taking decisions on schools – in order to judge the technical reliability of the datasets being used. But with the futures of more schools poised to hinge on results statistics, this is likely to place even greater weight on, for example, marking reliability. Can it withstand the pressures being placed upon it? Again, the assumption is always that it can. But national curriculum tests and GCSEs, for example, have not been designed with the intention that institutions’ existence could rest on them.
A final implication should be obvious to anyone who is interested in the unintended consequences of assessment-driven accountability. Allowing schools to be placed as, in the language of the bill, ‘eligible for intervention’ – in other words, available for a management takeover – on the basis of results data alone will, surely, accentuate teaching to the test. With so much riding on performance on a particular set of indicators, the incentive for schools to concentrate even more narrowly on doing whatever it takes to maximise performance on those particular indicators will be underlined. If, on the other hand, the statistical definition of ‘coasting’ is not precise, teaching to particular indicators may be more tricky but then Regional Schools Commissioners stand to be accused of arbitrariness in selecting which schools count as ‘coasting’.
To ministers and those defending these plans, this is all to the good. The ‘war on mediocrity’ really will force institutions and those working in them to raise their game, with the implication that countless previous reforms in the same vein have not fully succeeded in doing so. Labelling schools, then, as ‘coasting’ – even if the label is in some cases inaccurate – is not a problem and will just reinvigorate professionals who need a bit of a push. And focusing on particular indicators is fine, as these centrally-defined metrics will just spur teachers to prioritise aspects of education which are important.
To this observer, who sees teachers for the hard-working, often stressed individuals they are, and who wonders both about the message being sent to this and the next generation of professionals about their efforts, and about the alienation of policy-making from its implications on the ground, there is a sense of despair.
As ever, and as evidenced and articulated by the Cambridge Primary Review, the hope is that professionals can still educate pupils well in spite of policy-making, rather than because of it.
Warwick Mansell, one of CPRT’s regular bloggers, is a freelance journalist and author of ‘Education by Numbers: the tyranny of testing’ (Methuen, 2007).