The Cambridge Primary Review Trust

November 25, 2016 by Warwick Mansell

Education in spite of policy: further reflections on the 2016 CPRT conference

It encapsulated probably the defining contrast I have seen in nearly 20 years covering education: the under-rated commitment and thoughtfulness of much of the teaching profession versus the endless dysfunction, self-centredness and dishonesty of policymakers and the policy process itself.

Here, in the day-long get-together that was the Cambridge Primary Review Trust’s 10th anniversary conference last Friday in London, was an event to convince any observer of the multi-layered professionalism present at least in potential in England’s schools system.

Yet central to the day’s valedictory keynote by Robin Alexander – he is stepping down at the end of next month after 10 years as this remarkable review’s guiding presence – was the force against which the profession seems so often to be battling. This is the largely shallow, frequently failing and usually self-referential Westminster/Whitehall/think tank policy-spewing machine.

‘Education in spite of policy’ was the strapline to Robin’s speech. This is about as good a five-word summary of the state of play in English schooling in 2016 as it gets.1

‘Ever since the 1988 Education Reform Act started transferring hitherto devolved powers from local authorities and schools to Westminster, policy has become ever more inescapable, intrusive and impervious to criticism,’ he said.

What was needed, then, was not more ‘education reform’ but reform of the policy process itself. Hear, hear.

The Cambridge Primary Review’s Final Report, published in 2009, was unflinchingly critical of the above characteristics in a Labour government which, Robin reminded us, sent documents on the teaching of literacy at the rate of roughly one a week to primary schools in the seven years to 2004. Yet there were some aspects which contributors to the review had welcomed: Labour’s Children’s Plan, Sure Start, Narrowing the Gap and the expansion of early childhood care and education.2

The more relevant question now is whether policymaking has worsened since 2010. While Robin welcomed the concept of the pupil premium, he said the current grammar schools proposal flew in the face of evidence, dating back as far as the 1960s, as to its likely damaging impact on those not selected. ‘To have two initiatives from the same government department pulling in opposite directions, both in the name of narrowing the gap, is bizarre. But hey, that’s policy.’

On four of CPRT’s priorities – aims, curriculum, pedagogy and assessment – policy is worse in 2016 than when the report was published in 2009, he suggested. ‘Aims remain a yawning gap between perfunctory rhetoric and impoverished political reality. The new national curriculum is considerably less enlightened than the one it replaced … national assessment … is now even more confused and confusing than it was; and most government forays into pedagogy are naïve, ill-founded and doctrinaire.’

Policymakers can also be a very bad advert for the concept of education in itself, at least when they step away from the soothing rhetoric. Robin reminded us of this with reference to Michael ‘had enough of experts’ Gove and his famous observation that those teachers and academics who disagreed with him were ‘enemies of promise’ and Marxists ‘hell bent on destroying our schools.’

Listening to the speech, and sitting in on a couple of seminars and the day’s final plenary, was to be reminded of another contrast: between the decades of experience many contributors to the conference had to offer and the callowness of those often now shaping policy. I am loth to personalise, but to listen to Robin and to set his isolation from substantial involvement in policy3 against the likes of Rachel Wolf, now opining on ‘the next round of education reform’ and the revelation that policymakers ‘must focus on what goes on inside the classroom’, a few years into a career almost entirely free of experience outside the policy bubble, is to despair.4

So what of the depth elsewhere in the conference? I was fascinated by talks on the merits of philosophy in primary schools; and on the phenomenally popular, Cambridge University-based NRICH maths programme, whose director, Ems Lord, asked the provocative question: ‘is [maths] mastery enough?’ I found presentations on the ideas behind Learning without Limits,5 by academics at the universities of Cambridge, East Anglia and Edinburgh, to be among the most thought-provoking I have heard.

And the final plenary, bringing together the thoughts of author/journalist Melissa Benn, another distinguished academic in Andrew Pollard and a headteacher in Sarah Rutty, offered much good sense. I was taken by Melissa’s description of a ‘brilliant’ – ie it sounded great – speech in 2013 by Gove, on the subject of primary education, which nevertheless showed a ‘wilful ignorance of the history of education’; welcome to post-truth politics. I was also struck by Andrew’s notion of evidence-informed, rather than evidence-based, education, as the former implied the use of value judgement, which was important. On policy, however, his observation that the Department for Education runs ‘consultations which turn out to be pseudo-consultations’ reminded us how distant any kind of evidence can often feel from the directives.

Finally, Sarah launched into a quickfire, and bleakly humorous, tour de force on what it felt like to be on the receiving end of policy suffused by a ‘lack of trust, lack of empathy, lack of joined-up thinking’, including those endless and, she suggested, sometimes borderline incomprehensible missives from the Standards and Testing Agency about assessment changes.

‘As a headteacher, I feel a bit bullied if I’m honest. The government are not listening to our voices. They are certainly not listening to the voices of the children,’ she said.

The title of the final report of the Cambridge Primary Review, of course, was ‘Children, their World, their Education.’ Yet policy, in imposing constant change on schools because this fits both its own internal logic and the political needs of those in charge, staggeringly rarely, in reality, stops to consider the effects on those it is meant to help.

If it did, why would it have introduced major increases in the number of children likely to be deemed failing at 11 as a result of changes in the national assessment and curriculum systems without, as far as I know, having carried out any impact study as to the possible effects on pupils?

If it did, why would it have tried to force major disruptive and expensive structural change on thousands of primary schools without any good evidence that this will help pupils?

If it did, why would it publish a green paper on increasing selection without, seemingly, any consideration of the potential impact on pupils not deemed academic enough to pass a selective test?

Professionalism in spite of policy remains, sadly, the only hope for England’s schools.

Warwick Mansell is a freelance journalist and author of ‘Education by Numbers: the tyranny of testing’ (Methuen, 2007) and the recent CPRT report ‘Academies: autonomy, accountability, quality and evidence’ (May 2016). Read more CPRT blogs by Warwick here.

 

1 – As is also implicit in a blog I wrote in the spring.

2 – CPR was not alone in this view. Another major review, the Nuffield Review of 14-19 Education and Training, England and Wales led by Richard Pring, also concluded in 2009. It investigated the notion that ‘there have been too many fragmented and disconnected interventions by government which do not cohere in some overall sense of purpose’.

3 – He has reminded me that, as well as the 1991-2 ‘three wise men’ enquiry, he has served on quangos such as CATE and QCA, while his persistence over spoken language, in the face of that notorious ministerial objection that classroom talk is no more than ‘idle chatter’, succeeded in getting it reinstated, albeit reluctantly, in the current national curriculum. But my general point stands: on the one hand we have the rich but largely untapped experience and expertise that this conference brought together in abundance; on the other the supplanting of such experience and expertise by ideologically compliant special advisers and ‘expert groups’.

4 – Among several remarkable claims in Rachel Wolf’s blog is that ‘too many schools still resist testing as an “evil”’. Really? No, they’d no doubt like to resist some of the more damaging impacts of high-stakes testing, but policymaking hangs everything on test results, so…

5 – The papers on Learning without Limits will be on the programme’s website from next week.

Filed under: Aims, assessment, Cambridge Primary Review Trust, conference 2016, DfE, evidence, pedagogy, policy, primary teaching, Robin Alexander, Warwick Mansell

October 21, 2016 by David Reedy

Assessment, testing and accountability: a suggestion for an alternative framework

The data from the new 2016 tests for 11 year olds in England is gradually trickling out. We have been informed that 48 percent of the children did not reach the new expected standards in reading, writing and mathematics combined (compared to 35 percent in 2015 under the old system) and are at risk of being labelled ‘failures’. In addition, the calculations have been done to identify each Y6 child’s scaled score and progress measure. Parents have been told something like ‘In reading your child got 99 on the scaled score against the expected standard and 1.6 progress score’. Not terrifically helpful, particularly if the parent has become familiar with Levels over the last 28 years.

Combined with the anecdotal evidence about the problems children had with the reading test, and the abandonment of the grammar test for seven year olds after it was inadvertently leaked, it is no surprise that more and more educationists, parents and organisations are calling for a fundamental review.

I have written in previous blogs about the current system and its shortcomings, now exacerbated by the 2016 experience, drawing on Wynne Harlen’s 2014 research report for CPRT, Assessment, Standards and Quality of Learning in Primary Education, which outlines the evidence concerning the impact of high stakes testing and compares England’s system with those of a number of other countries. Harlen’s key point that ‘the system …. for primary schools in England still suffers from over-dependence on testing and the use of end of Key Stage 2 tests for too many purposes’ (p. 32) indicates that we must consider a fundamentally different approach.

In this blog I outline the key strands which I think would need to be considered under any review, with some suggestions concerning what should be incorporated, based on the available evidence.

The three strands for a comprehensive system of assessment and accountability are at individual child level, school level and national level.

At individual child level the focus must be assessment for learning and assessment of learning (i.e. formative and summative assessment). Assessment must be used to help children while they are learning and to find out what they have learned at a particular point in time. Testing can be a part of this as it can inform overall teacher assessment and help to identify any potential gaps in learning. However, tests cannot give all the information needed to take a rounded view of what children need to learn and what they know and can do. As Harlen states: ‘the evidence shows that when teachers draw on a range of sources of evidence, then discuss and moderate with other teachers, assessment is more accurate’. Relying on the score from a single, externally marked reading test at 11, for example, simply does not provide enough evidence to make a reliable judgement about reading ability.

As a first move in this direction, the system currently used for seven year olds should be adopted at the end of KS2: teacher assessment based on a range of evidence, including, but not determined by, a formal test.

In addition the plethora of evidence-based assessment resources available should be utilised to underpin an approach that is qualitative as well as quantitative. For example there are the CLPE/UKLA et al Reading and Writing Scales which can be used for identifying children’s progress as well as indicating next steps for learning. It is also worth looking at the end of each of these scales where there is an extensive bibliography showing how they are firmly based in research evidence. Something DfE might consider doing.

In summary, the principle that assessment of any kind should ultimately improve learning for children is central and should be the criterion against which all assessment practices in and beyond school should be judged.

At school level the focus must be on partnership in assessment as well as accountability. Firstly, that means not only being accountable to parents and the local community the school serves, but also working systematically with them as partners.

Parents have a key role to play in assessment which goes beyond being regularly reported to and includes the sharing of information about the progress of their children both within and beyond school to obtain a fully informed picture. This would be followed by discussions concerning what the school is doing more generally to promote all aspects of learning.

Schools should hold themselves to account through systematic self-evaluation. This self-evaluation should be externally moderated by local partners, crucially through strengthened local authorities, and nationally through a properly independent HMI. However, the system should not feel punitive, as it does to many schools under the current arrangements, but developmental and supportive, including when a school is not doing as well as it should. Any moderated self-evaluation should be formative for the school as well as demonstrating accountability.

CPRT has responded to these concerns by making assessment reform one of its eight priorities, aiming to

Encourage approaches to assessment that enhance learning as well as test it, that support rather than distort the curriculum and that pursue standards and quality in all areas of learning, not just the core subjects.

CPRT’s Priorities in Action assessment webpage lists our multifaceted response to this priority including reports, briefings, blogs, parliamentary and government submissions and purpose-designed CPD for schools.

The final report of the Cambridge Primary Review was also clear that inspection needed to change (p. 500) and recommended that a new model be explored which focussed much more on classroom practice, pupil learning and the curriculum as a whole.

In any review of assessment, the accountability system must be reviewed at the same time. That goes for accountability at national level too.

Current arrangements at primary level are both narrow, only focusing on some aspects of core subjects, and useless for making comparisons across time as the criteria and tests keep changing. A system of sample surveys should be formulated to monitor national standards. These would be based on a large number of components and be able to extend well beyond the core subjects if a rolling programme was organised. England would then be able to judge whether primary education as a whole, in all its aspects, based on a comprehensive set of aims, was being successful and was improving over time. Currently this is impossible to do.

Thus it is not surprising that more and more people and organisations are, alongside CPRT, calling for a fundamental review of assessment, testing and accountability, and that a major campaign is about to get underway. This campaign is to be called ‘More than A Score’ and a major conference has been announced for December 3rd. CPRT fully supports this campaign.

This move to a more effective approach would not be a simple process. As CPR’s final report stated in 2010, ‘Moving to a valid, reliable and properly moderated procedures for a broader approach to assessment will require careful research and deliberation’ (p. 498).

It will take some time but, I believe, for all involved it will be well worth the effort.

Just as this blog was being prepared, Justine Greening, Secretary of State for Education, made an announcement about primary school assessment. This included a commitment to ‘setting out steps to improve and simplify assessment arrangements’, the abandonment of Y7 resits, and no new tests to be introduced before the 2018/19 academic year. There is a welcome acknowledgement in the tone of the statement that current arrangements are not working, although the last point carries alarming implications about the possible introduction of further, unnecessary, high stakes tests.

The Secretary of State also announced another consultation, to take place next year, on assessment, testing and accountability. We have seen many of these so called ‘consultations’ before where the views of educationalists and the evidence from research and experience have been completely ignored.

Another ‘consultation’ is not needed. What is needed is a thorough, independent review in which all stakeholders are represented, and a government that is prepared to listen and respond positively.

David Reedy is a co-Director of the Cambridge Primary Review Trust, and General Secretary of the United Kingdom Literacy Association.

Filed under: accountability, assessment, Cambridge Primary Review Trust, David Reedy, DfE, England, inspection, tests

July 15, 2016 by Warwick Mansell

Academies: statisticians need to raise their game

Two major reports on the effectiveness of the government’s central education policy – turning schools into academies, preferably in chains – have been published in the past two weeks. But do they get to the truth of the policy? Not remotely, I think. I say that even though the reports serve a useful public interest function in holding ministers to account.

The central problem with these reports is that they see the success or not of the academies scheme entirely through the lens of the test and exam results either of individual institutions, or of institutions grouped together in chains or, more loosely, in local authorities. Although this approach purports to offer an ‘objective’ insight into the quality of academies, and by extension the success of the policy itself, in fact it has some serious problems.

The methodology

The two studies I highlight here are, first, one for the Sutton Trust charity, called Chain Effects: the impact of academy chains on low-income students. This is the third in a series which seeks to gauge the success of multi-academy trusts (MATs) by the exam results of disadvantaged pupils on their books. The second, School Performance in multi-academy trusts and local authorities – 2015, is an analysis of results in academy and local authority schools published by a newly-named think tank, the Education Policy Institute (EPI).

The Sutton Trust study produces five exam result measures for 39 MATs, all using the results of each of ‘their’ disadvantaged pupils to pronounce on how well each chain does for these pupils. The EPI paper offers a verdict on the overall performance of academy chains, this one using two exam result measures for pupils which count in official DfE statistics as being educated in these chains.

Both studies, which are statistically much more impressive, say, than a DfE press release – though that may be setting the bar very low indeed – found that the chains varied considerably in terms of their ‘performance’. They therefore garnered media attention for some findings which will not have been welcomed by ministers.

The reports may also be invaluable in another sense. Ministers – and this seems likely to remain the case even with Justine Greening replacing Nicky Morgan as Secretary of State – tend to justify their academies programme largely in terms of institutional exam results. If research considers the academies project on ministers’ own terms and raises serious questions, then that is an important finding.

Problems: teaching to test and inclusion

However, there are two main problems. The first is well-known. It is simply that focusing on exam results as the sole arbiter of success may tell us how effective the institution is at concentrating on performance metrics, but not much about other aspects of education. It may encourage narrow teaching to tests.

Despite the multiple measures used, both of these reports seem to encourage one-dimensional verdicts on which are the ‘best’ academy trusts: the ones which manage to see the pupils who are included in the indicators which the research uses – in the case of the Sutton Trust research, disadvantaged pupils, and in the EPI study, pupils as a whole – achieving the best results.

Yet the reality, it seems to me, is much more complex. A prominent academy chain, which runs schools near where I live, has been known to do well in statistical assessments of its results. Yet some parents I speak to seem not to want to go near it, because of a hard-line approach to pupil discipline and a reportedly test-obsessed outlook. This may generate the results prized in studies such as these, but are these schools unequivocally better than others? I think researchers should at least acknowledge that their results may not be the final word on what counts as quality. My hunch is that these studies may be picking up on academy trusts which are more successful in managing the process of getting good results for their institutions. But is that the same as providing a generally good, all-round education for all those they might educate? The reports offer no answers because they are purely statistical exercises which do not investigate what might be driving changes in results. So we need at least to be cautious with interpretation.

This is especially the case when we move on to perhaps the less obvious concern about these studies. It is that both investigations focus entirely on results at institutional level, counting the success of schools in getting good results out of those pupils who are on their books at the time the statistical indicators are compiled. However, this ignores a potentially serious perverse incentive of England’s results-based, increasingly deregulated education system.

The studies seem entirely uncurious about what is often put to me, by observers of its effects on the ground, as a very serious risk inherent in the academies scheme as currently understood. This is that in deregulating such that each academy trust is given a degree of autonomy, coupled with the pressure on each trust to improve its results, a perverse incentive is created for trusts to become less inclusive.

In other words, they either use admissions to take on more pupils who are likely to help their results, or they try to push out students who are already on their books but less likely to help their results. This concern is referenced in the research review I carried out for CPRT. This quotes a finding from the Pearson/RSA 2013 review of academies which said: ‘Numerous submissions to the Commission suggest some academies are finding methods to select covertly’. The commission’s director was Professor Becky Francis, who is a co-author of the Sutton Trust study, so it is surprising that the latter paper did not look at changing student composition in MATs.

A statistical approach that sums up the effectiveness of individual academy chains entirely through their results, with no way of checking whether they are becoming more selective, does not address this issue.

I admit, here, that I have more reasons to be concerned at the secondary, rather than at the primary, level. Since 2014, I have carried out simple statistical research showing how a small minority of secondary schools have seen the number of pupils in particular year groups dropping sharply between the time they arrive in year seven and when they complete their GCSEs, in year 11.

Indeed, one of the top-performing chains in both these reports – the Harris Federation – has recently seen secondary cohort numbers dropping markedly. Harris’s 2013 GCSE year group was 12 per cent smaller than the same cohort in year 8. The 2015 Harris GCSE cohort was 8 per cent smaller than when the same cohort was in year 7. This data is publicly available yet neither report investigates shrinking cohort size. That is not to say anything untoward has gone on – Harris is also very successful in Ofsted inspections, and has said in the past that pupils have left to go to new schools, to leave the UK or to be home-educated – but it certainly would seem worth looking into.

When the Sutton Trust study mentions ‘[academy] chains that are providing transformational outcomes to their disadvantaged pupils’, its figures are based only on those actually in the chains in the immediate run-up to taking exams. Would the analysis change if it included all those who started out at the schools? We don’t know. It is remarkable that DfE data suggesting major changes in pupil cohorts is available, yet seems not to have been looked at.

In addition, the fact that high-profile research studies purporting to show the success of organisations are not considering alternative readings of their statistics may incentivise organisations not to think about students they may consider harder to educate. Results measures currently provide an incentive to push such students out.

The lack of curiosity is extra surprising, given that the issue of ‘attrition rates’ – schools losing students – has been live in the debate over the success of one of the largest charter school operators in the US, KIPP schools.

As I’ve said: I don’t think this is just a secondary school issue. It is also a potential problem for any research which seeks to judge the success of primary academies solely with reference to the test results of pupils who remain in schools at the time of calculation of ‘performance’ indicators.

For, with reference to the academies scheme in general, as a journalist delving into goings-on at ground level, I frequently come across claims of schools, for example, not being keen to portray themselves as focusing on special needs pupils – so as not to attract such youngsters in the first place – or even trying to ease out children who might present behavioural challenges.

These two reports paint a simple picture of ‘more effective’ and ‘less effective’ academy chains. But the reality I see, based on both published evidence and many conversations on the ground, is rather different. I see a system which incentivises leaders to focus on the need to generate results that are good for the school. But is that always in the best interests of pupils? Should a school which sees rising results, but which also seems to be trying to make itself less attractive to what might be termed harder-to-educate pupils, be seen as a success?

These are very important questions. Sadly, the reports provide no answers.

This is the latest in a series of CPRT blogs in which Warwick Mansell, Henry Stewart and others have tested the government’s academies policy, and the claims by which it is so vigorously pursued, against the evidence. Read them here, and download Warwick’s more detailed CPRT research report Academies: autonomy, accountability, quality and evidence.

Warwick has also written extensively about the side-effects of results pressures in schools, most notably in his book ‘Education by Numbers: the tyranny of testing’ (Methuen, 2007).

Filed under: academies, assessment, Cambridge Primary Review Trust, equity, evidence, policy, school effectiveness, tests

May 27, 2016 by David Reedy

Time for radical change: grammar testing in England’s primary schools  

It has not been a good couple of weeks for testing in England’s primary schools.

There have been leaks of both the KS1 and KS2 spelling, grammar and punctuation tests, leading to the KS1 test being scrapped for this year and accusations by ministers that malign forces are at work to undermine the government’s education reform process.

Baseline assessment for four year olds has also gone, as its unreliability for accountability purposes became so obvious that continuation became untenable. (Not that the problems with testing and accountability are unfamiliar to teachers or parents, as Stephanie Northen and Sarah Rutty reminded us in their powerful recent blogs).

Even before his problem with subordinating conjunctions, Nick Gibb was complaining about the current situation in a speech at the ASCL curriculum summit on 27 April:

You do not need me to tell you that the implementation of the new key stage one and key stage two tests has been bumpy, and I and the department are more than willing to accept that some things could have been smoother. The current frameworks for teacher assessment, for example, are interim, precisely because we know that teething problems that exist in this phase of reform need to leave room for revision.

‘Teething problems’ is a bit of an understatement.

The Cambridge Primary Review Trust is committed to looking at what the widest range of available evidence tells us about assessment and assessment reform, including from experience such as Stephanie’s and Sarah’s as well as formal research, and it argues that decisions should be made at both policy and classroom level based upon that evidence.

I want to briefly look at the research evidence on the grammar tests for seven and eleven year olds and the government’s claims for them, to complement and add to the blogs of the last fortnight.

Nick Gibb argued in his ASCL speech, as well as on earlier occasions, that testing is a way of raising standards in the core areas of reading, writing and mathematics. He said:

Against those who attack the underlying principle of these reforms, I stand firm in my belief that they are right and necessary. Our new tests in grammar, punctuation and spelling have been accused by many in the media of teaching pupils redundant or irrelevant information. One fundamental outcome of a good education system must be that all children, not just the offspring of the wealthy and privileged, are able to write fluent, cogent and grammatically correct English.

He thus conflates performance in these tests with writing fluently and cogently. But the evidence that a test will help the children to get better at writing when it asks six and seven year olds to identify an adverb in ‘Jamie knocked softly on his brother’s bedroom door’ or to decide whether ‘One day, Ali decided to make a toy robot’ is a question, statement, command or exclamation simply doesn’t exist. The experience of this year’s Y2 and Y6 children, before the requirement to do the Y2 test was dropped, was, in many cases, that of separate grammar lessons in which they were trained for the test, making sure they could identify word classes and sentence types through decontextualised exercises, so that they would be able to answer questions like these. If the test is reintroduced in 2017 this will happen again, distorting the curriculum with little or no benefit to pupils.

I make this claim because the research evidence over many years is unequivocal. Debra Myhill, who with her colleagues at Exeter University has extensively investigated the teaching of grammar and has shown that explicit attention to grammar in the context of ongoing teaching can help pupils to improve their writing, summarised that evidence in an April 2013 TES article. She wrote:

I did a very detailed analysis of the test and I had major reservations about it. I think it’s a really flawed test. The grammar test is totally decontextualised. It just asks children to do particular things, such as identifying a noun. But 50 years of research has consistently shown that there is no relationship between doing that kind of work and what pupils do in their writing. I think children will do better in the test than they are able to in their writing because it isolates the skills so that children only have to think about one thing at a time.

Myhill adds that the test will tend to overestimate children’s ability to manipulate grammar and make appropriate choices in their writing.  It would be much more valid to assess children’s ability to manipulate grammar by looking at how they do so in the context of the pieces of writing they do in the broad curriculum they experience. This test is therefore unreliable. It is also invalid.

In her CPRT research report on assessment and standards Wynne Harlen defines consequential validity as ‘how appropriate the assessment results are for the uses to which they are put’. A test which focuses on labelling grammatical features may be valid in testing whether children know the grammatical terms, but it is not valid for making judgements about writing ability more generally. The evidence emphatically does not support Nick Gibb’s claim that the test will lead to ‘fluent, cogent and grammatically correct English’. These grammar tests will not and cannot do what the government’s rhetoric claims.

The Cambridge Primary Review Trust, like the Cambridge Primary Review, supports the use of formal assessments, in which tests have a role, as part of a broader approach to identifying how well children are learning in school and how well each school is doing, though like many others it warns against overloading such assessments with tasks like system monitoring. Wynne Harlen’s reports for CPR and CPRT, and the assessment chapters (16 and 17) in the CPR final report, remain excellent places to examine the evidence for a thoroughgoing review of the current assessment and accountability arrangements, including the place of testing within them, in England’s primary schools.

As I reminded readers in a previous blog, the Cambridge Primary Review in 2010 cited assessment reform as one of eleven post-election priorities for the incoming government. Six years and a new government later, a fundamental review of assessment and testing is still urgently needed.

Assessment reform remains a key CPRT priority. For a round-up of CPR and CPRT evidence on assessment see our Priorities in Action page. This contains links to Wynne Harlen’s CPR and CPRT research reports mentioned above, relevant blogs, CPRT regional activities, CPR and CPRT evidence to government consultations on assessment, and the many CPR publications on this topic.

David Reedy is a co-Director of the Cambridge Primary Review Trust, and General Secretary of the United Kingdom Literacy Association.

 

Filed under: assessment, Cambridge Primary Review Trust, David Reedy, DfE, England, grammar, tests

May 20, 2016 by Sarah Rutty

Joyless, inaccurate, inequitable?

I recently enjoyed the opportunities provided by some longer than average train journeys and the al fresco possibilities of a sunny garden to catch up on my reading. Indeed, I diligently increased my familiarity with a wide range of books; asked questions to improve my understanding of text; summarised the main ideas drawn from more than one paragraph, and worked out the meaning of new words from context.  In short, I demonstrated the skills in reading required of upper KS2 readers.

Which has left me with rather a bee in my bonnet about last week’s KS2 SATs reading paper and its usefulness as an assessment of these skills.

My first buzzing bee in response to the paper: the quality and range of the texts provided to assess our children’s abilities as confident readers. Rather than a range of engaging writing, offering opportunities to demonstrate skills as joyful interrogators of literature and authorial craft, the test offered three rather leaden texts: two fictional and one non-fiction.  We had Maria and Oliver running off from a garden party at the big house to explore an island, which might hold the clue to the secret of a long-standing upper-class family feud. We had Maxine riding her pet giraffe, Jemmy, in South Africa, having an unfortunate encounter with some warthogs (some ferocious, others bewildered) but fortunately learning a lesson about the consequences of not listening to adults. We finished with the non-fiction text about the demise of the Dodo, a text so oddly structured that it appeared to have, rather like another curious creature, been thrown together by committee. The sun-soaked stillness of our inner-city school hall, last Monday morning, was ruffled by the occasional gentle gusting sighs of 76 children trying hard to engage with such dull texts and do so with purposeful determination ‘because I love books and I love reading and I want to do well, but it wasn’t like proper reading.’

Which brings me on to my second buzzing bee: it was most definitely not, to quote year 6 pupil Shueli, anything like ‘proper reading’ nor, I would suggest, a meaningful way to assess whether our children themselves are ‘proper’ readers, using the DfE’s own interim assessment criteria.

The first four questions of the test focussed solely on vocabulary and words in context. For example, Question 1: ‘Find and copy one word meaning relatives from long ago’. If, like many of our children, you did not know the word ‘ancestor’, the answer for this question was almost impossible to work out from context. A first mark lost and a tiny dent in the self-esteem of pupils who were hoping for a test of their ability to filter and finesse a text for nuance and meaning rather than find ‘words I should have in my head, but didn’t’ (Sayma B). More gusty sighing.

Question 2 continued to dig deeper into the realm of internal word-lists: ‘the struggle had been between two rival families… which word most closely matches the meaning of the word rival? Tick one: equal, neighbouring, important, competing.’ If you were not familiar with the word ‘rival’, then ‘important’ and ‘neighbouring’ are both plausible choices in context. I give you some higher order reading reasoning: the children were at a party in a big house, clearly from ‘posh’ families – hence ‘important’ was a perfectly sensible choice; rival football teams play in the same league, so are in some way ‘neighbours’. Both demonstrate a key year 6 reading skill – ‘working out the meaning of new words from context’ – a skill our children use routinely but one which, in this case, cost them a mark and gained them one more cross on the examiner’s recording sheet.

Bringing me onto bee no. 3: the test appeared to be designed for ease of marking. Only 2/33 questions on the test required extended ‘3 mark’ answers – allowing extended inferential or evaluative thinking – a mere 6 percent of the paper. The rest were questions requiring – much easier to mark – word or fact retrieval answers. Our children’s reading SATs scores will reflect this unbalanced diet of question types, resulting in assessments that are neither accurate nor equitable. Not accurate, because teachers, using the national curriculum and the 2015-16 interim assessment framework, assess year 6 readers using a much wider set of criteria – including, for example, reading aloud with intonation, confidence and fluency, as well as contributions to discussions around book-talk – none of which can be assessed by a simple test. And not equitable, because research indicates that the children most likely to under-perform in language/vocabulary biased reading tests are those from the most deprived backgrounds.

The reason for this is that children from lower income, or more socially deprived backgrounds, often come to school with a more limited vocabulary because they begin life being exposed to fewer words than children from more affluent backgrounds. The gap this discrepancy presents is not insurmountable; the CPRT/IEE dialogic teaching project is one clear example of how putting talk at the heart of our children’s learning can help close such gaps. However, a national testing system that skews the reading results by which children and schools are judged – and categorised – in favour of such a vocabulary-heavy bias is simply not fair. Or purposeful.

I urge you, experienced reader, to stand for a moment in the shoes of Shueli and Sayma B. I give you a sentence to consider, one which incorporates a word that I learnt from my own recent reading. ‘A gust of wind rippled through the exam hall; it made me pandiculate and look hopefully at the clock.’ Q1: In this sentence, which word most closely matches the meaning of the word pandiculate? Tick one: ponder, panic, stretch, laugh out loud.

All might seem plausible choices. The experience of the reading SATs last week may have caused our children indeed to ponder, to panic or to laugh out loud in test conditions.  It might even have made them pandiculate in earnest, for the correct answer is, of course, c) to stretch – and typically to yawn when awakening from a dull or sleepy interlude. But surely you knew that? It must be fair to assume that we all share the same internal word list. And if this is not the case (shame on you) could you not demonstrate your ability to work out the meaning of a word from the context?  No?  It cannot be that my test is flawed; it must be you who are a poor reader.  My internal bee is susurrating indeed about the value of a national test that reinforces gaps, rather than one which assesses how well we are closing them.

Sarah Rutty is Head Teacher of Bankside Primary School in Leeds, part-time Adviser for Leeds City Council Children’s Services, a member of CPRT’s Schools Alliance, and Co-ordinator of CPRT’s Leeds/West Yorkshire network. Read her previous blogs here. 

 If you work in or near Leeds and wish to become involved in its CPRT network, contact administrator@cprtrust.org.uk.

Filed under: assessment, Cambridge Primary Review Trust, equity, national curriculum, reading, Sarah Rutty, Sats, social disadvantage, tests

May 6, 2016 by Stephanie Northen

Rigor spagis

Amid the gloom of unsavoury Sats and enforced academisation, comes one delicious moment of joy. Schools minister Nick Gibb doesn’t know his subordinating conjunctions from his prepositions. He can’t answer one of the questions he has set children. Despite this woeful (in his eyes) ignorance – though, tellingly, when his mistake is pointed out he says ‘This isn’t about me’ – he has managed to become and to remain a government minister. Need one say any more about the pointlessness of the Spag test?

At least by this time next week it will all be over. The country’s 10 and 11-year-olds will be free to enjoy their final few weeks at primary school, liberated from the government’s oh so very rigorous key stage 2 tests. Like them, I am tired of fractions, tired of conjunctions, tired, in fact, of being told of the need for ‘rigour’. The Education Secretary and the Chief Inspector need to wake up to the fact that rigour is a nasty little word, suggestive of starch and thin lips. Its lack of humour and humanity makes parents and teachers recoil. Check out its origins in one of those dictionaries you recommend children use.

Hopefully the weight of protest here, echoing many in America, will force some meaningful concessions from the ‘rigour revolutionaries’ in time for next year’s tests. Either that, or everyone with a genuine interest in helping young children learn will stand up and say No.  In the words of CPRT Priority 8, Assessment must ‘enhance learning as well as test it’, ‘support rather than distort the curriculum’ and ‘pursue standards and quality in all areas of learning, not just the core subjects’. The opposite is happening at the moment in the name of rigour. It’s not rigour – but it is deadly.

Of course, the memory of subordinating conjunctions and five-digit subtraction by decomposition will fade for the current Year 6s – and for Nick Gibb – unless they turn out to have failed the tests. Mrs Morgan will decide just how rigorous she wants to be in the summer. Politics will determine where she draws the line between happy and sad children. Politics will decide the proportion she brands as failures at age 11, forced to do the tests again at secondary school.

But still the children have these few carefree weeks where primary school can go back to doing what primary school does best – encouraging enquiry into and enjoyment of the world around us. Well, no. Teachers still have to assess writing. And if my classroom is anything to go by, writing has been sidelined over the past few weeks in the effort to cram a few more scraps of worthless knowledge into young brains yearning to rule the country.

So how do we teachers judge good writing? Sadly, that’s an irrelevant question. Don’t bother drawing up a mental list of, for example, exciting plot, imaginative setting, inventive language, mastery of different genres. No, teachers must assess using Mrs Morgan’s leaden criteria, criteria that would never cross the mind of a Man Booker prize judge. Marlon James, last year’s Booker winner and a teacher of creative writing, was praised for a story that ‘traverses strange landscapes and shady characters, as motivations are examined – and questions asked’. No one commented on James’s ability to ‘use a range of cohesive devices, including adverbials, within and across sentences and paragraphs’.

The dead hand of rigour decrees that we judge children’s ability to employ ‘passive and modal verbs mostly appropriately’. We have to check that they use ‘adverbs, preposition phrases and expanded noun phrases effectively to add detail, qualification and precision’. (Never mind thrilling, moving or frightening, I do love a story to be detailed, precise and qualified.) We forget to read what the children have actually written in the hunt for ‘inverted commas, commas for clarity, and punctuation for parenthesis [used] mostly correctly, and some correct use of semi-colons, dashes, colons and hyphens’. Finally, it goes without saying that young children must ‘spell most words correctly’.

There are eight criteria in the Government’s interim framework for writing at the ‘expected standard’ – expected by whom, one is tempted to ask. Only one of the eight relates to the point of putting pen to paper in the first place. Aside from ‘the pupil can create atmosphere, and integrate dialogue to convey character and advance the action’, the writing criteria spring entirely from the Government’s obsession with grammar, punctuation and spelling. I fear it is only too easy to meet the ‘expected standard’ with writing that is as lifeless, uninspiring and rigorous as the criteria themselves.

If writing is not to entertain and inform, then why bother? In the old days of levels, teachers had to tussle with Assessing Pupils’ Progress (APP) grids – a similar attempt to standardise the marking of a creative activity. But at least the APP grids acknowledged that good writing should make an impact. Texts should be ‘imaginative, interesting and thoughtful’. Sentence clauses and vocabulary should be varied not to tick a grammar checklist box but to have an ‘effect’ on the reader.

So now we have to knuckle down and make sure children’s writing satisfies the small-minded rigour revolutionaries. Can we slip in a semi-colon and a couple of brackets without spoiling the flow of a youthful reworking of an Arthurian legend? How many times can we persuade our young authors to write out their stories in order to ensure ‘most’ words are spelt correctly? And what to do about those blank looks when we suggest that they repeat a phrase from one paragraph to the next to ensure they have achieved ‘cohesion’?

Mrs Morgan claims the ‘tough’ new curriculum will foster a love of literature. This is a mad, topsy-turvy world that includes too many ‘strange landscapes and shady characters’. It is good, at last, that ‘motivations are examined – and questions asked’. Keep up the good work, everyone. We can stop the rigour revolutionaries.

Stephanie Northen is a primary teacher and journalist and one of our regular bloggers. She contributed to the Cambridge Primary Review final report and is a member of the Board of the Cambridge Primary Review Trust.

Filed under: assessment, Cambridge Primary Review, DfE, Sats, Stephanie Northen, tests

March 18, 2016 by Nancy Stewart

Baseline assessment – we’re not buying

Last autumn hundreds of thousands of four-year-old children, as diverse as any group pulled from the population, were each assigned a single number score which purported to predict their future progress in learning. They had been baselined.

They arrived in reception classes from school nurseries, from private or voluntary nurseries, from childminders, and from homes where they had no previous experience of early years provision.  They came from families awash with books, talk and outings, from families struggling with the economic and emotional pressures of life, and from troubled backgrounds and foster care.  Some were hale and hearty while others had a range of health conditions or special needs.  Some were fluent in English and some spoke other languages with little or no English.  And some were nearly a year older than the youngest, 25 percent of their life at that age. Yet with no quarter given for such differences, they were labelled with a simple number score within six weeks of arriving at school.

In the face of widespread and vehement opposition (including from CPRT’s directors) DfE had decided on baseline assessment not as a way to support children’s learning, but as a primary school accountability measure for judging schools when these children reach Year 6. Unsurprisingly, recent press reports indicate that lack of comparability between the three approved schemes may mean the baseline policy will be scrapped – possibly in favour of a simple ‘readiness check’. Frying pans and fires come to mind.

The fact is, there is no simple measure that can accurately predict the trajectory of a group of such young children. This is not to deny a central role for assessment.  Teachers assess children on arrival in a formative process of understanding who they are, what they know and understand, how they feel and what makes them tick, in the service of teaching them more effectively.

As CPRT has frequently pointed out in its evidence to government assessment reviews and consultations, the confusion of assessment with accountability results in a simplistic number score which ignores the range and complexity of individual learning and development, over-emphasises the core areas of literacy and maths required by the DfE, and places a significant additional burden on teachers in those important early weeks of forming relationships and establishing the life of a class.

Recent research by the Institute of Education into the implementation of baseline assessment confirms that teachers found the process added to their workloads, yet only 7.7 percent thought it was an ‘accurate and fair way to assess children’ and 6.7 percent agreed it was ‘a good way to assess how primary schools perform’.

What is the harm?  Aside from the waste of millions of pounds of public money going to the private baseline providers, there is concern about the impact of the resulting expectations on children who receive a low score within their first weeks in school, and who may start out at age four wearing an ‘invisible dunce’s cap’. CPRT drew attention to this risk in its response to the accountability consultation, saying ‘Notions of fixed ability would be exacerbated by a baseline assessment in reception that claimed to reliably predict future attainment.’ For children whose life circumstances place them at risk of low achievement in school, being placed in groups for the ‘slower’ children and subjected to an intense diet of literacy and numeracy designed to help them ‘catch up’ will deny them the rich experiences that should be at the heart of their early years in school to provide them with the foundation they need.

Better Without Baseline echoes CPR’s statement on assessment in its 2010 list of policy priorities for the Coalition government. CPR urged ministers to:

Stop treating testing and assessment as synonymous … The issue is not whether children should be assessed or schools should be accountable – they should – but how and in relation to what.

Unfortunately policy makers are seduced by the illusion of scientific measurement of progress, using children’s scores to judge the quality of schools.  Yet there are more valid ways to approach accountability.  Arguing for a more comprehensive framework, Wynne Harlen said in her excellent research report for CPRT:

What is clearly needed is a better match between the standards we aim for and the ones we actually measure (measuring what we value, not valuing what we measure). And it is important to recognise that value judgements are unavoidable in setting standards based on ‘what ought to be’ rather than ‘what is’.

Baseline assessment is not a statutory requirement, and this year some 2000 schools decided not to opt in; that remains a principled option for the future. We can hope that government will think again, and remove the pressure on schools to buy one of the current schemes. What is needed is not a quick substitute of another inappropriate scheme such as a ‘readiness check’, but a full and detailed review of assessment and accountability from the early years onward, where education professionals come together to discuss and define what matters. The aim should be to design a system of measurement that is respected and useful, and that truly supports accountability not only for public investment but, most importantly, to the learners we serve.

Nancy Stewart is Deputy Chair of TACTYC, the Association for Professional Development in Early Years.

Filed under: assessment, baseline assessment, Cambridge Primary Review Trust, curriculum, DfE, early years, evidence, Nancy Stewart, policy, tests

February 26, 2016 by Stephanie Northen

Boycott the Sats

Schools are scrambling to prepare children for the new Sats tests to be taken in May. Teachers who have never before ‘taught to the test’ are gloomily conceding that the Government has left them no other option. Children are being drilled in the mechanics of adverbial clauses and long division, forced to spell vital words such as ‘pronunciation’ and ‘hindrance’, and helped to write stories showing their mastery of the passive voice and modal verbs. Headteachers shake their heads sorrowfully. Advisers grimace and say ‘nothing to do with us’. None of this should be happening – and finally there is just a possibility that the madness will stop.

First came the suggestion of a boycott of baseline testing. Now, at last, there is a call to cancel the KS2 Sats. I watch the signatures grow daily (hourly) hoping they represent the moment when the classroom worms such as myself finally turned and said no.

The only word on the spelling list that Year 6s should learn is ‘sacrifice’ as this will enable them to write to the Education Secretary pointing out that their learning is being sacrificed to a mean-spirited and regressive assessment system.

The new tests and assessments have been designed, not to discover and celebrate what children can do, but to catch them out. Here are just a few examples. It is essential, says the Government, that KS1 children are taught maths using concrete and visual aids such as Numicon and number squares.  So which callous wretch decreed that the same children must be deprived of these props when taking their Sats? In my school, children wept because they could see the number squares but were not allowed to use them.

I’d also like to meet the sour-faced creators of the sample grammar test who asked Year 6 children to add suffixes to nouns to create adjectives (clearly a life skill) and then decided that getting five out of six right merited no marks. Likewise I’d love to shake hands with the generous soul who recently decreed nul points for any child misplacing a comma when separating numbers in the thousands in their KS2 maths Sats. Similarly, there’s no mercy for the left-handed child who struggles to join up his handwriting or for the fast-thinking kid who writes brilliantly but forgets her full stops.

Assessment should, as CPRT recommends, ‘enhance learning as well as test it’ and ‘support rather than distort the curriculum.’

Assessment in the CPRT spirit ensures that the children in my class are set a range of appropriate challenges every day. As a consequence, they are all generally making cheerful progress, though some progress more rapidly than others and some have greater strengths in one area of learning than in another. Yes, I’m sorry to say, they are inconsistent: they have been known to go backwards and they even make mistakes. In other words, they are human beings.

The new tests have not been designed with humans in mind, let alone small humans. Rather they have been created by cyborgs for baby cyborgs. If you don’t believe me, watch the bizarre 2016 KS2 assessment webinar from the Standards and Testing Agency.

The STA cyborgs explain why it was necessary to ditch the ‘best fit’ model of levels where teachers, heaven forbid, used their professional judgement to decide if a child had ticked enough boxes to be awarded a level 3c. According to the cyborgs, parents were confused by their 3c kid being able to do, or not do, things that their mate’s 3c kid could or couldn’t do. Clearly in cyborg land, parents had nothing better to do than check that all children assessed at the same level had exactly the same set of skills and knowledge. This would be so ridiculous as to be laughable were it not for what is currently happening in the nation’s classrooms.

In terms of teacher assessment, best fit has been replaced by perfect fit. Now we have to tick all the boxes in order to judge children to be working at the ‘expected standard’. If just one box cannot be ticked, children are classified as ‘working towards’. However, it doesn’t end there. If a child doesn’t tick all those boxes, they will cascade back down the (not) levels potentially all the way to Year 1. Take writing as an example. One of the best young writers I have taught would not qualify even as ‘working towards’ because she didn’t join up her handwriting. Likewise imaginative but dyslexic 10-year-olds will slip down and down because they can’t spell words supposedly appropriate for eight-year-olds such as ‘occasionally’, ‘reign’ or ‘possession’. And, according to the STA cyborgian webinar, there is ‘no flexibility’ on this.

Yet not only is it inflexible, mean-spirited and regressive, it is also just not fair. The current Year 6 has had only two years at most to prepare for this newly punitive world of harder work and pitiless mark schemes. And make that only three months in terms of writing, where standards have been wrenched up from a level 4b to a 5c. Parents do not know what harsh judgements await their children this summer. If they did, chances are they would support a boycott. As Warwick Mansell, writing in a CPRT blog back in 2014, wisely commented:

The single ‘working at national standard’ – or not – verdict, where it is to be offered, also seems to invite a simple ‘pass/fail’ judgement. This, it is hard to avoid thinking, will set up the view among many children that they are failures at an early age.

And not only the children. Teachers are under tremendous pressure to ensure their pupils reach the new standards in reading, maths, grammar and writing – irrespective of individual strengths or weaknesses. Yet we are supposed to achieve this without allowing these standards ‘to guide individual programmes of study, classroom practice or methodology’ as the STA disingenuously insists. Sometimes I wonder if I’m stupid or perhaps was asleep when a new era of Orwellian doublethink dawned. How on earth do children learn to ‘use passive and modal verbs mostly appropriately’ if I don’t explicitly teach them? How do they learn to add, subtract or multiply fractions without that being a programme of study? Perhaps in some Utopian classroom there is a lesson plan – presumably in something tasteful like quilting – that miraculously transfers this knowledge to children in such a form that they can pass their Sats and meet the ‘expected standard’.

But until someone lends me that plan, I’d rather we all just said no.

Stephanie Northen is a primary teacher and journalist. She contributed to the Cambridge Primary Review final report and is a member of the Board of the Cambridge Primary Review Trust.

Filed under: assessment, Cambridge Primary Review Trust, DfE, Sats, Stephanie Northen, tests

February 19, 2016 by Robin Alexander

An ideological step too far

Secretary of State Nicky Morgan is reportedly looking to recruit the next head of Ofsted from the United States.

Even if she were to locate, with due objectivity and rigour (words much used by ministers but seldom exemplified in their actions), a variety of American educators with the requisite expertise and professional standing, her quest would be perplexing. For it would signal that no home-grown British talent can match that imported from an education system which reflects a national culture very different from ours, is mired in controversy, and, though it has individual teachers, schools and school districts of matchless quality, performs as a system below the UK on international measures of pupil achievement.

But that is not all. A check on the touted names makes it clear that the search is less about talent than ideology. The reputation of every US candidate in whom the Secretary of State is said to be interested rests on their messianic zeal for the universalisation of charter schools (the US model for England’s academies), against public schools (the equivalent of our LA-maintained schools), and against the teaching unions. This, then, is the mission that the government wants the new Chief Inspector to serve.

Too bad that the majority of England’s primary schools are not, or not yet, academies. Too bad that Ofsted, according to its website, is supposed to be ‘independent and impartial’; and that Her Majesty’s Chief Inspector of Education, Children’s Services and Skills is required to report to Parliament, not to the political party in power; and that he/she must do so without fear or favour, judging the performance of all schools, whether maintained or academies, not by their legal status or political allegiance but by the standards they achieve and the way they are run. Too bad that on the question of the relative efficacy of academies and maintained schools the jury is still out, though Ofsted reports that while some truly outstanding schools are academies, many are not. And too bad that the teaching unions are legally-constituted organisations that every teacher has a right to join and that, by the way, have an excellent track record in assembling reliable evidence on what works and what does not.

When we consider the paragons across the pond who are reportedly being considered or wooed in Morgan’s search for Michael Wilshaw’s successor, mere ideology descends into dangerous folly. One of them runs a charter school chain in which the brutal treatment of young children in the name of standards has been captured on a video that has gone viral. Another leads a business, recently sold by the Murdoch empire (yes, he’s there too), that having failed to generate profits in digital education is now trying to make money from core curriculum and testing. A third is the union-bashing founder of a charter school chain that has received millions of dollars from right-wing foundations and individuals but whose dubious classroom practices have been exposed not just as morally unacceptable but, in terms of standards, educationally ineffective. A fourth, yet again a charter chain leader, has published a proselytising set text for the chain’s teachers tagged ‘the Bible of pedagogy for no-excuses charter schools’ that, according to critics, makes teaching uniform, shallow, simplistic and test-obsessed. Finally, the most prominent member of the group has been feted by American and British politicians alike for ostensibly turning round one of America’s biggest urban school systems by closing schools in the teeth of parental protest, imposing a narrow curriculum and high stakes tests, and making teacher tenure dependent on student scores; yet after eight years, fewer than a quarter of the system’s students have reached the ‘expected standard’ in literacy and numeracy.

As head of Ofsted, every one of these would be a disaster. As for the US charter school movement for which such heroic individuals serve as models and cheerleaders, we would do well to pay less attention to ministerial hype and more to the evidence. In England we are familiar with occasional tales of financial irregularity and faltering accountability, and of DfE using Ofsted inspections to bludgeon academy-light communities into submission. But this is as nothing compared with the widely-documented American experience of lies, fraud, corruption, rigged student enrolment, random teacher hiring and firing and student misery in some US school districts and charters, all of which is generating growing parental and community opposition.  Witness the Alliance to Reclaim Our Schools and this week’s nationwide ‘walk-in’ in defence of  public education.  Yet the culture that American parents, teachers, children and communities are combining to resist is the one the UK government wishes, through Ofsted, to impose. Ministers believe in homework: have they done their own?

However, as a prudent fallback Nicky Morgan is said to have identified five British candidates. While these don’t hail from the wilder shores of US charter evangelism, their affiliations confirm the mission ‘to make local authorities running schools a thing of the past’ (Prime Minister Cameron last December) and, to avoid any lingering ambiguity, ‘The government believes that all schools should become academies or free schools’ (from the DfE website).

In pursuit of this agenda, the reported British candidates have immaculate academy and/or Teach First credentials (Teach First is the British teacher training cousin of the evangelistic Teach for America, like charter schools an essential part of the package of corporate reform). Most take home eye-watering salaries. All are within the inner ministerial circle of school leaders whose politically compliant views are rewarded with access, patronage, gongs, and seats on this or that DfE ‘expert group’ whose job is to dress up as independent advice what the government wishes to hear.

Home-spun this second list may be, but it is hardly likely to meet the Ofsted criterion of ‘independent and impartial.’

It should not be like this, and it does not need to be. Like the United States, England has many more outstanding schools, talented teachers and inspirational educational leaders than those few who are repeatedly praised in party conference speeches and with whom ministers assiduously pack their ‘expert groups’. The talent worthy of celebration and reward is not located exclusively in academies or Teach First, any more than in individual schools it resides solely in the office of the head (for these days rank-and-file teachers barely merit a mention, even though without their unsung dedication and skill all schools would be in special measures).

The problem with the much longer list of potential candidates for the top Ofsted post is that those who ought to be on it – and they come from maintained schools, academies and other walks of life – don’t necessarily toe the ministerial line. They are not, in Thatcher’s still resonant words, ‘one of us’. Such independent-minded and genuinely talented people may conclude from inspection or research evidence that flagship policy x, on which minister y’s reputation depends, isn’t all it is cracked up to be. They put children before their own advancement. They dare to speak truth to power.

Yet isn’t this exactly what an ‘independent and impartial’ Ofsted is required to do, and what, give or take the odd hiatus, most HM Chief Inspectors have done – so far? And isn’t it exactly what a genuine democracy needs in order that well-founded policies gain a hearing, ill-founded policies are abandoned before they do lasting damage, and the education system is ‘reformed’ in the ameliorative sense rather than merely reorganised as part of the latest ministerial vanity project?

But no, for by politicising public education to the extent heralded by the 1988 Education ‘Reform’ Act and entrenched ever more deeply by each successive government since then, ministers are signalling that power matters more than improvement, compliance more than honesty, dogma more than reasoned argument; and that in the battle between ideology and evidence – a battle in which the Cambridge Primary Review and CPRT have been strenuously engaged for the past ten years, often to their cost – ideology trumps every time. The government’s attempt to ‘fix’ the agenda of England’s independent inspectorate by appointing one of its own persuasion as chief inspector is not just an ideological step too far. It is an indefensible abuse of political power.

Talking of Trump, is he on Nicky Morgan’s bucket list too?  Go on, Secretary of State – in for a penny, in for a trillion dollars.

www.robinalexander.org.uk

If you would like to learn more about educational ‘reform’ in the United States, try the blogs of Diane Ravitch  and Gene Glass, and recent books by Ravitch and Berliner and Glass. For a catalogue of US charter school irregularity see Charter School Scandals.  For Jeff Bryant’s reflections on this week’s ‘walk-ins’ in support of US public schooling, click here.

Filed under: academies, assessment, Cambridge Primary Review Trust, charter schools, DfE, England, evidence, inspection, Ofsted, Robin Alexander, United States

November 20, 2015 by Marianne Cutler

Life after levels

Since September 2015, national curriculum levels have no longer been used for statutory assessments in schools in England. Schools are now required to develop new approaches to their own in-school assessment, and this provides welcome opportunities for evolving purposeful assessment. But for many schools, gearing up for life after levels involves a step change in approach, and the challenges should not be underestimated.

The final report of the Commission on Assessment without Levels (September 2015) reminds us of the purposes and forms of assessment: (i) in-school formative assessment, (ii) in-school summative assessment and (iii) nationally standardised summative assessment. The report provides helpful guidance on writing school assessment policies, and raises important questions for teachers and school leaders when they consider data collection and reporting – what uses the assessments are intended to support, the quality of the assessment information, the frequency of collecting and reporting, and the time required to record the information – noting that much of teachers’ time that could be better spent in classrooms is unnecessarily taken up with data management systems. I am sure we could all agree with that.

Of course, for many schools meaningful assessment has been a priority for a long time. Iain Erskine, principal at Fulbridge Academy, a CPRT alliance school, provided some detail on their approaches to assessment in his February blog, and vice principal Ben Erskine has expanded on this in describing their approach to science:

Children pursue and investigate projects each term that are linked to the topic theme they are studying. Each project has realistic and creative links that allow for opportunities to apply their learning in a real sense, learn the science involved, use their enquiry skills, as well as having some kind of design and technology element. At Fulbridge we teach science and technology as one lesson twice a week. Each term has either a biology, chemistry or physics focus and within this focus, the scientific enquiry and the design and technology curriculum areas are taught. Children are then assessed each term against the areas of science (and technology) that has been covered and their confident use of scientific and technical language.

The approach at Fulbridge chimes with the Nuffield Foundation’s 2012 report Developing policy, principles and practice in primary school science assessment, produced by a group led by Professor Wynne Harlen, which sets out a proposed framework for the assessment of science in primary schools. The framework (illustrated as a pyramid model on page 21) describes how evidence of pupils’ attainment should be collected, recorded, communicated and used. The report details how assessment data can be optimised for different uses and outlines the support needed to implement the procedures.

So what could this look like in practice? A follow-on initiative, the Teacher Assessment in Primary Science (TAPS) project, is running from 2013 to 2016. TAPS is funded by the Primary Science Teaching Trust (PSTT) and is based at Bath Spa University, which co-hosts CPRT’s south-west regional network. This initiative has developed the pyramid model for the flow of assessment information through a school and operationalised it into a whole-school self-evaluation tool, to support schools in identifying strengths and weaknesses in their assessment systems and to provide an exemplified model of good practice.

Sarah Earle, TAPS project lead, comments:

Schools working with the TAPS team have stepped back from tracking systems to look at what would make a difference to children’s learning. They have explored a wide range of ways to elicit, focus and record children’s ideas to develop more valid assessments, and have taken part in moderating discussions to support reliability of teacher assessment. These discussions need to continue to support a shared understanding across the school, with both a new curriculum and new assessment guidance – in the form of the interim teacher assessment framework for 2015-16 at the end of Key Stage 1 and Key Stage 2. Most subject leaders are endeavouring to maintain a focus on working scientifically and on assessment for learning rather than being driven by tracking systems.

Sarah shared some case studies from this project in an article, ‘An exploration of whole-school assessment systems’, published in the January/February edition of Primary Science. The case studies described different approaches to assessment but identified a number of shared features of good practice: assessment is embedded in the planning process; children are encouraged to take responsibility for their own learning; assessment is ongoing; and there is a clear understanding of ‘what good science looks like’ across the school.

For example, a key focus for assessment at Northbury Primary School is the elicitation of children’s ideas. Units of work are in outline form, each beginning and ending with a thought shower, which allows both children and teachers to see progress at the end of the unit but, perhaps more importantly, gives the teacher a starting point for planning. Detailed plans are not completed in advance, which allows lessons to take into account the initial questions raised by the children and their starting points. This is particularly important as pupil mobility is high.

Northbury’s assistant headteacher and science co-ordinator, Kulvinder Johal, comments:

The TAPS pyramid model has been useful from its inception. The first draft, which I was privy to, helped me to gauge what we were doing well and where our gaps were. The gaps will vary from school to school as we are all strong in different areas. Our gap was in identifying next steps in learning. Coupled with some of the key messages from the Ofsted Maintaining curiosity: science education in schools 2013 report, I realised we needed to set science targets for our pupils, much as we did and do in literacy and numeracy. Now our pupils have science targets that they work towards and that they assess themselves against. Having made progress in this area, we then returned to the TAPS pyramid to see where our focus should be and we realised we needed to do more moderating of science work and so we are beginning to address this issue. The TAPS pyramid leads us to better practice, improvements and new challenges, and is a really useful working document.

Shaw CE Primary School has also used the TAPS pyramid model as a focus for improving their approach to assessment, as Carol Sampey, Deputy Head and Fellow of the PSTT, notes:

It has been good to be one of the 12 schools who have worked together with Bath Spa to help develop this tool. In my school, we have discussed what good assessment for learning looks like and last year we used the TAPS pyramid as a generic teaching and learning tool when doing lesson observations. Our view was that if teachers were aware of and then incorporating the ongoing formative assessment strategies in all of their teaching, learning would improve across the board. Teachers found this helpful and these strategies became a focus of performance management last year. Practice has improved as a result. It has also been helpful in making our teachers more aware of how to involve the pupils in self-assessment. A next step is to begin to use the TAPS pyramid as a resource tool – to try out focus assessment tasks and to look at what other schools have been doing. I am also going to introduce the TAPS pyramid to other schools in our science cluster.

This is an encouraging picture, and for these schools there certainly is life after levels, and not just for science. Since the TAPS pyramid is based on good practice in formative assessment, many of the examples, such as peer assessment, are relevant across the primary curriculum. The structure could be used for any system of teacher assessment in which validity is supported by drawing on a range of information from the classroom and reliability is supported by moderating discussions.

Primary schools across the country will take many different approaches to assessment as they make the most of the opportunities presented by the removal of levels. Sharing ideas, plans and best practice at this time is particularly helpful, and I invite you to share these through CPRT and elsewhere.

Marianne Cutler is Co-Director of the Cambridge Primary Review Trust and Director of Curriculum Innovation at the Association for Science Education.

Assessment reform is one of CPRT’s eight priorities.  CPRT encourages approaches to assessment ‘that enhance learning as well as test it, that support rather than distort the curriculum and that pursue standards and quality in all areas of learning, not just the core subjects.’

See also CPRT’s research report and briefing, Assessment, Standards and Quality of Learning in Primary Education, by Wynne Harlen (2014).

Filed under: assessment, assessment without levels, Cambridge Primary Review Trust, curriculum, Marianne Cutler, science
