The Cambridge Primary Review Trust


October 21, 2016 by David Reedy

Assessment, testing and accountability: a suggestion for an alternative framework

The data from the new 2016 tests for 11 year olds in England is gradually trickling out. We have been informed that 48 percent of children did not reach the new expected standard in reading, writing and mathematics combined (compared with 35 percent in 2015 under the old system) and are at risk of being labelled ‘failures’. In addition, the calculations have been done to identify each Y6 child’s scaled score and progress measure. Parents have been told something like ‘In reading your child got 99 on the scaled score against the expected standard and a progress score of 1.6’. Not terrifically helpful, particularly if the parent has become familiar with Levels over the last 28 years.
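For readers unused to the new measures, the report quoted above can be decoded mechanically, assuming the published conventions that a scaled score of 100 marks the expected standard and a progress score of 0 marks average progress for pupils with similar starting points. A minimal sketch (the `interpret` helper and its wording are illustrative, not any official DfE tool):

```python
# Illustrative decoding of a KS2 report line, assuming the conventions that
# a scaled score of 100 marks the expected standard and a progress score of
# 0 marks average progress. Thresholds and wording are this sketch's own.

def interpret(scaled_score: int, progress_score: float) -> str:
    attainment = ("at or above the expected standard"
                  if scaled_score >= 100
                  else "below the expected standard")
    progress = ("above average progress" if progress_score > 0
                else "average progress" if progress_score == 0
                else "below average progress")
    return f"{attainment}; {progress}"

# The example from the blog: scaled score 99, progress score 1.6 —
# just below the expected standard, yet making above-average progress.
print(interpret(99, 1.6))
```

The point of the sketch is that the two numbers answer different questions (attainment versus progress), which is exactly the distinction a bare ‘99 and 1.6’ fails to convey to parents.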

Combined with the anecdotal evidence about the problems children had with the reading test, and the abandonment of the grammar test for seven year olds after it was inadvertently leaked, it is no surprise that more and more educationists, parents and organisations are calling for a fundamental review.

I have written in previous blogs about the current system and its shortcomings, now exacerbated by the 2016 experience, drawing on Wynne Harlen’s 2014 research report for CPRT, Assessment, Standards and Quality of Learning in Primary Education, which outlines the evidence concerning the impact of high stakes testing and compares England’s system with those of a number of other countries. Harlen’s key point, that ‘the system … for primary schools in England still suffers from over-dependence on testing and the use of end of Key Stage 2 tests for too many purposes’ (p. 32), indicates that we must consider a fundamentally different approach.

In this blog I outline the key strands which I think would need to be considered  under any review, with some suggestions concerning what should be incorporated, based on the available evidence.

The three strands for a comprehensive system of assessment and accountability are at individual child level, school level and national level.

At individual child level the focus must be assessment for learning and assessment of learning (i.e. formative and summative assessment). Assessment must be used to help children while they are learning and to find out what they have learned at a particular point in time. Testing can be a part of this, as it can inform overall teacher assessment and help to identify any potential gaps in learning. However, tests cannot give all the information needed to take a rounded view of what children need to learn and what they know and can do. As Harlen states: ‘the evidence shows that when teachers draw on a range of sources of evidence, then discuss and moderate with other teachers, assessment is more accurate’. Relying on the score from a single, externally marked reading test at 11, for example, to identify reading ability simply does not provide enough evidence for a reliable judgement.

As a first move in this direction, the system currently used for seven year olds should be adopted at the end of KS2: teacher assessment based on a range of evidence, including but not determined by a formal test.

In addition, the plethora of evidence-based assessment resources available should be utilised to underpin an approach that is qualitative as well as quantitative. For example, there are the CLPE/UKLA et al Reading and Writing Scales, which can be used for identifying children’s progress as well as indicating next steps for learning. It is also worth looking at the end of each of these scales, where an extensive bibliography shows how firmly they are grounded in research evidence. Something the DfE might consider doing.

In summary, the principle that assessment of any kind should ultimately improve learning for children is central and should be the criterion against which all assessment practices in and beyond school should be judged.

At school level the focus must be on partnership in assessment as well as accountability. Firstly, that means not only being accountable to parents and the local community the school serves, but also working systematically with them as partners.

Parents have a key role to play in assessment which goes beyond being regularly reported to: it includes sharing information about their children’s progress both within and beyond school, to build a fully informed picture. This would be followed by discussions about what the school is doing more generally to promote progress across all aspects of learning.

Schools should hold themselves to account through systematic self evaluation. This self evaluation should be externally moderated by local partners, crucially through strengthened local authorities, and nationally through a properly independent HMI. However, the system should not feel punitive, as it does to many schools under the current arrangements, but developmental and supportive, including when a school is not doing as well as it should. Any moderated self evaluation should be formative for the school as well as demonstrating accountability.

CPRT has made assessment reform one of its eight priorities, aiming to

Encourage approaches to assessment that enhance learning as well as test it, that support rather than distort the curriculum and that pursue standards and quality in all areas of learning, not just the core subjects.

CPRT’s Priorities in Action assessment webpage lists our multifaceted response to this priority including reports, briefings, blogs, parliamentary and government submissions and purpose-designed CPD for schools.

The final report of the Cambridge Primary Review was also clear that inspection needed to change (p. 500) and recommended that a new model be explored which focussed much more on classroom practice, pupil learning and the curriculum as a whole.

In any review of assessment, the accountability system must be reviewed at the same time. That goes for accountability at national level too.

Current arrangements at primary level are both narrow, focusing only on some aspects of the core subjects, and useless for making comparisons across time, as the criteria and tests keep changing. A system of sample surveys should be formulated to monitor national standards. These would be based on a large number of components and, if a rolling programme were organised, could extend well beyond the core subjects. England would then be able to judge whether primary education as a whole, in all its aspects and measured against a comprehensive set of aims, was being successful and was improving over time. Currently this is impossible.

Thus it is not surprising that more and more people and organisations are, alongside CPRT, calling for a fundamental review of assessment, testing and accountability, and that a major campaign is about to get underway. The campaign is to be called ‘More than A Score’ and a major conference has been announced for December 3rd. CPRT fully supports this campaign.

This move to a more effective approach would not be a simple process. As CPR’s final report stated in 2010, ‘Moving to valid, reliable and properly moderated procedures for a broader approach to assessment will require careful research and deliberation’ (p. 498).

It will take some time, but I believe, for all involved, it will be well worth the effort.

Just as this blog was being prepared, Justine Greening, Secretary of State for Education, made an announcement about primary school assessment. This included a commitment to ‘setting out steps to improve and simplify assessment arrangements’, the abandonment of Y7 resits, and a pledge that no new tests will be introduced before the 2018/19 academic year. There is a welcome acknowledgement in the tone of the statement that current arrangements are not working, although the last point carries the alarming implication that further, unnecessary, high stakes tests may yet follow.

The Secretary of State also announced another consultation, to take place next year, on assessment, testing and accountability. We have seen many of these so called ‘consultations’ before where the views of educationalists and the evidence from research and experience have been completely ignored.

Another ‘consultation’ is not needed. What is needed is a thorough, independent review in which all stakeholders are represented, and a government that is prepared to listen and respond positively.

David Reedy is a co-Director of the Cambridge Primary Review Trust, and General Secretary of the United Kingdom Literacy Association.

Filed under: accountability, assessment, Cambridge Primary Review Trust, David Reedy, DfE, England, inspection, tests

July 15, 2016 by Warwick Mansell

Academies: statisticians need to raise their game

Two major reports on the effectiveness of the government’s central education policy – turning schools into academies, preferably in chains – have been published in the past two weeks. But do they get to the truth of the policy? Not remotely, I think. I say that even though the reports serve a useful public interest function in holding ministers to account.

The central problem with these reports is that they see the success or not of the academies scheme entirely through the lens of the test and exam results either of individual institutions, or of institutions grouped together in chains or, more loosely, in local authorities. Although this approach purports to offer an ‘objective’ insight into the quality of academies, and by extension the success of the policy itself, in fact it has some serious problems.

The methodology

The two studies I highlight here are, first, one for the Sutton Trust charity, called Chain Effects: the impact of academy chains on low-income students. This is the third in a series which seeks to gauge the success of multi-academy trusts (MATs) by the exam results of disadvantaged pupils on their books. The second, School Performance in multi-academy trusts and local authorities – 2015, is an analysis of results in academy and local authority schools published by a newly-named think tank, the Education Policy Institute (EPI).

The Sutton Trust study produces five exam result measures for 39 MATs, all using the results of each of ‘their’ disadvantaged pupils to pronounce on how well each chain does for these pupils. The EPI paper offers a verdict on the overall performance of academy chains, using two exam result measures covering the pupils who count in official DfE statistics as being educated in these chains.

Both studies, which are statistically much more impressive, say, than a DfE press release – though that may be setting the bar very low indeed – found that the chains varied considerably in terms of their ‘performance’. They therefore garnered media attention for some findings which will not have been welcomed by ministers.

The reports may also be invaluable in another sense. Ministers – and this seems likely to remain the case even with Justine Greening replacing Nicky Morgan as Secretary of State – tend to justify their academies programme largely in terms of institutional exam results. If research considers the academies project on ministers’ own terms and raises serious questions, then that is an important finding.

Problems: teaching to test and inclusion

However, there are two main problems. The first is well-known. It is simply that focusing on exam results as the sole arbiter of success may tell us how effective the institution is at concentrating on performance metrics, but not much about other aspects of education. It may encourage narrow teaching to tests.

Despite the multiple measures used, both of these reports seem to encourage one-dimensional verdicts on which are the ‘best’ academy trusts: the ones which manage to see the pupils who are included in the indicators which the research uses – in the case of the Sutton Trust research, disadvantaged pupils, and in the EPI study, pupils as a whole – achieving the best results.

Yet the reality, it seems to me, is much more complex. A prominent academy chain, which runs schools near where I live, has been known to do well in statistical assessments of its results. Yet some parents I speak to seem not to want to go near it, because of a hard-line approach to pupil discipline and a reportedly test-obsessed outlook. This may generate the results prized in studies such as these, but are these schools unequivocally better than others? I think researchers should at least acknowledge that their results may not be the final word on what counts as quality. My hunch is that these studies may be picking up on academy trusts which are more successful in managing the process of getting good results for their institutions. But is that the same as providing a generally good, all-round education for all those they might educate? The reports offer no answers because they are purely statistical exercises which do not investigate what might be driving changes in results. So we need at least to be cautious with interpretation.

This is especially the case when we move on to perhaps the less obvious concern about these studies. It is that both investigations focus entirely on results at institutional level, counting the success of schools in getting good results out of those pupils who are on their books at the time the statistical indicators are compiled. However, this ignores a potentially serious perverse incentive of England’s results-based, increasingly deregulated education system.

The studies seem entirely uncurious about what is often put to me, by observers of its effects on the ground, as a very serious risk inherent in the academies scheme as currently understood: that deregulation giving each academy trust a degree of autonomy, coupled with the pressure on each trust to improve its results, creates a perverse incentive for trusts to become less inclusive.

In other words, they either use admissions to take on more pupils who are likely to help their results, or they try to push out students who are already on their books but less likely to help their results. This concern is referenced in the research review I carried out for CPRT. This quotes a finding from the Pearson/RSA 2013 review of academies which said: ‘Numerous submissions to the Commission suggest some academies are finding methods to select covertly’. The commission’s director was Professor Becky Francis, who is a co-author of the Sutton Trust study, so it is surprising that the latter paper did not look at changing student composition in MATs.

A statistical approach that sums up the effectiveness of academy chains entirely through their results, without any way of checking whether they are becoming more selective, does not address this issue.

I admit, here, that I have more reasons to be concerned at the secondary, rather than at the primary, level. Since 2014, I have carried out simple statistical research showing how a small minority of secondary schools have seen the number of pupils in particular year groups dropping sharply between the time they arrive in year seven and when they complete their GCSEs, in year 11.

Indeed, one of the top-performing chains in both these reports – the Harris Federation – has recently seen secondary cohort numbers dropping markedly. Harris’s 2013 GCSE year group was 12 per cent smaller than the same cohort in year 8. The 2015 Harris GCSE cohort was 8 per cent smaller than when the same cohort was in year 7. This data is publicly available yet neither report investigates shrinking cohort size. That is not to say anything untoward has gone on – Harris is also very successful in Ofsted inspections, and has said in the past that pupils have left to go to new schools, to leave the UK or to be home-educated – but it certainly would seem worth looking into.
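The cohort-shrinkage figures quoted above reduce to simple percentage arithmetic on two published headcounts. A minimal sketch (the `cohort_shrinkage` helper and the pupil numbers are hypothetical, chosen only to illustrate the calculation, not Harris’s actual data):

```python
# Percentage shrinkage between an earlier year-group headcount and the
# GCSE-year headcount for the same cohort. The counts below are
# hypothetical, for illustration only; real figures would come from
# published DfE census data.

def cohort_shrinkage(earlier_count: int, gcse_count: int) -> float:
    """Return the percentage by which the cohort shrank."""
    return 100 * (earlier_count - gcse_count) / earlier_count

# A hypothetical cohort of 1,000 pupils reduced to 880 by GCSE year
# is 12 per cent smaller than it was.
print(round(cohort_shrinkage(1000, 880)))
```

The calculation is trivial, which is rather the point: any researcher with access to the same DfE cohort counts could run it as a routine check on results-based rankings.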

When the Sutton Trust study mentions ‘[academy] chains that are providing transformational outcomes to their disadvantaged pupils’, its figures are based only on those actually in the chains in the immediate run-up to taking exams. Would the analysis change if it included all those who started out at the schools? We don’t know. The fact that DfE data is available suggesting major changes in pupil cohorts but it seems not to have been looked at is remarkable.

In addition, the fact that high-profile research studies purporting to show the success of organisations do not consider alternative readings of their statistics may incentivise those organisations to disregard students whom they consider harder to educate. Results measures currently provide an incentive to push such students out.

The lack of curiosity is extra surprising, given that the issue of ‘attrition rates’ – schools losing students – has been live in the debate over the success of one of the largest charter school operators in the US, KIPP schools.

As I’ve said: I don’t think this is just a secondary school issue. It is also a potential problem for any research which seeks to judge the success of primary academies solely with reference to the test results of pupils who remain in schools at the time of calculation of ‘performance’ indicators.

For, with reference to the academies scheme in general, as a journalist delving into goings-on at ground level, I frequently come across claims of schools, for example, not being keen to portray themselves as focusing on special needs pupils – and therefore not to attract such youngsters in the first place – or even trying to ease out children who might present behavioural challenges.

These two reports paint a simple picture of ‘more effective’ and ‘less effective’ academy chains. But the reality I see, based on both published evidence and many conversations on the ground, is rather different. I see a system which incentivises leaders to focus on the need to generate results that are good for the school. But is that always in the best interests of pupils? Should a school which sees rising results, but which also seems to be trying to make itself less attractive to what might be termed harder-to-educate pupils, be seen as a success?

These are very important questions. Sadly, the reports provide no answers.

This is the latest in a series of CPRT blogs in which Warwick Mansell, Henry Stewart and others have tested the government’s academies policy, and the claims by which it is so vigorously pursued, against the evidence. Read them here, and download Warwick’s more detailed CPRT research report Academies: autonomy, accountability, quality and evidence.

Warwick has also written extensively about the side-effects of results pressures in schools, most notably in his book ‘Education by Numbers: the tyranny of testing’ (Methuen, 2007).

Filed under: academies, assessment, Cambridge Primary Review Trust, equity, evidence, policy, school effectiveness, tests

May 27, 2016 by David Reedy

Time for radical change: grammar testing in England’s primary schools  

It has not been a good couple of weeks for testing in England’s primary schools.

There have been leaks of both the KS1 and KS2 spelling, grammar and punctuation tests, leading to the KS1 test being scrapped for this year and accusations by ministers that malign forces are at work to undermine the government’s education reform process.

Baseline assessment for four year olds has also gone, as its unreliability for accountability purposes became so obvious that continuation became untenable. (Not that the problems with testing and accountability are unfamiliar to teachers or parents, as Stephanie Northen and Sarah Rutty reminded us in their powerful recent blogs).

Even before his problem with subordinating conjunctions, Nick Gibb was complaining about the current situation in a speech at the ASCL curriculum summit on 27 April:

You do not need me to tell you that the implementation of the new key stage one and key stage two tests has been bumpy, and I and the department are more than willing to accept that some things could have been smoother. The current frameworks for teacher assessment, for example, are interim, precisely because we know that teething problems that exist in this phase of reform need to leave room for revision.

‘Teething problems’ is a bit of an understatement.

The Cambridge Primary Review Trust is committed to looking at what the widest range of available evidence tells us about assessment and assessment reform, including from experience such as Stephanie’s and Sarah’s as well as formal research, and it argues that decisions should be made at both policy and classroom level based upon that evidence.

I want to briefly look at the research evidence on the grammar tests for seven and eleven year olds and the government’s claims for them, to complement and add to the blogs of the last fortnight.

Nick Gibb argued in his ASCL speech, as well as on earlier occasions, that testing is a way of raising standards in the core areas of reading, writing and mathematics. He said:

Against those who attack the underlying principle of these reforms, I stand firm in my belief that they are right and necessary. Our new tests in grammar, punctuation and spelling have been accused by many in the media of teaching pupils redundant or irrelevant information. One fundamental outcome of a good education system must be that all children, not just the offspring of the wealthy and privileged, are able to write fluent, cogent and grammatically correct English.

He thus conflates performance in these tests with writing fluently and cogently. But there is simply no evidence that asking six and seven year olds to identify an adverb in ‘Jamie knocked softly on his brother’s bedroom door’, or to decide whether ‘One day, Ali decided to make a toy robot’ is a question, statement, command or exclamation, will help children to get better at writing. The experience of this year’s Y2 and Y6 children, before the requirement to do the Y2 test was dropped, was in many cases one of separate grammar lessons in which they were trained for the test, making sure they could identify word classes and sentence types through decontextualised exercises so that they could answer questions like these. If the test is reintroduced in 2017 this will happen again, distorting the curriculum with little or no benefit to pupils.

I make this claim because the research evidence over many years is unequivocal. Debra Myhill, who with her colleagues at Exeter University has extensively investigated the teaching of grammar and has shown that explicit attention to grammar in the context of ongoing teaching can help pupils to improve their writing, summarised that evidence in an April 2013 TES article. She wrote:

I did a very detailed analysis of the test and I had major reservations about it. I think it’s a really flawed test. The grammar test is totally decontextualised. It just asks children to do particular things, such as identifying a noun. But 50 years of research has consistently shown that there is no relationship between doing that kind of work and what pupils do in their writing. I think children will do better in the test than they are able to in their writing because it isolates the skills so that children only have to think about one thing at a time.

Myhill adds that the test will tend to overestimate children’s ability to manipulate grammar and make appropriate choices in their writing.  It would be much more valid to assess children’s ability to manipulate grammar by looking at how they do so in the context of the pieces of writing they do in the broad curriculum they experience. This test is therefore unreliable. It is also invalid.

In her CPRT research report on assessment and standards Wynne Harlen defines consequential validity as ‘how appropriate the assessment results are for the uses to which they are put’. A test which focuses on labelling grammatical features may be valid in testing whether children know the grammatical terms, but it is not valid for making judgements about writing ability more generally. The evidence emphatically does not support Nick Gibb’s claim that the test will lead to ‘fluent, cogent and grammatically correct English’. These grammar tests will not and cannot do what the government’s rhetoric claims.

The Cambridge Primary Review Trust, like the Cambridge Primary Review, supports the use of formal assessments, in which tests have a role, as part of a broader approach to identifying how well children are learning in school and how well each school is doing, though like many others it warns against overloading such assessments with tasks like system monitoring. Wynne Harlen’s reports for CPR and CPRT, and the assessment chapters (16 and 17) in the CPR final report, remain excellent places to examine the evidence for a thoroughgoing review of the current assessment and accountability arrangements, including the place of testing within them, in England’s primary schools.

As I reminded readers in a previous blog the Cambridge Primary Review in 2010 cited assessment reform as one of eleven post-election priorities for the incoming government. Six years and a new government later,  a fundamental review of assessment and testing is still urgently needed.

Assessment reform remains a key CPRT priority. For a round-up of CPR and CPRT evidence on assessment see our Priorities in Action page. This contains links to Wynne Harlen’s CPR and CPRT research reports mentioned above, relevant blogs, CPRT regional activities, CPR and CPRT evidence to government consultations on assessment, and the many CPR publications on this topic.

David Reedy is a co-Director of the Cambridge Primary Review Trust, and General Secretary of the United Kingdom Literacy Association.

 

Filed under: assessment, Cambridge Primary Review Trust, David Reedy, DfE, England, grammar, tests

May 20, 2016 by Sarah Rutty

Joyless, inaccurate, inequitable?

I recently enjoyed the opportunities provided by some longer than average train journeys and the al fresco possibilities of a sunny garden to catch up on my reading. Indeed, I diligently increased my familiarity with a wide range of books; asked questions to improve my understanding of text; summarised the main ideas drawn from more than one paragraph, and worked out the meaning of new words from context.  In short, I demonstrated the skills in reading required of upper KS2 readers.

Which has left me with rather a bee in my bonnet about last week’s KS2 SATs reading paper and its usefulness as an assessment of these skills.

My first buzzing bee in response to the paper: the quality and range of the texts provided to assess our children’s abilities as confident readers. Rather than a range of engaging writing, offering opportunities to demonstrate skills as joyful interrogators of literature and authorial craft, the test offered three rather leaden texts: two fictional and one non-fiction.  We had Maria and Oliver running off from a garden party at the big house to explore an island, which might hold the clue to the secret of a long-standing upper-class family feud. We had Maxine riding her pet giraffe, Jemmy, in South Africa, having an unfortunate encounter with some warthogs (some ferocious, others bewildered) but fortunately learning a lesson about the consequences of not listening to adults. We finished with the non-fiction text about the demise of the Dodo, a text so oddly structured that it appeared to have, rather like another curious creature, been thrown together by committee. The sun-soaked stillness of our inner-city school hall, last Monday morning, was ruffled by the occasional gentle gusting sighs of 76 children trying hard to engage with such dull texts and do so with purposeful determination ‘because I love books and I love reading and I want to do well, but it wasn’t like proper reading.’

Which brings me on to my second buzzing bee: it was most definitely not, to quote year 6 pupil Shueli, anything like ‘proper reading’, nor, I would suggest, a meaningful way to assess whether our children themselves are ‘proper’ readers, using the DfE’s own interim assessment criteria.

The first four questions of the test focussed solely on vocabulary and words in context. For example, Question 1: ‘Find and copy one word meaning relatives from long ago’. If, like many of our children, you did not know the word ‘ancestor’, the answer to this question was almost impossible to work out from context. A first mark lost, and a tiny dent in the self-esteem of pupils who were hoping for a test of their ability to filter and finesse a text for nuance and meaning, rather than find ‘words I should have in my head, but didn’t’ (Sayma B). More gusty sighing.

Question 2 continued to dig deeper into the realm of internal word-lists: ‘the struggle had been between two rival families… which word most closely matches the meaning of the word rival? Tick one: equal, neighbouring, important, competing.’ If you were not familiar with the word ‘rival’, then either ‘important’ or ‘neighbouring’ is a plausible choice in context. I give you some higher order reading reasoning: the children were at a party in a big house, clearly from ‘posh’ families, hence ‘important’ was a perfectly sensible choice; rival football teams play in the same league, so are in some way ‘neighbours’. Both demonstrate a key year 6 reading skill, ‘working out the meaning of new words from context’, a skill our children use routinely but which, in this case, cost them a mark and one more cross on the examiner’s recording sheet.

Bringing me on to bee no. 3: the test appeared to be designed for ease of marking. Only 2 of the 33 questions required extended ‘3 mark’ answers allowing inferential or evaluative thinking: a mere 6 percent of the paper. The rest required word or fact retrieval answers, which are much easier to mark. Our children’s reading SATs scores will reflect this unbalanced diet of question types, resulting in assessments that are neither accurate nor equitable. Not accurate, because teachers, using the national curriculum and the 2015-16 interim assessment framework, assess year 6 readers against a much wider set of criteria, including, for example, reading aloud with intonation, confidence and fluency, and contributions to discussions around book-talk, none of which can be assessed by a simple test. And not equitable, because research indicates that the children most likely to under-perform in vocabulary-biased reading tests are those from the most deprived backgrounds.

The reason for this is that children from lower income or more socially deprived backgrounds often come to school with a more limited vocabulary, because they begin life exposed to fewer words than children from more affluent backgrounds. The gap this discrepancy presents is not insurmountable; the CPRT/IEE dialogic teaching project is one clear example of how putting talk at the heart of children’s learning can help close it. However, a national testing system that skews the reading results by which children and schools are judged and categorised in favour of such a vocabulary-heavy bias is simply not fair. Or purposeful.

I urge you, experienced reader, to stand for a moment in the shoes of Sheuli and Sayma B. I give you a sentence to consider, one which incorporates a word that I learnt from my own recent reading: ‘A gust of wind rippled through the exam hall; it made me pandiculate and look hopefully at the clock.’ Q1: In this sentence, which word most closely matches the meaning of the word ‘pandiculate’? Tick one: ponder, panic, stretch, laugh out loud.

All might seem plausible choices. The experience of the reading SATs last week may indeed have caused our children to ponder, to panic or to laugh out loud in test conditions. It might even have made them pandiculate in earnest, for the correct answer is, of course, ‘stretch’ – to stretch and typically to yawn when awakening from a dull or sleepy interlude. But surely you knew that? It must be fair to assume that we all share the same internal word list. And if this is not the case (shame on you), could you not demonstrate your ability to work out the meaning of a word from the context? No? It cannot be that my test is flawed; it must be you who are a poor reader. My internal bee is susurrating indeed about the value of a national test that reinforces gaps, rather than one which assesses how well we are closing them.

Sarah Rutty is Head Teacher of Bankside Primary School in Leeds, part-time Adviser for Leeds City Council Children’s Services, a member of CPRT’s Schools Alliance, and Co-ordinator of CPRT’s Leeds/West Yorkshire network. Read her previous blogs here. 

 If you work in or near Leeds and wish to become involved in its CPRT network, contact administrator@cprtrust.org.uk.

Filed under: assessment, Cambridge Primary Review Trust, equity, national curriculum, reading, Sarah Rutty, Sats, social disadvantage, tests

May 6, 2016 by Stephanie Northen

Rigor spagis

Amid the gloom of unsavoury Sats and enforced academisation, comes one delicious moment of joy. Schools minister Nick Gibb doesn’t know his subordinating conjunctions from his prepositions. He can’t answer one of the questions he has set children. Despite this woeful (in his eyes) ignorance – though, tellingly, when his mistake is pointed out he says ‘This isn’t about me’ – he has managed to become and to remain a government minister. Need one say any more about the pointlessness of the Spag test?

At least by this time next week it will all be over. The country’s 10 and 11-year-olds will be free to enjoy their final few weeks at primary school, liberated from the government’s oh so very rigorous key stage 2 tests. Like them, I am tired of fractions, tired of conjunctions, tired, in fact, of being told of the need for ‘rigour’. The Education Secretary and the Chief Inspector need to wake up to the fact that rigour is a nasty little word, suggestive of starch and thin lips. Its lack of humour and humanity makes parents and teachers recoil. Check out its origins in one of those dictionaries you recommend children use.

Hopefully the weight of protest here, echoing many in America, will force some meaningful concessions from the ‘rigour revolutionaries’ in time for next year’s tests. Either that, or everyone with a genuine interest in helping young children learn will stand up and say No.  In the words of CPRT Priority 8, Assessment must ‘enhance learning as well as test it’, ‘support rather than distort the curriculum’ and ‘pursue standards and quality in all areas of learning, not just the core subjects’. The opposite is happening at the moment in the name of rigour. It’s not rigour – but it is deadly.

Of course, the memory of subordinating conjunctions and five-digit subtraction by decomposition will fade for the current Year 6s – and for Nick Gibb – unless they turn out to have failed the tests. Mrs Morgan will decide just how rigorous she wants to be in the summer. Politics will determine where she draws the line between happy and sad children. Politics will decide the proportion she brands as failures at age 11, forced to do the tests again at secondary school.

But still the children have these few carefree weeks where primary school can go back to doing what primary school does best – encouraging enquiry into and enjoyment of the world around us. Well, no. Teachers still have to assess writing. And if my classroom is anything to go by, writing has been sidelined over the past few weeks in the effort to cram a few more scraps of worthless knowledge into young brains yearning to rule the country.

So how do we teachers judge good writing? Sadly, that’s an irrelevant question. Don’t bother drawing up a mental list of, for example, exciting plot, imaginative setting, inventive language, mastery of different genres. No, teachers must assess using Mrs Morgan’s leaden criteria, criteria that would never cross the mind of a Man Booker prize judge. Marlon James, last year’s Booker winner and a teacher of creative writing, was praised for a story that ‘traverses strange landscapes and shady characters, as motivations are examined – and questions asked’. No one commented on James’s ability to ‘use a range of cohesive devices, including adverbials, within and across sentences and paragraphs’.

The dead hand of rigour decrees that we judge children’s ability to employ ‘passive and modal verbs mostly appropriately’. We have to check that they use ‘adverbs, preposition phrases and expanded noun phrases effectively to add detail, qualification and precision’. (Never mind thrilling, moving or frightening, I do love a story to be detailed, precise and qualified.) We forget to read what the children have actually written in the hunt for ‘inverted commas, commas for clarity, and punctuation for parenthesis [used] mostly correctly, and some correct use of semi-colons, dashes, colons and hyphens’. Finally, it goes without saying that young children must ‘spell most words correctly’.

There are eight criteria in the Government’s interim framework for writing at the ‘expected standard’ – expected by whom, one is tempted to ask. Only one of the eight relates to the point of putting pen to paper in the first place. Aside from ‘the pupil can create atmosphere, and integrate dialogue to convey character and advance the action’, the writing criteria spring entirely from the Government’s obsession with grammar, punctuation and spelling. I fear it is only too easy to meet the ‘expected standard’ with writing that is as lifeless, uninspiring and rigorous as the criteria themselves.

If writing is not to entertain and inform, then why bother? In the old days of levels, teachers had to tussle with Assessing Pupils’ Progress (APP) grids – a similar attempt to standardise the marking of a creative activity. But at least the APP grids acknowledged that good writing should make an impact. Texts should be ‘imaginative, interesting and thoughtful’. Sentence clauses and vocabulary should be varied not to tick a grammar checklist box but to have an ‘effect’ on the reader.

So now we have to knuckle down and make sure children’s writing satisfies the small-minded rigour revolutionaries. Can we slip in a semi-colon and a couple of brackets without spoiling the flow of a youthful reworking of an Arthurian legend? How many times can we persuade our young authors to write out their stories in order to ensure ‘most’ words are spelt correctly? And what to do about those blank looks when we suggest that they repeat a phrase from one paragraph to the next to ensure they have achieved ‘cohesion’?

Mrs Morgan claims the ‘tough’ new curriculum will foster a love of literature. Hers is a mad, topsy-turvy world, one with too many ‘strange landscapes and shady characters’ of its own. It is good that, at last, ‘motivations are examined – and questions asked’. Keep up the good work, everyone. We can stop the rigour revolutionaries.

Stephanie Northen is a primary teacher and journalist and one of our regular bloggers. She contributed to the Cambridge Primary Review final report and is a member of the Board of the Cambridge Primary Review Trust.

Filed under: assessment, Cambridge Primary Review, DfE, Sats, Stephanie Northen, tests

March 18, 2016 by Nancy Stewart

Baseline assessment – we’re not buying

Last autumn hundreds of thousands of four-year-old children, as diverse as any group pulled from the population, were each assigned a single number score which purported to predict their future progress in learning. They had been baselined.

They arrived in reception classes from school nurseries, from private or voluntary nurseries, from childminders, and from homes where they had no previous experience of early years provision.  They came from families awash with books, talk and outings, from families struggling with the economic and emotional pressures of life, and from troubled backgrounds and foster care.  Some were hale and hearty while others had a range of health conditions or special needs.  Some were fluent in English and some spoke other languages with little or no English.  And some were nearly a year older than the youngest, 25 percent of their life at that age. Yet with no quarter given for such differences, they were labelled with a simple number score within six weeks of arriving at school.

In the face of widespread and vehement opposition (including from CPRT’s directors) DfE had decided on baseline assessment not to support children’s learning, but as a primary school accountability measure for judging schools when these children reach Year 6. Unsurprisingly, recent press reports indicate that lack of comparability between the three approved schemes may mean the baseline policy will be scrapped – possibly in favour of a simple ‘readiness check’.  Frying pans and fires come to mind.

The fact is, there is no simple measure that can accurately predict the trajectory of a group of such young children. This is not to deny a central role for assessment.  Teachers assess children on arrival in a formative process of understanding who they are, what they know and understand, how they feel and what makes them tick, in the service of teaching them more effectively.

As CPRT has frequently pointed out in its evidence to government assessment reviews and consultations, the confusion of assessment with accountability results in a simplistic number score which ignores the range and complexity of individual learning and development, over-emphasises the core areas of literacy and maths required by the DfE, and places a significant additional burden on teachers in those important early weeks of forming relationships and establishing the life of a class.

Recent research by the Institute of Education into the implementation of baseline assessment confirms that teachers found the process added to their workloads, yet only 7.7 percent thought it was an ‘accurate and fair way to assess children’ and 6.7 percent agreed it was ‘a good way to assess how primary schools perform’.

What is the harm?  Aside from the waste of millions of pounds of public money going to the private baseline providers, there is concern about the impact of the resulting expectations on children who receive a low score within their first weeks in school, and who may start out at age four wearing an ‘invisible dunce’s cap’. CPRT drew attention to this risk in its response to the accountability consultation, saying ‘Notions of fixed ability would be exacerbated by a baseline assessment in reception that claimed to reliably predict future attainment.’ Children whose life circumstances place them at risk of low achievement in school may be placed in groups for the ‘slower’ children and subjected to an intense diet of literacy and numeracy designed to help them ‘catch up’ – denying them the rich experiences that should be at the heart of their early years in school and that provide the foundation they need.

Better Without Baseline echoes CPR’s statement on assessment in its 2010 list of policy priorities for the Coalition government. CPR urged ministers to:

Stop treating testing and assessment as synonymous … The issue is not whether children should be assessed or schools should be accountable – they should – but how and in relation to what.

Unfortunately policy makers are seduced by the illusion of scientific measurement of progress, using children’s scores to judge the quality of schools.  Yet there are more valid ways to approach accountability.  Arguing for a more comprehensive framework, Wynne Harlen said in her excellent research report for CPRT:

What is clearly needed is a better match between the standards we aim for and the ones we actually measure (measuring what we value, not valuing what we measure). And it is important to recognise that value judgements are unavoidable in setting standards based on ‘what ought to be’ rather than ‘what is’.

Baseline assessment is not a statutory requirement, and this year some 2000 schools decided not to opt in; that remains a principled option for the future.  We can hope that government will think again, and remove the pressure on schools to buy one of the current schemes.   What is needed is not a quick substitute of another inappropriate scheme such as a ‘readiness check’, but a full and detailed review of assessment and accountability from the early years onward, where education professionals come together to discuss and define what matters.  The aim should be to design a system of measurement that is respected, useful and truly supports accountability not only for public investment but most importantly to the learners we serve.

Nancy Stewart is Deputy Chair of TACTYC, the Association for Professional Development in Early Years.

Filed under: assessment, baseline assessment, Cambridge Primary Review Trust, curriculum, DfE, early years, evidence, Nancy Stewart, policy, tests

February 26, 2016 by Stephanie Northen

Boycott the Sats

Schools are scrambling to prepare children for the new Sats tests to be taken in May. Teachers who have never before ‘taught to the test’ are gloomily conceding that the Government has left them no other option. Children are being drilled in the mechanics of adverbial clauses and long division, forced to spell vital words such as ‘pronunciation’ and ‘hindrance’, and helped to write stories showing their mastery of the passive voice and modal verbs. Headteachers shake their heads sorrowfully. Advisers grimace and say ‘nothing to do with us’. None of this should be happening – and finally there is just a possibility that the madness will stop.

First, came the suggestion of a boycott of baseline testing. Now, at last, there is a call to cancel the KS2 Sats. I watch the signatures grow daily (hourly) hoping they represent the moment when the classroom worms such as myself finally turned and said no.

The only word on the spelling list that Year 6s should learn is ‘sacrifice’ as this will enable them to write to the Education Secretary pointing out that their learning is being sacrificed to a mean-spirited and regressive assessment system.

The new tests and assessments have been designed, not to discover and celebrate what children can do, but to catch them out. Here are just a few examples. It is essential, says the Government, that KS1 children are taught maths using concrete and visual aids such as Numicon and number squares.  So which callous wretch decreed that the same children must be deprived of these props when taking their Sats? In my school, children wept because they could see the number squares but were not allowed to use them.

I’d also like to meet the sour-faced creators of the sample grammar test who asked Year 6 children to add suffixes to nouns to create adjectives (clearly a life skill) and then decided that getting five out of six right merited no marks. Likewise I’d love to shake hands with the generous soul who recently decreed nul points for any child misplacing a comma when separating numbers in the thousands in their KS2 maths Sats. Similarly, there’s no mercy for the left-handed child who struggles to join up his handwriting or for the fast-thinking kid who writes brilliantly but forgets her full stops.

Assessment should, as CPRT recommends, ‘enhance learning as well as test it’ and ‘support rather than distort the curriculum.’

Assessment in the CPRT spirit ensures that the children in my class are set a range of appropriate challenges every day. As a consequence, they are all generally making cheerful progress, though some more rapidly than others and some have greater strengths in one area of learning than another. Yes, I’m sorry to say, they are inconsistent: they have been known to go backwards and they even make mistakes. In other words, they are human beings.

The new tests have not been designed with humans in mind, let alone small humans. Rather they have been created by cyborgs for baby cyborgs. If you don’t believe me, watch the bizarre 2016 KS2 assessment webinar from the Standards and Testing Agency.

The STA cyborgs explain why it was necessary to ditch the ‘best fit’ model of levels where teachers, heaven forbid, used their professional judgement to decide if a child had ticked enough boxes to be awarded a level 3c. According to the cyborgs, parents were confused by their 3c kid being able to do, or not do, things that their mate’s 3c kid could or couldn’t do. Clearly in cyborg land, parents had nothing better to do than check that all children assessed at the same level had exactly the same set of skills and knowledge. This is just so ridiculous as to be laughable were it not for what is currently happening in the nation’s classrooms.

In terms of teacher assessment, best fit has been replaced by perfect fit. Now we have to tick all the boxes in order to judge children to be working at the ‘expected standard’. If just one box cannot be ticked, children are classified as ‘working towards’. However, it doesn’t end there. If a child doesn’t tick all those boxes, they will cascade back down the (not) levels potentially all the way to Year 1. Take writing as an example. One of the best young writers I have taught would not qualify even as ‘working towards’ because she didn’t join up her handwriting. Likewise imaginative but dyslexic 10-year-olds will slip down and down because they can’t spell words supposedly appropriate for eight-year-olds such as ‘occasionally’, ‘reign’ or ‘possession’. And, according to the STA cyborgian webinar, there is ‘no flexibility’ on this.

Yet not only is it inflexible, mean spirited and regressive, it is also just not fair. The current Year 6 has only had two years at most to prepare for this newly punitive world of harder work and pitiless mark schemes. And make that only three months in terms of writing where standards have been wrenched up from a level 4b to a 5c. Parents do not know what harsh judgements await their children this summer. If they did, chances are they would support a boycott. As Warwick Mansell, writing in a CPRT blog back in 2014, wisely commented:

The single ‘working at national standard’ – or not – verdict, where it is to be offered, also seems to invite a simple ‘pass/fail’ judgement. This, it is hard to avoid thinking, will set up the view among many children that they are failures at an early age.

And not only the children. Teachers are under tremendous pressure to ensure their pupils reach the new standards in reading, maths, grammar and writing – irrespective of individual strengths or weaknesses. Yet we are supposed to achieve this without allowing these standards ‘to guide individual programmes of study, classroom practice or methodology’ as the STA disingenuously insists. Sometimes I wonder if I’m stupid or perhaps was asleep when a new era of Orwellian doublethink dawned. How on earth do children learn to ‘use passive and modal verbs mostly appropriately’ if I don’t explicitly teach them? How do they learn to add, subtract or multiply fractions without that being a programme of study? Perhaps in some Utopian classroom there is a lesson plan – presumably in something tasteful like quilting – that miraculously transfers this knowledge to children in such a form that they can pass their Sats and meet the ‘expected standard’.

But until someone lends me that plan, I’d rather we all just said no.

Stephanie Northen is a primary teacher and journalist. She contributed to the Cambridge Primary Review final report and is a member of the Board of the Cambridge Primary Review Trust.

Filed under: assessment, Cambridge Primary Review Trust, DfE, Sats, Stephanie Northen, tests

October 30, 2015 by Robin Alexander

Face the music

Opera North has reported dramatic improvements in key stage 2 test results in two primary schools, one in Leeds, the other in Hull, and both in areas deemed severely deprived. ‘Dramatic’ in this instance is certainly merited: in one of the schools the proportion of children gaining level 4 in reading increased from 78 per cent in 2014 to 98 per cent in 2015, with corresponding increases in writing (75 to 86 per cent) and mathematics (73 to 93 per cent).

But what, you may ask, has this to do with opera?  Well, since 2013 the schools in question – Windmill Primary in Leeds and Bude Park Primary in Hull – have been working with Opera North as part of the Arts Council and DfE-supported In Harmony programme. This aims ‘to inspire and transform the lives of children in deprived communities, using the power and disciplines of community-based orchestral music-making.’  Opera North’s In Harmony project, now being extended, is one of six, with others in Gateshead, Lambeth, Liverpool, Nottingham and Telford. In the Leeds project, every child spends up to three hours each week on musical activity and some also attend Opera North’s after-school sessions. Most children learn to play an instrument and all of them sing. For the Hull children, singing is if anything even more important. Children in both schools give public performances, joining forces with Opera North’s professional musicians. For the Leeds children these may take place in the high Victorian surroundings of Leeds Town Hall.

Methodological caution requires us to warn that the test gains in question reflect an apparent association between musical engagement and standards of literacy and numeracy rather than the proven causal relationship that would be tested by a randomised controlled trial (and such a trial is certainly needed).  But the gains are sufficiently striking, and the circumstantial evidence sufficiently rich, to persuade us that the relationship is more likely to be causal than not, especially when we witness how palpably this activity inspires and sustains the enthusiasm and effort of the children involved. Engagement here is the key: without it there can be no learning.

It’s a message with which for many years arts organisations and activists have been familiar, and which they have put into impressive practice.  To many members of Britain’s principal orchestras, choirs, art galleries, theatres and dance companies, working with children and schools is now as integral to their day-to-day activity as the shows they mount, while alongside publicly-funded schemes like In Harmony, the Prince’s Foundation for Children and the Arts pursues on an even larger scale the objective of immersing disadvantaged children in the arts by taking them to major arts venues and enabling them to work with leading arts practitioners.  Meanwhile, outside such schemes many schools develop their own productive partnerships with artists and performers on a local basis.

Internationally, the chance move of a major German orchestra’s headquarters and rehearsal space into a Bremen inner-city secondary school created first unease, then a dawning sense of opportunity and finally an extraordinary fusion of students and musicians, with daily interactions between the two groups, students mingling with orchestra members at lunch and sitting with them in rehearsals, and a wealth of structured musical projects.

But perhaps the most celebrated example of this movement is Venezuela’s El Sistema, which since 1975 has promoted ‘intensive ensemble participation from the earliest stages, group learning, peer teaching and a commitment to keeping the joy of musical learning and music making ever-present’ through participation in orchestral ensembles, choral singing, folk music and jazz. El Sistema’s best-known ambassador in the UK – via its spectacular performances at the BBC Proms – is the Simon Bolivar Youth Orchestra, and it is El Sistema that provides the model for In Harmony, as it does, obviously, for Sistema Scotland with its ‘Big Noise’ centres in Raploch (Stirling), Govanhill (Glasgow) and Torry (Aberdeen).

By and large, the claims made for such initiatives are as likely to be social and personal as musical, though Geoffrey Baker  has warned against overstating their achievements and even turning them into a cult. Thus Sistema Scotland’s Big Noise is described as ‘an orchestra programme that aims to use music making to foster confidence, teamwork, pride and aspiration in the children taking part’.  There are similar outcomes from Deutsche Kammerphilharmonie Bremen’s move into the Tenever housing estate, with dramatic improvements reported in pupil behaviour and the school’s reputation transformed from one to be avoided to one to which parents from affluent parts of the city now queue to send their children.

Similarly, the initial NFER evaluation report on In Harmony cites ‘positive effects on children’s self-esteem, resilience, enjoyment of school, attitudes towards learning, concentration and perseverance’ with, as a bonus, ‘some perceived impact on parents and families including raised aspirations for their children, increased enjoyment of music and confidence in visiting cultural venues, and increased engagement with school.’  Children and the Arts sees early engagement with the arts through its Quest and Start programmes as a way of ‘raising aspirations, increasing confidence, improving communication skills and unlocking creativity.’ Such engagement is offered not only in ‘high-need areas where there is often socio-economic disadvantage or low arts access’ but also, through the Start Hospices programme, to children with life-limiting and life-threatening illnesses and conditions.

The SATs score gains from Opera North’s In Harmony projects in Leeds and Hull add a further justificatory strand; one, indeed, that might just make policymakers in their 3Rs bunker sit up and take notice.  For while viewing the arts as a kind of enhanced PSHE – a travesty, of course – may be just enough to keep these subjects in the curriculum, demonstrating that they impact on test scores in literacy and numeracy may make their place rather more secure.

This, you will say, is unworthily cynical and reductive. But cynicism in the face of policymakers’ crude educational instrumentality is, I believe, justified by the curriculum utterances and decisions of successive ministers over the past three decades, while the reductiveness is theirs, not mine. Thus Nicky Morgan excludes the arts from the EBacc, but in her response to the furore this provokes she reveals the limit of her understanding by confining her justification for the arts to developing pupils’ sense of ‘Britishness’, lamely adding that she ‘would expect any good school to complement [the EBacc subjects] with a range of opportunities in the arts’.  ‘A range of opportunities’ – no doubt extra-curricular and optional – is hardly the same as wholehearted commitment to convinced, committed and compulsory arts education taught with the same eye to high standards that governments reserve for the so-called core subjects.  Underlining the poverty of her perspective, Morgan tells pupils that STEM subjects open career options while arts subjects close them.

What worries me no less than the policy stance – from which, after all, few recent Secretaries of State have deviated – is the extent to which, in our eagerness to convince these uncomprehending ministers that the arts and arts education are not just desirable but essential, we may deploy only those justifications we think they will understand, whether these are generically social, behavioural and attitudinal (confidence, self-esteem) or in the realm of transferable skills (creativity, literacy, numeracy), or from neuroscience research (attention span, phonological awareness, memory). The otherwise excellent 2011 US report on the arts in schools from the President’s Committee on the Arts and Humanities falls into the same trap of focussing mainly on social and transferable skills, though it does at least synthesise a substantial body of research evidence on these matters which this country’s beleaguered advocates of arts education will find useful.

Let me not be misunderstood: the cognitive, personal and social gains achieved by El Sistema, Children and the Arts, In Harmony and similar ventures are as impressive as they are supremely important for children and society, especially in cultures and contexts where children suffer severe disadvantage.  And if it can be shown that such experiences enhance these children’s mastery of literacy and numeracy, where in the words of CPRT’s Kate Pickett, they encounter a much steeper ‘social gradient’ than their more affluent peers, then this is doubly impressive.

But the danger of presenting the case for arts education solely in these terms, necessary in the current policy climate though it may seem to be, is that it reduces arts education to the status of servant to other subjects, a means to someone else’s end (‘Why study music?’ ‘To improve your maths’) rather than an end in itself; and it justifies the arts on the grounds of narrowly-defined utility rather than intrinsic value. It also blurs the vital differences that exist between the various arts in their form, language, practice, mode of expression and impact.  The visual arts, music, drama, dance and literature have elements in common but they are also in obvious and fundamental ways utterly distinct from each other. They engage different senses, require different skills and evoke different responses – synaptic as well as intellectual and emotional. All are essential. All should be celebrated.

This loss of distinctiveness is perhaps unwittingly implied by the evaluation of the only Education Endowment Foundation (EEF) project in this area. EEF evaluates ‘what works’ interventions designed to enhance the literacy and numeracy attainment of disadvantaged pupils (including CPRT’s own dialogic teaching project) and its ‘Act, Sing, Play’ project has tested the relative impact of music and drama on the literacy and numeracy attainment of Year 2 pupils. It found no significant difference between the two subjects. So, in the matter of using the arts as a way to raise standards in the 3Rs, do we infer that any art will do?

So, yes, the power of the arts, directly experienced and expertly taught, is such that they advance children’s development, understanding and skill beyond as well as within the realms of the auditory, visual, verbal, kinaesthetic and physical. And yes, it should be clearly understood that while the arts can cultivate affective and social sensibilities, when properly taught they are in no way ‘soft’ or intellectually undemanding, and to set them in opposition to so-called ‘hard’ STEM subjects, as Nicky Morgan did, is as crass as claiming that creativity has no place in science or engineering. But until schools have the inclination and confidence to champion art for art’s sake, and to make the case for each art in its own terms, and to cite a wider spectrum of evidence than social development alone, arts education will continue to be relegated to the curriculum’s periphery.

For this is a historic struggle against a mindset that is deeply embedded and whose policy manifestations include a national curriculum that ignores all that we have come to know about the developmental and educative power of the arts, and indeed about their economic as well as cultural value, and perpetuates the same ‘basics with trimmings’ curriculum formula that has persisted since the 1870s and earlier.

That’s why the Cambridge Primary Review argued that the excessively sharp differentiation of ‘core’ and ‘foundation’ subjects should cease and all curriculum domains should be approached with equal seriousness and be taught with equal conviction and expertise, even though, of course, some will be allocated more teaching time than others. This alternative approach breaks with the definition of ‘core’ as a handful of ring-fenced subjects and allows us instead to identify core learnings across a broader curriculum, thereby greatly enriching children’s educational experience, maximising the prospects for transfer of learning from one subject to another, and raising standards.

Seriousness, conviction, expertise: here we confront the challenge of teaching quality. Schemes like Sistema, In Harmony and those sponsored by Children and the Arts succeed because children encounter trained and talented musicians, artists, actors and dancers at the top of their game. These people provide inspirational role models and there is no limit to what children can learn from them. In contrast, music inexpertly taught – and at the fag-end of the day or week, to boot – not only turns children off but also confirms the common perception that music in schools is undemanding, joyless and irrelevant. Yet that, alas, is what too many children experience. For notwithstanding the previous government’s investment in ‘music hubs’, Ofsted remains pessimistic as to both the quality of music teaching and – no less serious – the ability of some school leaders to judge it and take appropriate remedial action, finding them too ready to entertain low expectations of children’s musical capacities.

But then this is another historic nettle that successive governments have failed to grasp. In its final report the Cambridge Primary Review recommended (page 506) a DfE-led enquiry into the primary sector’s capacity and resources to teach all subjects, not just ‘the basics’, to the highest standard, on the grounds that our children are entitled to nothing less and because of what inspection evidence consistently shows about the unevenness of schools’ curriculum expertise. DfE accepted CPR’s recommendation and during 2010-12 undertook its curriculum capacity enquiry, in the process confirming CPR’s evidence, arguments and possible solutions. However, for reasons only DfE can explain, the resulting report was never made public (though as the enquiry’s adviser I have seen it).

In every sense it’s time to face the music.

As well as being Chair of the Cambridge Primary Review Trust, Robin Alexander is a Trustee of the Prince’s Foundation for Children and the Arts.

www.robinalexander.org.uk

Filed under: arts education, assessment, Cambridge Primary Review Trust, creativity, disadvantage, evidence, music education, national curriculum, Robin Alexander, tests
