The Cambridge Primary Review Trust


October 21, 2016 by David Reedy

Assessment, testing and accountability: a suggestion for an alternative framework

The data from the new 2016 tests for 11 year olds in England are gradually trickling out. We have been informed that 48 per cent of the children did not reach the new expected standards in reading, writing and mathematics combined (compared with 35 per cent in 2015 under the old system) and are at risk of being labelled ‘failures’. In addition, the calculations have been done to identify each Y6 child’s scaled score and progress measure. Parents have been told something like ‘In reading your child got a scaled score of 99 against the expected standard and a progress score of 1.6’. Not terrifically helpful, particularly if the parent has become familiar with Levels over the last 28 years.

Combined with the anecdotal evidence about the problems children had with the reading test, and the abandonment of the grammar test for seven year olds after it was inadvertently leaked, it is no surprise that more and more educationists, parents and organisations are calling for a fundamental review.

I have written in previous blogs about the current system and its shortcomings, now exacerbated by the 2016 experience, drawing on Wynne Harlen’s 2014 research report for CPRT, Assessment, Standards and Quality of Learning in Primary Education, which outlines the evidence concerning the impact of high stakes testing and compares England’s system with those of a number of other countries. Harlen’s key point, that ‘the system … for primary schools in England still suffers from over-dependence on testing and the use of end of Key Stage 2 tests for too many purposes’ (p. 32), indicates that we must consider a fundamentally different approach.

In this blog I outline the key strands which I think would need to be considered under any review, with some suggestions concerning what should be incorporated, based on the available evidence.

The three strands for a comprehensive system of assessment and accountability are at individual child level, school level and national level.

At individual child level the focus must be assessment for learning and assessment of learning (i.e. formative and summative assessment). Assessment must be used to help children while they are learning and to find out what they have learned at a particular point in time. Testing can be a part of this, as it can inform overall teacher assessment and help to identify any potential gaps in learning. However, tests cannot give all the information needed to take a rounded view of what children need to learn and what they know and can do. As Harlen states: ‘the evidence shows that when teachers draw on a range of sources of evidence, then discuss and moderate with other teachers, assessment is more accurate’. Relying on the score from a single, externally marked test of reading at 11, for example, to identify reading ability simply does not provide enough evidence for a reliable judgement.

As a first move in this direction, the system currently used for seven year olds should be adopted at the end of KS2: teacher assessment based on a range of evidence, including, but not determined by, a formal test.

In addition, the plethora of evidence-based assessment resources available should be utilised to underpin an approach that is qualitative as well as quantitative. For example, the CLPE/UKLA et al Reading and Writing Scales can be used to identify children’s progress as well as to indicate next steps for learning. It is also worth looking at the end of each of these scales, where an extensive bibliography shows how firmly they are based in research evidence. Something DfE might consider doing.

In summary, the principle that assessment of any kind should ultimately improve learning for children is central and should be the criterion against which all assessment practices in and beyond school should be judged.

At school level the focus must be on partnership in assessment as well as accountability. Firstly, that means not only being accountable to parents and the local community the school serves, but also working systematically with them as partners.

Parents have a key role to play in assessment which goes beyond being regularly reported to, and includes the sharing of information about the progress of their children both within and beyond school to obtain a fully informed picture. This would be followed by discussions concerning what the school is doing more generally to promote learning in all its aspects.

Schools should hold themselves to account through systematic self evaluation. This self evaluation should be externally moderated by local partners, crucially through strengthened local authorities, and nationally through a properly independent HMI. However, the system should not feel punitive, as it does to many schools under the current arrangements, but developmental and supportive, including when a school is not doing as well as it should. Any moderated self evaluation should be formative for the school as well as demonstrating accountability.

CPRT has made assessment reform one of its eight priorities, aiming to

Encourage approaches to assessment that enhance learning as well as test it, that support rather than distort the curriculum and that pursue standards and quality in all areas of learning, not just the core subjects.

CPRT’s Priorities in Action assessment webpage lists our multifaceted response to this priority including reports, briefings, blogs, parliamentary and government submissions and purpose-designed CPD for schools.

The final report of the Cambridge Primary Review was also clear that inspection needed to change (p. 500) and recommended that a new model be explored which focussed much more on classroom practice, pupil learning and the curriculum as a whole.

In any review of assessment, the accountability system must be reviewed at the same time. That goes for accountability at national level too.

Current arrangements at primary level are both narrow, only focusing on some aspects of core subjects, and useless for making comparisons across time as the criteria and tests keep changing. A system of sample surveys should be formulated to monitor national standards. These would be based on a large number of components and be able to extend well beyond the core subjects if a rolling programme was organised. England would then be able to judge whether primary education as a whole, in all its aspects, based on a comprehensive set of aims, was being successful and was improving over time. Currently this is impossible to do.

Thus it is not surprising that more and more people and organisations are, alongside CPRT, calling for a fundamental review of assessment, testing and accountability, and that a major campaign is about to get underway. The campaign is to be called ‘More than A Score’ and a major conference has been announced for December 3rd. CPRT fully supports this campaign.

This move to a more effective approach would not be a simple process. As CPR’s final report stated in 2010, ‘Moving to a valid, reliable and properly moderated procedures for a broader approach to assessment will require careful research and deliberation’ (p. 498).

It will take some time, but I believe, for all involved, it will be well worth the effort.

Just as this blog was being prepared, Justine Greening, Secretary of State for Education, made an announcement about primary school assessment. This included a commitment to ‘setting out steps to improve and simplify assessment arrangements’, the abandonment of Y7 resits, and no new tests to be introduced before the 2018/19 academic year. There is a welcome acknowledgement in the tone of the statement that current arrangements are not working, although the last point has alarming implications about the introduction of further, unnecessary, high stakes tests.

The Secretary of State also announced another consultation, to take place next year, on assessment, testing and accountability. We have seen many of these so-called ‘consultations’ before, where the views of educationalists and the evidence from research and experience have been completely ignored.

Another ‘consultation’ is not needed. What is needed is a thorough, independent review in which all stakeholders are represented, and a government that is prepared to listen and respond positively.

David Reedy is a co-Director of the Cambridge Primary Review Trust, and General Secretary of the United Kingdom Literacy Association.

Filed under: accountability, assessment, Cambridge Primary Review Trust, David Reedy, DfE, England, inspection, tests

May 27, 2016 by David Reedy

Time for radical change: grammar testing in England’s primary schools  

It has not been a good couple of weeks for testing in England’s primary schools.

There have been leaks of both the KS1 and KS2 spelling, grammar and punctuation tests, leading to the KS1 test being scrapped for this year and accusations by ministers that malign forces are at work to undermine the government’s education reform process.

Baseline assessment for four year olds has also gone, as its unreliability for accountability purposes became so obvious that continuation became untenable. (Not that the problems with testing and accountability are unfamiliar to teachers or parents, as Stephanie Northen and Sarah Rutty reminded us in their powerful recent blogs).

Even before his problem with subordinating conjunctions, Nick Gibb was complaining about the current situation in a speech at the ASCL curriculum summit on 27 April:

You do not need me to tell you that the implementation of the new key stage one and key stage two tests has been bumpy, and I and the department are more than willing to accept that some things could have been smoother. The current frameworks for teacher assessment, for example, are interim, precisely because we know that teething problems that exist in this phase of reform need to leave room for revision.

‘Teething problems’ is a bit of an understatement.

The Cambridge Primary Review Trust is committed to looking at what the widest range of available evidence tells us about assessment and assessment reform, including from experience such as Stephanie’s and Sarah’s as well as formal research, and it argues that decisions should be made at both policy and classroom level based upon that evidence.

I want to briefly look at the research evidence on the grammar tests for seven and eleven year olds and the government’s claims for them, to complement and add to the blogs of the last fortnight.

Nick Gibb argued in his ASCL speech, as well as on earlier occasions, that testing is a way of raising standards in the core areas of reading, writing and mathematics. He said:

Against those who attack the underlying principle of these reforms, I stand firm in my belief that they are right and necessary. Our new tests in grammar, punctuation and spelling have been accused by many in the media of teaching pupils redundant or irrelevant information. One fundamental outcome of a good education system must be that all children, not just the offspring of the wealthy and privileged, are able to write fluent, cogent and grammatically correct English.

He thus conflates performance in these tests with writing fluently and cogently. But there is simply no evidence that a test will help children to get better at writing by asking six and seven year olds to identify an adverb in ‘Jamie knocked softly on his brother’s bedroom door’ or to decide whether ‘One day, Ali decided to make a toy robot’ is a question, statement, command or exclamation. The experience of this year’s Y2 and Y6 children, before the requirement to do the Y2 test was dropped, was, in many cases, that of separate grammar lessons where they were trained for the test, making sure they could identify word classes and sentence types through decontextualised exercises so that they would be able to answer questions like these. If the test is reintroduced in 2017 this will happen again, distorting the curriculum with little or no benefit to pupils.

I make this claim because the research evidence over many years is unequivocal. Debra Myhill and her colleagues at Exeter University have extensively investigated the teaching of grammar, and have shown that explicit attention to grammar in the context of ongoing teaching can help pupils to improve their writing. Myhill summarised the evidence in an April 2013 TES article. She wrote:

I did a very detailed analysis of the test and I had major reservations about it. I think it’s a really flawed test. The grammar test is totally decontextualised. It just asks children to do particular things, such as identifying a noun. But 50 years of research has consistently shown that there is no relationship between doing that kind of work and what pupils do in their writing. I think children will do better in the test than they are able to in their writing because it isolates the skills so that children only have to think about one thing at a time.

Myhill adds that the test will tend to overestimate children’s ability to manipulate grammar and make appropriate choices in their writing. It would be far more valid to assess that ability by looking at how children manipulate grammar in the context of the writing they do across the broad curriculum they experience. This test is therefore unreliable. It is also invalid.

In her CPRT research report on assessment and standards Wynne Harlen defines consequential validity as ‘how appropriate the assessment results are for the uses to which they are put’. A test which focuses on labelling grammatical features may be valid in testing whether children know the grammatical terms, but it is not valid for making judgements about writing ability more generally. The evidence emphatically does not support Nick Gibb’s claim that the test will lead to ‘fluent, cogent and grammatically correct English’. These grammar tests will not and cannot do what the government’s rhetoric claims.

The Cambridge Primary Review Trust, like the Cambridge Primary Review, supports the use of formal assessments, in which tests have a role, as part of a broader approach to identifying how well children are learning in school and how well each school is doing, though like many others it warns against overloading such assessments with tasks like system monitoring. Wynne Harlen’s reports for CPR and CPRT, and the assessment chapters (16 and 17) in the CPR final report, remain excellent places to examine the evidence for a thoroughgoing review of the current assessment and accountability arrangements, including the place of testing within them, in England’s primary schools.

As I reminded readers in a previous blog, the Cambridge Primary Review in 2010 cited assessment reform as one of eleven post-election priorities for the incoming government. Six years and a new government later, a fundamental review of assessment and testing is still urgently needed.

Assessment reform remains a key CPRT priority. For a round-up of CPR and CPRT evidence on assessment see our Priorities in Action page. This contains links to Wynne Harlen’s CPR and CPRT research reports mentioned above, relevant blogs, CPRT regional activities, CPR and CPRT evidence to government consultations on assessment, and the many CPR publications on this topic.

David Reedy is a co-Director of the Cambridge Primary Review Trust, and General Secretary of the United Kingdom Literacy Association.


Filed under: assessment, Cambridge Primary Review Trust, David Reedy, DfE, England, grammar, tests

March 13, 2015 by David Reedy

Are we nearly there yet?

February saw a flurry of government announcements about assessment in English schools.

On 4 February information about reception baseline assessment was published. In summary, this states that from September 2015 schools may use a baseline assessment of children’s attainment at the beginning of the reception year. DfE has commissioned six providers, which are listed in the document. Schools can choose the provider they prefer. This is not compulsory, but the guidance states:

Government-funded schools that wish to use the reception baseline assessment from September 2015 should sign up by the end of April. In 2022 we’ll then use whichever measure shows the most progress: your reception baseline to key stage 2 results or your key stage 1 results to key stage 2 results.

From September 2016 you’ll only be able to use your reception baseline to key stage 2 results to measure progress. If you choose not to use the reception baseline, from 2023 we’ll only hold you to account by your pupils’ attainment at the end of key stage 2.

The Early Years Foundation Stage Profile (EYFSP) stops being compulsory in September 2016 too.

DfE is therefore essentially ensuring that emphasis is placed on a narrow measure of attainment in language, literacy and mathematics (with a few small extra bits in most of the six cases), rather than an assessment which presents a much more holistic view of a child’s learning and development. There is a veiled threat implied in the information quoted above: if a school doesn’t use one of these baselines, progress will not be taken into account when the school is judged as good or not. Opting out of the baseline might advantage a school whose children would score highly on it but then make only expected progress, while still achieving high scores at the end of KS2; the reverse applies where children would score very low on the baseline. Are schools now going to gamble on whether to do these or not?

There are other serious issues leading to further uncertainty for schools. Almost all the recommended schemes are restricted mainly to language, literacy and mathematics, so progress and the school’s effectiveness would be judged on a narrow view of what primary education is for. Five of the six chosen systems do not explicitly draw on parents’ and carers’ knowledge of their children and thus will be based on incomplete evidence. As TACTYC has pointed out, there are fundamental concerns about reliability and validity.

Comparisons between schools and overall judgements would be compromised when there are six different ways to measure the starting points of children in reception. It is inconsistent to allow schools to choose between six providers at baseline but only allow one choice at age 7 and 11.

Finally, since progress from the baseline will first be measured at age 11 in summer 2023, there will be at least two general elections before then. Will education policy on assessment remain static until then? On current experience that is highly unlikely.

Alongside this inconsistency and uncertainty about the reception baseline the government published its response to the consultations about the draft performance descriptors for the end of KS1 and KS2.

The responses were significantly more negative than positive, with the vast majority of respondents indicating that these descriptors were not good enough and would not be able to do the job they were designed to do. Indeed, nearly half thought the descriptors were not fit for purpose.

At the same time, and no doubt as a result of the consultation, DfE announced an Assessment without Levels Commission with the remit of ‘supporting primary and secondary schools with the transition to assessment without levels, identifying and sharing good practice in assessment’.

This is clearly intended to address the significant uncertainty about ongoing and summative assessment at the end of key stages, where schools continue to struggle to understand what DfE’s thinking actually is now that levels have been abolished.

Schools in England are in a cleft stick. Do they choose to do one of the baseline tests, which will take considerable time to administer one-to-one, without knowing whether it will be used in seven years’ time or be of use next week to help plan provision? Can they afford to wait for the assessment commission to recommend an approach to assessment without levels, or do they get on with it and possibly end up with a system that doesn’t fit with what is recommended?

Thus the answer to the question in this blog’s title, on the basis of the evidence above, is ‘Who knows?’

What a contrast to the situation in Wales where, also in February, Successful Futures, the review of the curriculum and assessment framework for Wales led by Professor Graham Donaldson, was published.

This states:

Phases and key stages should be removed in order to improve progression, and should be based on a well-grounded, nationally described continuum of learning that flows from when a child enters education through to the end of statutory schooling at 16 and beyond.

Learning will be less fragmented… and progression should be signaled through Progression Steps, rather than levels. Progression Steps will be described at five points in the learning continuum, relating broadly to expectations at ages 5, 8, 11, 14 and 16…. Each Progression Step should be viewed as a staging post for the educational development of every child, not a judgement.

What a sensible and coherent recommendation for assessment policy. Thus Wales may very well end up with a coherent, agreed, national framework for both mapping progress and judging attainment at specific ages within a broad understanding of the overall aims of education.

Perhaps it is not surprising that Professor Donaldson’s review drew significantly on the Cambridge Primary Review’s Final Report in coming to its conclusions.

Maybe England’s policy makers should too.

Assessment reform is a CPRT priority. For a round-up of CPR and CPRT evidence on assessment see our Priorities in Action page. This contains links to Wynne Harlen’s recent CPRT research review, relevant blogs, CPRT regional activities, CPR and CPRT evidence to government consultations on assessment, and the many CPR publications on this topic.

Filed under: assessment, Cambridge Primary Review Trust, David Reedy, England, Wales

December 19, 2014 by CPRT

DfE performance descriptors for KS1/2: CPRT’s response

DfE’s consultation on the proposed performance descriptors for KS1 and KS2 closed on 18 December.

Read CPRT’s response

Also:

Read CPRT news item launching discussion of the proposals

Read Warwick Mansell’s CPRT blog critiquing the proposals

Read David Reedy’s CPRT blog on testing and teaching

Read Wynne Harlen’s major new CPRT research review and briefing on assessment

Filed under: Cambridge Primary Review Trust, David Reedy, DfE, performance descriptors, Warwick Mansell, Wynne Harlen

November 28, 2014 by CPRT

Consultation Reminders

There are three national consultations currently under way, and their deadlines are fast approaching. We encourage all CPRT members, supporters and affiliates to participate in these areas of debate. Here are some of the things to consider:

  • Performance descriptors (official deadline – 18 December)
    See recent blogs by David Reedy (on testing) and Warwick Mansell (on the new performance descriptors and assessment).
  • Policy priorities for 2015 general election (CPRT deadline – 19 December)
    See the blog by Robin Alexander (September 25th) about what we want for primary education, CPRT’s priorities, as well as CPR’s priorities for the 2010 general election.
  • House of Commons enquiry into the use of evidence (official deadline – 12 December)
    See Robin Alexander‘s blog from November 21st on the relationship between policy and evidence.

If you would like your ideas to be part of CPRT’s response to these consultations, send them to administrator@cprtrust.org.uk as soon as you can. In relation to the performance descriptors and House of Commons enquiry, we need your thoughts well in advance of the official deadlines.

October 10, 2014 by David Reedy

Teaching or testing: which matters more?

On 26 September the Schools Minister, Nick Gibb, was extremely pleased to announce that the results of the phonics check for six year olds in England had improved considerably: 18 per cent more children reached the ‘expected standard’ in 2014 than in 2012, when the test was introduced. A government spokesman stated that ‘100,000 more children than in 2012 are on track to become excellent readers’.

As primary teachers are aware, the phonics check has become a high stakes test. School results are collated and analysed in depth through RAISEOnline and made available to Ofsted inspectors, who are explicitly told to consider these results as evidence of the effective teaching of early reading in the current framework for Ofsted inspections.

The CPR final report in 2010 pointed out that primary children in England were tested more frequently than in many other countries, including some that rank higher in the international performance league tables. Since then the difference has become even more marked. Further tests have been introduced – the phonics check and the introduction of a grammar strand in the tests for 11 year olds – with the intention of introducing a similar grammar strand for 7 year olds in 2016.

Politicians like Nick Gibb like to claim that tests like these raise standards, yet CPR found that the evidence of a causal relationship between tests and raised standards was at best oblique. It continues to be unconvincing. Scores in the tests rise, certainly. But what high stakes tests do is ‘force teachers, pupils and parents to concentrate their attention on those areas of learning to be tested, too often to the exclusion of much activity of considerable educational importance’ (CPR final report, p. 325).

This is particularly true of the phonics check, with its 20 phonically-regular real words and 20 non-words to be decoded, with 80 per cent accuracy required if it is to be passed. Indeed, as Alice Bradbury points out, there is considerable disquiet that the check was introduced by politicians as a means of forcing teachers to change the way they teach early reading.

In his rather approving analysis of the test results, David Waugh said: ‘I know many teachers who now concentrate a lot of time on teaching children how to read invented words to help them pass the test.’ This has been my experience too.

Thus the test promotes a distortion of reading development. Teachers in primary classrooms spend extra time on teaching children how to read made-up words, diminishing the time for reading real words and teaching the other strategies needed for accurate word reading (whole word recognition of irregular words, the use of context for words such as read, for example), let alone comprehension and the wider experience of different kinds of text.

Increased test scores do not infallibly demonstrate improved standards. Wynne Harlen confirms this in the forthcoming review of research on assessment and testing which CPRT has commissioned as one of its 2014 research updates of evidence cited by CPR (to be published shortly: watch this space). It is therefore hardly surprising that results of the phonics check have improved as teachers become familiar with the demands of the test and adapt their teaching in line with them. Yet here we have a test that undermines the curriculum and is unlikely to give any useful information about children’s reading development; a government which is committed to increasing the number of tests young children are subject to despite evidence of their negative effects; and an opposition that has given no indication that it will change this situation if elected in 2015.

In 2010 the Cambridge Primary Review cited assessment reform as one of its eleven post-election policy priorities for the incoming government. As we approach the 2015 election assessment reform remains, in my view, as urgent a priority as it was in 2010.

David Reedy, formerly Principal Primary Adviser in Barking and Dagenham LA, is a CPRT co-director and General Secretary of UKLA.

  • To find out how to contribute to the debate about primary education policy priorities for the 2015 general election, see Robin Alexander’s blog of 25 September.
