Lies, Damned Lies and GCSE English Results

The exam results period is never short of controversy: each year there seems to be a new issue with marking, supposed ‘soft’ subjects, the BTEC v. GCSE debate… However, this year is different, and not in a good way.

You would have had to have been in a cave (or, in my case, in Egypt) for the past week to avoid all the news stories about the drop in GCSE English results. There have been other articles written about this, an excellent one by @RealGeoffBarton for example, many focusing on the AQA qualification. I am writing this post partly to get my own head around the situation from the perspective of someone teaching the OCR qualification, but also to cut through some of the media misunderstanding of the situation and – let’s be brutally honest here – to prepare myself for the oncoming storm from SMT and parents.

So, there has been a drop in the number of students achieving a grade C or higher in GCSE English – so much we know. A quick trawl of the Internet will show that this drop varies from school to school, from a few percentage points to shocking figures of 16-20%. Following Gove’s repeated grandstanding (minus hard evidence, I hasten to add) about ‘falling standards’ and claims from some parts of the press about how easy the qualifications were to pass (anyone heard the myth that if you write your name correctly you get a grade?), it was not surprising that there was going to be some fallout and that grades were likely to take a hit. However, to borrow from Orwell’s ‘Animal Farm’, all students and exams are equal, but some are more equal than others.

The press have reported with varying levels of accuracy and froth; the Daily Mail, for example, reported “claims that pupils who took the exam in January found it easier to gain C grades than those who sat it in the summer”. Others reported exam boards explaining that the difference was due to the new syllabus. I hope to state the case as I see it and explain, in layman’s terms, why this deflation of grades is unfair.

The New Syllabus

This summer marked the first cohort going through the new GCSE syllabus. The syllabus was introduced in September 2010 and included several changes to the previous exams – the introduction of ‘controlled assessment’, a type of coursework completed under exam conditions, being the most notable. Yes, you would expect a few teething problems as students, examiners and teachers get to grips with the changes, but these should be fairly minor as the core of English remains the same – reading, writing, speaking and listening.

The mark schemes for the new controlled assessments also changed: at the insistence of the now-defunct QCDA (Qualifications and Curriculum Development Agency), descriptions of students’ skills are matched to bands rather than grades. This was a more complex change (something many of us are used to at A-Level), but the description of C grade skills for teachers experienced in marking C grade work, and guidance via exemplars from the board, meant that, although the boundaries were a little fuzzy in places, ultimately the skills and quality needed for a C grade were pretty much the same as under the old syllabus.

Obviously, it would be unfair if, because of an accident of birth, you needed to demonstrate a much higher range of skills in order to get the same C grade as in previous years, wouldn’t it? Surely that is the point of grading, and of exams of this type: if you get a C, you have X range of skills, so colleges and employers can compare applicants fairly. If this is not the case, why do we bother with the exams at all?

The Harder Summer Exam

Much of the press reporting focuses on the assertion that the summer exam was much harder than the January one. This may well be the case. I would not be surprised if the mark schemes were more stringent and that, where a student was previously given the benefit of the doubt, this was no longer the case. This does need to be investigated and, if it turns out that the exam was much harder than in previous sessions, adjustments should be made to ensure consistency and fairness.

As each exam series has a new paper, you expect there to be a little movement in the marks needed to achieve particular grades – we as teachers expect this: a really tricky paper will generally have a lower grade threshold than an easier one to ensure parity. We expect the grade boundary for a paper to shift by a few marks – that is fair. What seems unfair is a sudden shift of 10 marks or more, suggesting that it is harder to achieve a C grade than before. As I said above, if a summer 2012 C grade is not the same as a January 2012, summer 2011 or 2008 grade, then the whole point of assessing students via GCSE exams is flawed and unfair.

The Controlled Assessment

This is the element marked in school, with samples sent to the exam board for moderation. Typically, 60% of the GCSE is made up of internal assessment and 40% by the exam. The exam boards set the tasks, we teach the skills and content, and the students complete the task under exam conditions. We can’t mark drafts or give feedback on the piece until it has been completed. The teacher then marks the assessment piece using the mark scheme provided by the exam board; this is split into bands and marks, not grades. We send these marks to the exam board, along with an estimated grade (based on our professional judgement and previous experience). So far, so good.
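
To make the weighting concrete, here is a minimal sketch of how the two components combine. The 60/40 split is as described above; the UMS maxima are hypothetical round numbers I have chosen for illustration, not OCR’s actual figures.

```python
# Minimal sketch of the weighting described above: 60% controlled
# assessment, 40% written exam. The UMS maxima are hypothetical
# round numbers, not OCR's actual values.

TOTAL_UMS = 200
CA_MAX_UMS = int(TOTAL_UMS * 0.6)    # 120 UMS from controlled assessment
EXAM_MAX_UMS = int(TOTAL_UMS * 0.4)  # 80 UMS from the written exam

def overall_ums(ca_ums: int, exam_ums: int) -> int:
    """Combine the two components into an overall UMS total."""
    assert 0 <= ca_ums <= CA_MAX_UMS and 0 <= exam_ums <= EXAM_MAX_UMS
    return ca_ums + exam_ums
```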

I teach the OCR course for GCSE English Language. I have been to the board’s training sessions, we have moderated the work as a department, we have sent off our sample and (post-results) received confirmation that “no adjustment” was needed – that is, we have applied the exam board mark scheme accurately, matching work to bands and the relevant marks. So no problem there… well, yes! The controlled assessment is a huge part of the qualification and, I think, a key part of the unfairness of this summer’s exam results.

This is where it gets a bit technical. Each exam series, the boards produce a list of grade boundaries for the marks awarded in each module; these ‘raw’ marks are then converted to UMS (Uniform Mark Scale) points, which allows for adjustments to the boundaries – for example, the differences between exam papers I mentioned above. While the written exam boundaries may change between series, the controlled assessment boundaries should not change within the same series (although they may change slightly from year to year), as the tasks are the same, can be completed at any point over the two years and are marked using the same mark scheme. The only difference with the controlled assessment is that the marks could be submitted to the exam board in either January or May (depending on which elements of the course were being counted towards the 40% terminal rule).
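
To see how this conversion behaves, here is a minimal sketch of a piecewise-linear raw-to-UMS conversion of this kind. Every boundary number in it is hypothetical, chosen only to illustrate the mechanism – these are not OCR’s published figures.

```python
# Sketch of raw-mark-to-UMS conversion. The UMS boundary for each grade
# is a fixed percentage of the unit's UMS maximum; the raw boundaries
# are the movable part, set per series. All numbers are hypothetical.

UMS_PERCENTS = (90, 80, 70, 60, 50, 40, 30, 20)  # A* down to G

def raw_to_ums(raw, raw_bounds, raw_max, ums_max):
    """Map a raw mark to UMS by linear interpolation between each
    grade's raw boundary and its fixed UMS boundary."""
    ums_bounds = [ums_max * p // 100 for p in UMS_PERCENTS]
    # Anchor points from (raw_max, ums_max) at the top down to (0, 0).
    points = [(raw_max, ums_max)] + list(zip(raw_bounds, ums_bounds)) + [(0, 0)]
    for (hi_raw, hi_ums), (lo_raw, lo_ums) in zip(points, points[1:]):
        if raw >= lo_raw:
            if hi_raw == lo_raw:
                return hi_ums
            frac = (raw - lo_raw) / (hi_raw - lo_raw)
            return round(lo_ums + frac * (hi_ums - lo_ums))
    return 0
```

The point to hold on to is that `raw_bounds` is the lever: push the raw boundaries up and the same raw mark converts to fewer UMS points, and potentially a lower grade.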

Following me so far? Good.

The controlled assessment tasks and mark schemes have not changed over the two-year course, so there is none of the variation in content that we might see in the external exam. Our moderator reports (and, I expect, many other schools’) state that there is ‘No Adjustment’ to either CA unit, so the board agrees with our marks and our application of the mark scheme. As the qualification is criterion referenced, the grade equivalent of the mark awarded for the CA units (certainly within a single cohort) should not change – if it does, it suggests that C grades from different years and sessions are not actually the same, which is obviously unfair and makes the whole exam system a farce.

Ah, I hear some of you say, the boundaries have been changed to avoid ‘dumbing down’, to increase ‘challenge’, to make the exams ‘harder’. OK, so if that is the case, then surely we will see a similar increase in the boundaries for all grades?

No! The changes to the marks are not equitable: they hit the C/D and lower grades rather than the B-A*. Across the two English Language CA units (A651 and A652), the differences in marks (for OCR) needed to achieve each grade are as follows (encoded as data in the sketch below the list):

  • A* – 1 mark less than January
  • A – the same as January
  • B – 4 marks more
  • C – 8 marks more
  • D – 8 marks more
  • E – 9 marks more
  • F – 9 marks more
  • G – 9 marks more
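
Written down as data, the pattern is stark. A minimal sketch – the shifts are those reported above, while the example January C boundary of 40 raw marks is purely hypothetical:

```python
# Summer-vs-January boundary shifts reported above, per grade, across
# the two CA units combined (negative = fewer raw marks needed).
BOUNDARY_SHIFT = {"A*": -1, "A": 0, "B": +4, "C": +8,
                  "D": +8, "E": +9, "F": +9, "G": +9}

def summer_boundary(grade, january_boundary):
    """Summer raw boundary implied by the reported shift, e.g. a
    hypothetical January C boundary of 40 becomes 48 in the summer."""
    return january_boundary + BOUNDARY_SHIFT[grade]
```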

I suggest that this is a political move: if it were about rigour and ensuring challenge, then surely all grades should have been affected? It implies that those in selective or high-achieving schools (hmm, the children of many of our politicians perhaps?) are less likely to be affected. Perhaps the powers that be don’t wish to upset their privileged friends? Those students who most need the C grade for college, apprenticeships or jobs, who need a good education to improve their chances in life, seem to be the target of this change. It smacks of social engineering at its worst. This is unfair.

The second issue is the change within the same exam series depending on when the CA marks were submitted. The same pieces of work, by the same students, marked by us and given the same raw marks (agreed by the exam board), were worth up to 9 marks less because we submitted in the summer rather than in January. This unfairly penalises our students: in other schools, those with the same or slightly lower raw scores would have been awarded a higher grade simply because they were entered in January. The scaling to UMS makes the gap even bigger, so some students are 10 or 11 UMS points worse off.
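
Putting the two sketches above together shows the penalty in miniature. Again, every specific number here is hypothetical – only the per-grade shifts come from the list above – and the unit (80 raw marks, 100 UMS) is invented for illustration.

```python
# Hypothetical January boundaries for a unit out of 80 raw marks,
# A* down to G, and the summer boundaries implied by the shifts above.
jan_bounds = [72, 64, 54, 40, 30, 24, 18, 12]
sum_bounds = [summer_boundary(g, b) for g, b in
              zip(("A*", "A", "B", "C", "D", "E", "F", "G"), jan_bounds)]
# sum_bounds == [71, 64, 58, 48, 38, 33, 27, 21]

raw = 40  # the same piece of work, the same agreed raw mark
print(raw_to_ums(raw, jan_bounds, 80, 100))  # 60 UMS: a C in January
print(raw_to_ums(raw, sum_bounds, 80, 100))  # 52 UMS: a D in the summer
```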

What Should Be Done?

Firstly, the summer papers for all the English exams should be reviewed to check that those who sat them were not unfairly penalised due to political pressure. Basically, would a response in the summer exam have achieved a higher grade in January? If so, the grades should be amended.

Secondly, the January grade boundaries for the controlled assessment should be applied to the summer entry – ensuring that all students have been treated equally.

Finally, there should be an urgent review of the whole situation, with clear recommendations delivered in plenty of time to avoid a repeat next year. I am not advocating ‘giving’ students grades they don’t deserve, but be fair. If the exams are too easy, make them harder for all. Tell us what each grade requires and give us examples to illustrate it – that will make it clear for everyone. Otherwise we are in the bizarre situation, to use an Olympic analogy, of a high jump final where no one knows how high they need to jump to win or even qualify.

I will be following Ofqual’s investigation and the outcome very closely.

—————————————————————————————————————-

Ofqual’s less than fab report here.
