For almost two years, we have been experimenting with writing assessments and trying to make them more meaningful. We have gone from criterion scales to best fit, to NC objectives, to nothing, to comparative judgement (CJ). And now, here we are at yet another attempt to find the balance between workload and the usefulness of primary writing assessments. We haven’t even started our new idea yet, but I would welcome suggestions and potential pitfalls along the way. Hence, I’ll be sharing our idea in Part 2.
I know that many colleagues are in the midst of trialling CJ. I have written a few blogs about our experience here. I have also been in contact with several schools about setting it up and some of the things you can do with it (I think I’m right in saying we were one of the first to try out anchoring for showing progress). More recently, I have been asked by a few other schools if we would like to do cross-school moderation/assessment judgements with them. We aren’t using online CJ this year, but here are a few reasons why you should consider it:
+ CPD – all staff get to see a whole lot of writing in the school
+ Can provide a ‘flavour’ of what writing is like throughout the school
+ Reliability score – no disputing the numbers or agreements (typically around 0.8)
+ Speed when moderating (this can’t be emphasized enough)
As you can see, we found many positives using online CJ. However, after we had discussed the outcomes. Shared the scripts with both staff and pupils. Explored the rankings. Examined the graphs. Looked at the extremities. Used the feedback from staff to identify next steps in writing for the school. Shared data with governors. You know what we found out? Nothing new. Our children aren’t great at grammar (which we knew). Lots carn’t spel (we knew this). Some children are more creative or have a stronger writing voice (which we knew) and some are a little more r-o-b-o-t-i-c (which we knew). A few can’t help but can’t help but repeat themselves (knew). Some have no ideas for story writing (we knew – they are usually the reluctant readers). Hardly any could make their colons the correct size! 😉
The issue with the way we did it, in hindsight (damn you, hindsight!), was that teachers didn’t know which scripts they were marking. ‘Duh, that’s the point!’ I hear you say. But actually, is it? Isn’t the point of assessment, for teachers, to find out about their class: what they can and can’t do? By setting up CJ the way we did, we removed this crucial aspect. So the staff went away with generic whole-school issues, still blissfully unaware of what their class were capable of without reading through all of the scripts again (which defeats the purpose in the first place, right?). Let me emphasize that these are our findings. I’m sure there are schools out there that have used CJ far more effectively than we did and I am very much watching this space.
It’s interesting to read that NNM (who have been great, by the way) are aware of some of my current feelings towards (writing) assessment:
Of course, you could argue that the traditional teacher assessment also provides pupils with regular useful feedback, whereas comparative judgement is just providing an intermittent grade. We’ll deal with this point more in future posts, but for now, briefly, we’d argue that the feedback pupils get from traditional assessment, often in the form of a written comment taken from the frameworks, is not actually that helpful.
In the cold light of day, it all boils down to (idioms galore): why are we doing this? Really, though. Think about your staff. Do they really need to go through *insert whatever form of writing assessment you currently do* to have a good understanding of what to teach next? Does the impact from said assessment process warrant the time it takes? What about measuring progress? I’ll leave that one for James Pembroke to hammer home, here http://sigplus.blogspot.co.uk/2015/04/the-progress-myth.html and here http://sigplus.blogspot.co.uk/2016/06/the-progress-myth-revisited.html.
Why am I writing this blog? Well, I have an itch in the form of writing assessment at the moment. It’s the annoying itch in the middle of your back where, try as you might, you can’t quite scratch the right spot. I still love CJ and we are incorporating it into our new method of assessing writing, which I will be sharing in Part 2. But perhaps our school has missed a trick with it. And I don’t want to miss it. So if you have read this and are sat at your screen thinking, ‘Duh! What about x, y or z?’ I’d love to hear it. After all, we’re all in the same boat and it would be great to hear others’ views and ideas to develop assessment together, assidere style!