Comparative Judgement: The Sharing Standards Experience

Those who read my 2016 blog on CJ will know how interested I am in using CJ to improve our assessment of writing. Like other schools, we took part in the KS2 ‘Sharing Standards’ project. We were really pleased to be in a project where schools across the UK shared their Year 6 writing, and we have high hopes that it will be a vehicle for driving assessment strategies forward.

The difference from our first two internal CJ writing assessments was that we only assessed 30 children. However, each of the 30 children assessed had a portfolio (three pieces of writing, differing in genre and form) to compare against. Whilst I understand the principle behind the thinking (a range of writing needs to be compared, not just one), for us it didn’t work as well. Here’s why…


Apples and Oranges

The first issue was that we weren’t comparing like for like. Some children had written a story, a diary and a report, whilst others had submitted instructions, a story and a biography. Even in the Mars and football example Chris Wheadon often refers to, a report was compared to a story. Now whilst some may argue this shouldn’t matter, I would question how much prior information the ‘Mars’ child was given in order to write the report. Furthermore, when the scripts appeared on screen, they weren’t in any particular order of form or genre, and some were of completely different genres. Both of these made it very difficult to compare.

Rather than:

[Image: cj colour shades 1]

We were doing:

[Image: cj colour shades 2]

Staff had to scroll back and forth to try to make comparisons across the sets of work. I could see the frustration on their faces. It wasn’t as quick as it should have been. There wasn’t the same buzz in the room we had had previously. There was less discussion. It was hard. Overall, it was a completely different experience for staff compared to when we had used CJ before, and not in a good way.


Recommendations

In future, I would suggest that the scripts from each school be organised on the screen so that they appear in a set order:

  • script 1, diary entry
  • script 2, newspaper report
  • script 3, story

This would make judging them through comparison much easier.

Another thought is to hold separate sessions so that genres/forms are ranked individually: session 1, diaries; session 2, newspaper reports; session 3, stories. Each child’s ranks could then be averaged to give a mean rank. I’m not amazing with data and there is probably a huge flaw in doing this, but I’m just putting it out there!
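For what it’s worth, the averaging idea above is simple enough to sketch. This is only an illustration, assuming each session produces a straightforward rank per child; the pupil names and ranks here are invented, not real data:

```python
from statistics import mean

# Rank of each pupil in each genre session (1 = strongest script).
# These figures are made up purely to illustrate the calculation.
session_ranks = {
    "diaries":    {"Pupil A": 1, "Pupil B": 3, "Pupil C": 2},
    "newspapers": {"Pupil A": 2, "Pupil B": 1, "Pupil C": 3},
    "stories":    {"Pupil A": 1, "Pupil B": 2, "Pupil C": 3},
}

def mean_ranks(sessions):
    """Average each pupil's rank across all sessions."""
    pupils = next(iter(sessions.values())).keys()
    return {p: mean(s[p] for s in sessions.values()) for p in pupils}

# Overall order: lowest mean rank first.
overall = sorted(mean_ranks(session_ranks).items(), key=lambda kv: kv[1])
for pupil, avg in overall:
    print(f"{pupil}: mean rank {avg:.2f}")
```

One obvious caveat is that this assumes every child appears in every session; if some children miss a genre, or sessions rank cohorts of different sizes, a plain mean of ranks would mislead.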


Final thought 

I still think CJ is good and it’s still in its (primary writing assessment) infancy. With more consideration it could be great. But, please, let’s not try and compare apples and oranges.