So a few weeks ago, I blogged, somewhat controversially, on the Sharing Standards experience. It ruffled a few feathers, but in a way I'm glad it did. NoMoreMarking has made great strides in bringing to the assessment table something that could really help teachers. Refining the test conditions, the presentation of scripts and the user interface is, I believe, still a work in progress.
I'd like to share our spring data with you. As a reminder, we were very specific about our test conditions:
- Children from Years 1-6 wrote a narrative from a choice of 3 images in the autumn term
- Children from Years 1-5 wrote a diary entry from a choice of 3 stimuli in the spring term (Y6 were involved in Sharing Standards, so were not included)
- No prior teaching, modelling or sharing similar pieces of writing
- No pooling of ideas or any guidance was given
- No redrafting or feedback was given
- Each child completed the writing during the morning session
In the autumn term we had identified narrative as an area of focus for our school. Consequently, this was the task for the autumn CJ. Our assistant head, Carl Badger, then led training on improving narrative writing. We could have used CJ again to compare the before and after to measure the impact of the intervention. This would be one good use of CJ to measure progress in a specific area. But quite frankly, it wasn't necessary. We didn't need an assessment to see which children had or hadn't improved. This was evidenced in books. Most had, some hadn't.
So in spring we decided to give the children a diary entry as we wanted to have a look at non-fiction. Diaries can incorporate elements of narrative so although it isn’t ideal to compare the two, the text types aren’t a world apart.
So here it is, in all its glory!
Thank you Chris @nomoremarking for producing this for me.
Overall we can see ‘progress’. I use the term tentatively as we are only comparing two pieces of writing. We realise this isn't enough to draw firm conclusions about the learning across the school. Nevertheless it's interesting! In case you don't know, the dots represent the extreme values. On the face of it, Year 4 have made the most progress, with Year 2 showing some rather peculiar (and extreme) outcomes for that task. They've obviously been untaught and the Year 2 teacher needs to go!
Because the top two scripts contain a lot of personal information, I cannot share them, but here's the highest script from Y4 (ranked 3rd overall) – a true reflection of how this child writes, completely independently:
Even with this highly ranked script, it’s simple to pick out next steps for this child (language, paragraphing, punctuation range e.g. ‘four-sided’, handwriting, to name a few).
We used anchors from the autumn assessment to compare the spring scripts on a scaled score. This was the highest ranked script overall from the autumn (Y6 pupil) that became one of the anchors.
We used a total of 8 anchors.
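To illustrate what anchoring does, here is a minimal sketch. The idea is that because the anchor scripts are judged in both sessions, their measures on each scale let you fit a linear mapping from the spring scale onto the autumn scale, so any spring script can be expressed as a comparable scaled score. All the numbers and the function names below are invented for illustration; this is not NoMoreMarking's actual algorithm or our school's data.

```python
# Hypothetical sketch of anchor-based scaling. The anchor scripts appear in
# both judging sessions, so their measures on each scale let us fit a linear
# map (least squares) from the spring scale to the autumn scale.

def fit_anchor_scaling(autumn, spring):
    """Least-squares fit of autumn ~ slope * spring + intercept over anchors."""
    n = len(spring)
    mean_s = sum(spring) / n
    mean_a = sum(autumn) / n
    cov = sum((s - mean_s) * (a - mean_a) for s, a in zip(spring, autumn))
    var = sum((s - mean_s) ** 2 for s in spring)
    slope = cov / var
    intercept = mean_a - slope * mean_s
    return slope, intercept

# Made-up measures for 8 anchor scripts in each judging session
autumn_measures = [-2.1, -1.4, -0.6, 0.0, 0.5, 1.1, 1.8, 2.4]
spring_measures = [-1.8, -1.2, -0.5, 0.1, 0.4, 1.0, 1.6, 2.2]

slope, intercept = fit_anchor_scaling(autumn_measures, spring_measures)

def to_autumn_scale(spring_measure):
    """Express any spring script's measure on the autumn scale."""
    return slope * spring_measure + intercept

print(round(to_autumn_scale(0.8), 2))  # prints 0.86 with the made-up data
```

With more anchors the fit becomes more stable, which is why a handful of scripts (we used 8) rather than one or two is sensible.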
We have already begun trialling direct instruction to improve writing standards and will use CJ to compare writing from before and after the intervention. I will be sharing the results of this.
At the end of this year we will choose another piece of writing to compare as part of our ‘testing the waters’ with a view to using CJ to compare portfolios in the next academic year. I’m still not convinced about trying to compare 6 different pieces of writing on screen all at once using CJ. We will either have set forms/genres that appear in the same order on-screen or will judge pieces 1 vs 1 and produce some sort of average.
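If we do go down the 1-vs-1 route, the "some sort of average" step could be as simple as a per-pupil mean of the scaled scores across pieces. A minimal sketch, with invented pupil names and scores (not our data):

```python
# Hedged sketch: each genre judged separately (1 vs 1), then combined into a
# portfolio score as a per-pupil mean of scaled scores. A plain mean ignores
# missing pieces and differing genre difficulty, which is part of why this
# still needs thinking through.
from statistics import mean

scaled_scores = {
    "Pupil A": {"narrative": 512, "diary": 498, "report": 505},
    "Pupil B": {"narrative": 470, "diary": 488},  # one piece missing
}

portfolio = {pupil: round(mean(scores.values()), 1)
             for pupil, scores in scaled_scores.items()}

print(portfolio)  # {'Pupil A': 505.0, 'Pupil B': 479.0}
```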
Food for thought
School leaders and governors need to be able to talk about pupil groups, slow learners etc. as well as how well staff are performing. With this in mind, here are a few questions our school is currently considering:
- As with any assessment, for what purpose are you using CJ?
- Who is this assessment for?
- Does it reveal anything we don’t know already?
- Will doing this assessment have a positive impact on learning (considering workload/time etc.)?
I have always maintained that I like CJ; but as with any assessment system, careful consideration should be given to why it is being used, how much it actually informs us, and the impact it will have on standards.