The Headteacher

This Year’s SATs – What Have We Learnt?

July 9, 2019, 9:44 GMT+1
  • Are we expecting this year’s KS2 SATs to tell us more than is reasonable? And has the nature of the test questions subtly shifted? Two Y6 teachers give us their take…

“Hey, secondary teachers – leave our SATs alone!”

SATs came and went this year in much the same way as usual. As a Y6 teacher, I’m not going to pretend I didn’t feel anxious – I did – because doing well matters to the school, and it matters to me. The children, on the other hand, were very relaxed, confident and did the best they could without any fuss. I was as proud of them as I am every other week.

In amongst the usual SATs chitchat on EduTwitter there was one comment that really stood out to me, from a secondary teacher complaining that SATs results were inflated and resulted in children being set unobtainable targets across GCSE subjects. My initial reaction was to bristle at the word ‘inflated’ – one which clearly insinuates a purposeful exaggeration of attainment, something deceitful and underhand; and worse still in this context, something that didn’t serve the children we teach.

Since the tests are done in strictly controlled and monitored circumstances, against unseen, externally marked papers, the charge of inflated results doesn’t really seem fair. Upon further investigation, it seemed the teacher’s comments were based on the fact that many children are given booster sessions in small groups and revise topics in the run-up to the tests, and that as such, their results aren’t a true reflection of what they can achieve without this support.

In one sense, I’m guilty as charged – less with respect to small group work, what with budgets stretched as they are, but I do raise my hands to revising key topics after Easter and doing some preparation work for taking the test. Who wouldn’t? Certainly not a secondary school sending borderline pupils into their GCSEs. Are those results ‘inflated’ too? And what if children exceed their SATs-based targets at GCSE – do those same schools shake their heads at primaries ‘deflating’ their SATs results? I think not.

On reflection, perhaps the real issue isn’t whether the results are inflated, but whether the score a child gets in their SATs is a reliable base from which to predict their history or geography GCSE grade five years hence.

SATs are, at best, a snapshot in time. They can’t possibly predict the kaleidoscope of experience that will happen over the following years and the many factors that we know influence a child’s learning, both in and out of school. They can only accurately represent what happened during a one-hour test window on a single morning in May. In most cases, I find they generally represent the attainment I’ve seen across the year – but they cannot, and must not, be used to predict what will happen in the future. We owe children more than that.

Of course we prepare children for taking their SATs. They’re a high-stakes accountability measure that requires 10- and 11-year-olds to demonstrate what they’ve learnt over a four-year period in a very short window of time. Within that scenario, I’m really not sure what else primaries are expected to do.

During transition meetings I’ve been surprised by the minimal amount of information requested by secondary schools. Perhaps if they tapped into the wealth of non-statistical information we have about children, came in and looked at their books even, then they’d soon see that there’s a much richer basis on which they can formulate their targets. For now, however, that remains a distant dream!

Lucy Starbuck Braidley is a primary school teacher and subject leader for English and PE

“No teacher could have directly prepared children for questions like this”

In the immediate aftermath of the KS2 SATs, my colleagues seemed to think the questions were fair – broadly similar to, if slightly harder than, the previous year’s. Once the marks are totalled and the questions are evened out, they may well be correct. However, these SATs papers saw a change in the nature of the questioning that provided a greater cognitive challenge. To answer such questions correctly in future will require an evolution of the way we teach the children.

Note: at the time of writing it’s not been possible to revisit the papers – the examples cited here are from memory

Reasoning and arithmetic

To me, the biggest shift compared to previous years could be seen in both maths reasoning papers, which introduced questions more akin to Mensa puzzles than anything directly attributable to the maths curriculum.

The most obvious example was a question requiring children to make two cuts along gridlines to form two differently sized squares and a rectangle. While invigilating, I saw numerous children spend around five minutes of their test time puzzling out this one-mark question. I’ve tested the same question on multiple adults since, and few were able to solve it any quicker. Many children – who had expected to answer everything on the paper correctly – couldn’t visualise what the question was asking and subsequently ran out of time at the end of the paper.

This question, and several others like it, required children to be practised in taking time to notice things, playing with questions and taking an indirect approach towards a solution. There’s no way in which a teacher could have directly prepared children for questions like this. If, in future, we can expect to see children facing more SATs problems that are visual, and which surprise them with contexts they’re unfamiliar with, then the teaching they receive will have to change.

A number of colleagues told me that the arithmetic paper went largely as expected. Personally, however, I found that it seemed designed to produce errors, rather than test what the children could do. ‘101 × 1000’ produces an answer resembling binary code (101,000) – let’s see how many children left a ‘1’ or a ‘0’ out of place in that otherwise simple question.

Grammar and reading

The biggest difference with this year’s grammar, punctuation and spelling paper was the number of questions requiring children to write out their answers.

Multiple-choice questions enable children to see the correct answer in front of them and be reminded of the relevant grammatical term; this year they had to write their answers from scratch more often than they have in the past.

This year’s reading paper, meanwhile, was the usual tale of three texts. The first, about the closure of a playground, was easily relatable to the children. They understood it and the questions were sensible. The third was a densely written – and frankly dull – short story. My impression was that 40 minutes into the test, few would have been able to adequately concentrate on this final section, including me! It seems that teaching stamina for reading in future may well be essential.

The second text was a non-fiction piece about bees with a highly structured layout, clearly signposting where children would find the answers. This should have made things easy, yet there were few classic ‘find and retrieve’ questions.

Instead, there were multiple questions requiring a degree of inference or prior knowledge of bees and plants. My EAL children, who usually thrive in the non-fiction section, were flummoxed. My immediate concern was whether they had done well enough in the first section to carry them on to a passing score – time will tell.

My conclusion? I believe very few children will achieve the kind of scores recorded when the 2018 paper was taken, and am therefore preparing the leadership team at my school for this year’s data to be disappointing…

Louis Walker is a primary school teacher based in Essex.