Four years ago I wrote a piece for the UK Huffington Post reflecting on the nonsense being espoused by the then head of OFSTED, Sir Michael Wilshaw. Upon his appointment as Chief Inspector of Schools he dispensed this advice to UK headmasters:
“A good head would never be loved by his or her staff. If anyone says to you that ‘staff morale is at an all-time low’ you know you are doing something right.”
In the same piece I noted that, whilst now living in Australia, I felt it prudent to keep an eye on UK education matters, as more often than not Australia adopts education strategies and policies that originated in the UK – albeit with a significant time lag – standardised testing and a national curriculum, for example.
And now it seems Australia is at it again.
News broke this week that the NSW Board of Studies, Teaching and Educational Standards (BOSTES) will now be known as the NSW Education Standards Authority and will be given even more power to lift school compliance and teacher quality.
In a Sydney Morning Herald article, NSW Education Minister – the usually sensible – Adrian Piccoli said,
“The board ought to make schools nervous around school registration requirements, and it ought to make teachers nervous around teaching standards.”
Why the minister would want to add to the stresses already at play in schools is beyond me. In Australia, for example, school principals are five times more likely than the general population to face threats of violence, and seven times more likely to face physical violence, whilst statistics show that early-career teachers leave the profession at alarming rates. I can only presume he has taken advice that suggests instilling fear into already-stressed individuals and organisations is the best way to lift outcomes. (I’m yet to read any research that suggests this is the case… but hey-ho).
And how will – do you think – the minister and the NSW Education Standards Authority determine whether these nervous teachers have improved? What targets will be set? Go on… I bet you CAN guess…
Take it away Tom…
In that same SMH article, Tom Alegounarias, who will become the part-time chair with a chief executive beneath him in the new structure, cited the highest achieving education jurisdictions globally as a target for NSW.
“It’s about setting our targets against international standards. How do we get to Shanghai, how do we get to Finland?”
Clearly I can’t miss the opportunity to suggest to Tom that the best way to get to either Shanghai or Finland would be by plane – boom-tish! (I’m here all week!)
But I have written before as to why we shouldn’t be overly smitten with China’s approaches to education (seriously… cigarette companies sponsor schools) or uncritically fawn over Finland (for example, youth unemployment is double that of Australia).
Unsurprisingly, Alegounarias also suggested that the reform would be deemed a success if there was “a big bump” in the state’s NAPLAN results in the next few years. This reductionist approach is concerning given that it has actually been suggested that such a “bump” would prove nothing. In case you don’t want to read that article in full, here is a very important section of it… (italics indicate direct quote from the article and I’ve added bold to the bits I think are really important).
Margaret Wu states that the fluctuation in NAPLAN scores can be as much as ± 5.2 score points. This is because of a standard error of measurement of about 2.6 points.
This means there is a 95% confidence that if the same students were to complete the same test again (without new learning between tests) the results would vary by as much as ± 5.2 (2.6 x 2) of the original score. This represents nearly 12% variability for each individual score.
The standard error of measurement depends on the test reliability, meaning the capacity of the test to produce consistent and robust results.
What some researchers say is that the NAPLAN test’s large margin for errors makes the comparison across years inaccurate.
For example, if a student gets 74% in a test and another gets 70%, and the error is about 5, that means that essentially the first mark is 74% plus or minus 5, and the other mark is 70% plus or minus 5.
This means the two different marks can overlap by a fair bit. So it is not really possible to say a score of 74 is that much different to a score of 70.
The implication is that when you take this into account over a whole cohort of people it is difficult to sat (sic) categorically that one set of marks is any different compared with another.
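The overlap argument above is simple enough to check for yourself. Here is a minimal sketch of the arithmetic, assuming (as the quoted article does) a standard error of measurement of about 2.6 score points and a 95% interval of roughly ± 2 × SEM; the function names are my own illustration, not part of any NAPLAN tool:

```python
# Sketch of the measurement-error argument: two scores whose
# 95% confidence intervals overlap cannot be reliably distinguished.
# SEM of ~2.6 points is taken from the Margaret Wu figure quoted above.

def confidence_interval(score, sem=2.6, z=2):
    """Approximate 95% interval: score +/- z * SEM."""
    margin = z * sem
    return (score - margin, score + margin)

def intervals_overlap(a, b):
    """True if two (low, high) intervals overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

student_a = confidence_interval(74)  # roughly (68.8, 79.2)
student_b = confidence_interval(70)  # roughly (64.8, 75.2)

# The intervals overlap, so 74 vs 70 is not a reliable difference.
print(intervals_overlap(student_a, student_b))
```

On these assumptions, a four-point gap sits comfortably inside the measurement noise, which is exactly why a modest "bump" in cohort results proves very little.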
Teachers and principals should not be judged based on NAPLAN findings and, as others have argued, more formative (assessment during learning) rather than summative (assessment at the end of a learning cycle) measures for providing teaching and learning feedback should be explored.
What concerns me most is that this stuff about NAPLAN – as well as the research around teacher wellbeing – isn’t written on a scroll hidden inside a booby-trapped tomb within the grounds of a mythical city that no-one can find… it’s on the inter-web-thingamajig… and I’m pretty sure that most government buildings would have access to that. And before people counter with research that suggests the opposite – that teachers are lucky to have the job they have and could use a little more stress in their lives, and that NAPLAN rocks – I’m only putting forward the links here by way of adding to the conversation.
Too many arguments in education are based around all-or-nothing binaries, and people are quick to jump into one camp or another and attach a hashtag. But I reckon the solutions might be a little more nuanced than that.
But nuance does not a vote winning catch cry make, or a feel good movement create…
To understand more of the nuance, the government could ask teachers what they think (like I did on Twitter) – click the tweet to see the discussion that follows…
But then again, open discussion with the profession might make politicians nervous.
The best way to improve standards in education is to make teachers and schools nervous.
— Dan Haesler (@danhaesler) August 20, 2016