Over the past month Australia’s national broadcaster, the ABC, aired a four-part documentary called Revolution School in which it followed the staff and students of Kambrya College in Victoria throughout the course of 2015. The premise of the doco was that Kambrya was a struggling school – in 2008 its Year 12 results put it in the state’s bottom 10% of schools – and that by applying “cutting edge research developed by Professor John Hattie at the University of Melbourne’s Graduate School of Education, [Kambrya] undergoes a dramatic transformation. Ultimately [Revolution School] is a lesson for all schools in Australia, identifying what they can do to improve standards at this critical time.”
It was set against the backdrop of an Australian education system that is “letting down our kids and the nation” – a premise compounded by the ‘fact’ that most of the things educators and parents think matter in education actually don’t.
John Hattie states from the outset that reducing class size, private education and giving parents choice do not “make a difference to the quality of education.”
The show concluded this week and it’s fair to say it received a mixed response – from 43 people on Twitter at least.
Here, for what it’s worth, are my thoughts on it.
revolution
noun: revolution; plural noun: revolutions
a dramatic and wide-reaching change in conditions, attitudes, or operation.
I can only assume that this was a decision made by TV execs who needed something catchy, and to be fair it certainly caught my attention when I saw it advertised. I was genuinely curious to see what kind of revolution was taking place, but I was left feeling a little underwhelmed.
To be clear – I’m not dismissive of the efforts of the teachers, the consultants and – importantly – the students. Any school, teacher or student who works to improve standards is worthy of acknowledgement. From the outset I commend the staff and students of Kambrya for allowing the cameras in to give an insight into how a dedicated team of teachers (as well as the kids) can address some of the daily challenges faced in a typical school. For some watching, it would have proven insightful.
Rather, I was left underwhelmed because there was very little in the show that could be seen as revolutionary. Whilst it might have documented a wide-reaching change in conditions, attitudes or operations at Kambrya, the claim that Revolution School would “serve as a lesson for all schools in Australia” might be seen as a tad patronising.
It should be noted that Revolution School had at one time been titled Making The Grade, which in my opinion would have been a more appropriate (if less appealing to TV execs) title for the documentary.
Many of the strategies for improvement were a result of the school’s partnership with Melbourne Graduate School of Education (MGSE), with its impressive team of educators and experts. Teachers were able to work with the likes of John Hattie, Lea Waters, Bill Rogers and Di Snowball to refine their approaches in class and the wider community. However, the coverage of these working relationships was superficial at best, limited to soundbites and the obligatory rounds of applause at the end of teacher PL sessions. Bill Rogers, for example, is a legend in the realm of classroom climate and handling behaviour issues, but all we got from his appearance was the idea that you could chart on a whiteboard how on-task the class are over the course of a lesson. Now I appreciate that TV execs might not think a more in-depth exploration of classroom climate would be compelling viewing, but if Revolution School really was going to “serve as a lesson for all schools in Australia” then these are the paths they needed to go down.
Similarly, with regard to Lea Waters’ work on wellbeing, I cringed at the superficial way in which it was presented to the TV audience, with no reference to what Positive Psychology is or the potential significance of character strengths.
However, one soundbite I did enjoy was literacy expert Di Snowball’s reflection on the way in which many schools teach reading using literacy exercise books. In doing this, kids only read small passages in isolation – something that, it was implied, did little to encourage a love of reading. As Di pointed out:
What’s the point of improving reading through these programmes if you aren’t then reading?
John Hattie was a common voice throughout the series, and in Episode 4 introduced the idea of teachers having their lessons transcribed live and their words projected onto screens around the classroom – a programme from MGSE called “The Visible Classroom.” (As an aside, this is on top of Hattie’s Visible Learning approach and Lea Waters’ Visible Wellbeing approach. I’m spotting a theme.) One of the main reasons for doing this is so that the data from the class can be evaluated by MGSE to ascertain how much a teacher talks and what kind of dialogue, with regard to questions and interaction, is happening. Hattie argues that teachers should be talking for around 50% of the time in class, but in reality most teachers spend 80-90% of the time talking. Assuming this premise is correct, the episode went on to show teachers who had in fact reduced their talk in class, but again, tellingly for a series that was to “serve as a lesson for all schools in Australia,” it didn’t explore the changes in practice/planning that teachers undertook to make this happen. I’d imagine for some teachers these changes would have presented significant challenges, and it would have been good to see how these challenges were addressed.
Towards the end of the series the concept of Clinical Teaching (the approach taught at MGSE) was introduced as having “the potential to revolutionise our classrooms” but again I was – along with many teachers I’d expect – left underwhelmed. The core principles of Clinical Teaching as explained by Hattie are:
In other words… teach.
Look, I know I’m being a little flippant here, and I do know that this might seem revolutionary to some… but seriously, I recommend you read pretty much anything by Dylan Wiliam, because formative assessment has been addressing this stuff for yonks.
An outdoor ed camp, a class for disengaged boys, a kid who left under a cloud of drug use, another who left after Year 10, kids getting into strife for fighting and (alleged but ultimately unproven) theft, a school production of Aladdin, a formal ball, a kid who didn’t get into medicine, stressed out G&T kids, a girl arguing with her mother, school captain elections and a deputy who was a little skeptical of consultants… these are the ingredients of a typical day in a typical school, and I’m unsure what lessons they taught us – other than that, apparently, after going on a four-day hike, some kids who hated school and had previously all failed a maths test suddenly aced it on their return… Again, I’m not underestimating the impact of the outdoor education or the commitment of the staff, but come on… the superficial nature of the coverage left me cold.
I was left wondering: are police officers equally underwhelmed after watching an episode of RBT? What about doctors and nurses after watching Trauma: Life in the ER?
But then again RBT & Trauma aren’t broadcast on the premise that the policing or health professions are failing to such an extent that they require such TV shows to “serve as a lesson.”
Now, to be fair, the series finished by highlighting the gains the school has made, which are impressive and for which everyone’s efforts should be applauded. It’s also worth noting that this has been an eight-year journey, of which MGSE were a part for the past 12 months. Clearly they’ve made great gains, but some questions remain, particularly given the premise upon which the series was set.
Thank you for the great post Dan. I belong to the ‘highly underwhelmed’ camp in relation to the ‘Revolution School’ program for many of the reasons you cited.
Thanks for taking the time to read! 🙂 I appreciate it!
You raise a lot of good points Dan. I noticed the last graphic from the series about Y12 results was not correct: the published median score for Kambrya Y12 (out of 50) was 29 in 2015, 30 in 2014, 30 in 2013 and 29 in 2012. So results actually decreased a little in 2015, and the graphic gives the opposite impression.
The passion, enthusiasm, effort, care and patience of the teachers was clear. It seemed to me that these were the qualities that improved the kids, other things were secondary.
Ironically, these are the invisible things that Hattie avoids in his book Visible Learning. He admitted in 2012 that you can’t measure any of these in any reliable way.
Most schools have these sorts of teachers. I, like you, was looking for the innovation. The innovation I saw was not really new – most schools do these sorts of things – but what interested me was what Hattie’s book Visible Learning says about them. For example, he constantly proclaims class size has little/no impact on student learning, yet the difficult boys’ class had its size reduced to 18. The support staff (welfare, psychologist, etc.) serve to effectively reduce class size further. I’ve read the 3 studies Hattie used to make this claim and he has misrepresented them – I have the details in my blog – http://visablelearning.blogspot.com.au
I’m really concerned about Hattie’s use of the effect size statistic, and even more concerned about his interpretation that an educational influence with an effect size (d) < 0.40 indicates the kids are going backward in achievement. According to Hattie's book, many of the things I saw on the program have effect sizes < 0.40:
Maths teacher told to make videos – audio/video d = 0.22
Daraby boys class reduced to 18 kids – class size d = 0.21
Daraby boys taken on outdoor-ed camp – co/team teaching d = 0.19, out of school d = 0.09, extra-curricular d = 0.17
Psychologist brought in to teach wellbeing, respect – values/moral d = 0.24
Full-time welfare worker – welfare d = -0.12 (NEGATIVE!)
Use of restorative practice – decreasing disruptive behaviour d = 0.34
Reading programs – d = 0.36
Debutante ball – out of school d = 0.09, extra-curricular d = 0.17
Home visitation – d = 0.29
Mentoring – d = 0.15
As Professor Dylan Wiliam says, "… the effect sizes proposed by Hattie are, at least in the context of schooling, just plain wrong."
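For readers following the numbers in this thread: the effect size being quoted, Cohen's d, is just the difference between two group means divided by a pooled standard deviation. A minimal sketch – the test scores below are invented purely for illustration:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    # Sample variances (Bessel-corrected), then the pooled SD.
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_b - mean_a) / pooled_sd

# Hypothetical scores: a control group and an intervention group.
control = [1, 2, 3, 4, 5]
intervention = [2, 3, 4, 5, 6]
print(round(cohens_d(control, intervention), 3))  # ≈ 0.632
```

The statistic itself is simple; as the comments above and below argue, the hard part is interpreting what a given d means in a school context.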
George, thank you for taking the time to comment, and your analysis certainly adds to the post. I hadn’t looked at the effect sizes in light of this post, but I may well take another look now.
Thanks again for taking the time to read and comment, it’s much appreciated!
Dan
G’day Dan, the cost ($50-60,000) of Hattie & Co that you investigated is very relevant. The software on the program seems very similar to a program used in NZ called asTTle. A German professor, Ewald Terhart, in his 2011 review ‘Has Hattie found the Holy Grail’, stated: “A part of the criticism on Hattie condemns his close links to the New Zealand Government and is suspicious of his own economic interests in the spread of his assessment and training programme (asTTle). Similarly, he is accused of advertising elements of performance-related pay of teachers and he is being criticized for the use of asTTle as the administrative tool for scaling teacher performance. His neglect of social backgrounds, inequality, racism, etc., and issues of school structure is also held against him” (p434).
It appears asTTle has been met with a lot of resistance in NZ. It looks like it is now being released here in Australia, and the ABC has done a nice free advertisement for it.
The conflict of interest seems significant.
Thanks for putting this all down so articulately, Dan! In the same evening I watched Most Likely to Succeed and then the first episode of Revolution School. I wouldn’t recommend viewing the two productions in such close succession! I was really disappointed when I watched Episode One and was somewhat frustrated at the title of the show. As you so eloquently point out, I can appreciate the impressive gains made by Kambrya, but I wouldn’t call their strategies revolutionary. And it is frustrating because I come into contact with so many schools who ARE doing revolutionary things. Many use different, future-focussed measures of success. I wish their stories were being celebrated so widely! I can only assume the TV producers gave the show the name ‘Revolution School’ because it has been turned around (revolution being an instance of revolving). In which case, I guess the title is widely misunderstood, possibly quite misleading, but technically correct.
Thanks Maddie for the comment! Yes, whilst I haven’t seen MLTS yet, I’ve heard it makes for compelling viewing. Perhaps you’re right as well regarding the term “revolution.” I still reckon Making the Grade would have been a better one 🙂
Thanks again for reading and the comment 🙂
Dan
As a teacher I’m very thankful for Dan pointing out the questions that are being ignored.
Thanks MPR for reading and engaging! 🙂
Dan
I think #1 of the ‘questions that remain’ was what I was left contemplating. As much as it remains a question, I feel our teachers are our best asset in schools, and time and money is best spent in supporting them professionally, to in turn best support the progress of our students (all of this led by an intelligent ‘leader’). Am I on the right track? We are progressing with the Visible Learning pedagogy for our newly established college, a good fit for us to establish a common learning language P-12. A practice I did take away from the show was having a fruit bowl in the referral room – great idea :). I’m going to have a look now at George’s blog, to see what his view of Visible Learning is.
Hi Sam, thanks for the comment. I’d be interested in what you make of the other reading about Visible Learning… Dylan Wiliam has added some interesting views in the comment below this one, IMHO.
Cheers,
Dan
There are a number of important issues with respect to quoted effect sizes in education.
1. When someone quotes an effect size, is this in addition to “business as usual” or just the total growth of the specified time period (it is worth noting that Hattie uses both meanings in “Visible Learning”)? If a new way of teaching mathematics increases student achievement by 0.4 standard deviations in Year 4, then this is not particularly promising, because 0.4 standard deviations is the average progress made with “business as usual”. But if this 0.4 effect size is in addition to business as usual, then this intervention has just doubled the speed of learning. Indeed, an effect size of 0.2 would be impressive, since it would amount to a 50% increase in the rate of learning. In our work on formative assessment with teachers in England, my colleagues and I found an effect size of 0.32. If this had been the total growth in achievement over the year, then this would have been disappointing, but this figure was how much more progress the experimental groups made over the year compared with the control groups.
2. As an average figure for the annual progress made by students in school, Hattie’s estimate of 0.4 is reasonable. But it is important to bear in mind that the average progress depends on the age. For Kindergarten, one year’s progress is around 1.5 standard deviations, while in high school it is around 0.3. An effect size of 0.1 would be rather unimpressive if achieved with Kindergarten students, but a huge effect if achieved with high-school students.
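The age point above can be made concrete with a little arithmetic. Using the illustrative baselines Dylan gives (roughly 1.5 SD of annual growth in Kindergarten, roughly 0.3 SD in high school), the same effect size buys very different fractions of a typical year's progress – a hypothetical sketch, with the baseline figures taken only from the comment above:

```python
# Illustrative annual-growth baselines in SD units, from the comment above;
# real benchmark figures vary by subject, cohort and test.
ANNUAL_GROWTH = {"kindergarten": 1.5, "high school": 0.3}

def years_of_progress(effect_size, stage):
    """Express an effect size as a fraction of one year's typical progress."""
    return effect_size / ANNUAL_GROWTH[stage]

# The same d = 0.1 intervention, at two different ages:
print(round(years_of_progress(0.1, "kindergarten"), 2))  # 0.07 of a year
print(round(years_of_progress(0.1, "high school"), 2))   # 0.33 of a year
```

Which is exactly the point: a single "hinge" value like 0.40 can't describe both ends of that range.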
3. Even small effect sizes can be important if relatively inexpensive to secure. A study by Benjamin York and Susanna Loeb looked at the value of sending text messages to parents of pre-schoolers in New York to remind the parents to undertake literacy activities with their children. The effect sizes were only around 0.2 to 0.3, which is a rather small effect size with such young children, but since the cost of the programme was only around $10 per child, it is hugely cost-effective.
4. Effect sizes depend on how sensitive the assessments used to measure student progress are to the things that teachers are changing. In one study, Maria Ruiz-Primo and Min Li found that when the effects of feedback in STEM subjects were measured with tests that measured what the students had actually been learning, the effect sizes were five times greater than when achievement was measured with standardized tests. Which of these is the “correct” figure is obviously a matter of debate. The important point is that assessments differ in their sensitivity to the effects of teaching, which therefore affects effect sizes.
5. By one estimate, you are 12 times as likely to get a research paper published if the findings are statistically significant as if they are not. The problem is that most educational experiments are so small that even when the effects being researched are real, there is only around a 40% chance that the experiment will yield a statistically significant finding (this is what statisticians call the statistical power of the experiment). The ones that do get published, therefore, are those where, by chance, everything went well. The published studies are therefore a biased subset of the studies actually undertaken (this is called “the file drawer problem” in research, because that’s where the unpublished studies languish). It is possible, when doing a meta-analysis, to find out whether there is a file drawer problem by using what are called “funnel plots” (which allow researchers to quickly see whether there is a tendency for the smaller studies to produce larger effect sizes), and in this way it is possible to correct for publication bias. Unfortunately, most of the published meta-analyses in education do not do this.
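The file drawer problem in point 5 is easy to see in a simulation. The sketch below (all numbers illustrative, not drawn from any real study) runs many small two-group trials with a true effect of d = 0.4 and 35 students per group, treats a trial as "published" only if it clears a conventional significance threshold, and shows that the published trials are both a minority of those run (power well below 100%) and, on average, overstate the true effect:

```python
import math
import random

random.seed(42)

TRUE_D = 0.4    # true effect, in SD units
N = 35          # students per group
TRIALS = 10_000

def observed_d(a, b):
    """Standardised mean difference for two equal-sized samples."""
    ma, mb = sum(a) / N, sum(b) / N
    va = sum((x - ma) ** 2 for x in a) / (N - 1)
    vb = sum((x - mb) ** 2 for x in b) / (N - 1)
    return (mb - ma) / math.sqrt((va + vb) / 2)

# Call a trial "significant" (normal approximation) if the observed d
# exceeds 1.96 standard errors of d, SE ≈ sqrt(2/N).
threshold = 1.96 * math.sqrt(2 / N)

published = []
for _ in range(TRIALS):
    control = [random.gauss(0, 1) for _ in range(N)]
    treated = [random.gauss(TRUE_D, 1) for _ in range(N)]
    d = observed_d(control, treated)
    if abs(d) > threshold:
        published.append(d)

power = len(published) / TRIALS
mean_published = sum(published) / len(published)
print(f"power ≈ {power:.2f}")                     # well under 1.0
print(f"mean published d ≈ {mean_published:.2f}")  # inflated above 0.4
```

Averaging only the "published" trials, as a naive meta-analysis would, recovers a d noticeably larger than the true 0.4 – which is exactly the bias that funnel plots are designed to detect.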
6. Many published studies of educational interventions are conducted in laboratories, or in ideal settings, rather than in real classrooms. Averaging effect sizes across laboratory and classroom settings makes the results difficult to interpret.
It would be great if things were simple, but, in education research like in the rest of life, they are not. As H.L. Mencken once said, “There is always a well-known solution to every human problem—neat, plausible, and wrong.” For me, the important takeaway is that teachers and school leaders need to be critical consumers of research. Context matters.
Dylan, thank you very much for taking the time to read the post and the comments. Your contribution adds another level of depth, for which I’m grateful. On a personal note, I do enjoy your writing around formative assessment and share it whenever I have the opportunity!
Thanks again,
Dan
Thanks Dylan, the difference in teacher tests and standardised tests is interesting; I did not realise there was such a disparity. Relating cost to the intervention is also useful. Regarding average effect sizes, I still think using an average can be misleading. After viewing the table of US benchmark effect sizes for each year level – http://visablelearning.blogspot.com.au/p/a-years-progress.html (which seems to be the most reliable measure of students’ usual progression from one year to the next?) – the very small effect size for Y11-12 maths of 0.01 stuck out. Given that Hattie often uses university students and even postgraduates – e.g. doctors and nurses in the Problem Based Learning studies – how relevant are these studies to high school or primary school kids?
Thanks Dan. As someone who was also a little underwhelmed by the whole thing, you summed up my thoughts eloquently. The very questions you identify should be questions that all educators have as a result of watching the series. I think that Hattie’s all-encompassing impact on education is something to be very cautious of, despite strong support for aspects such as learning intentions, etc., especially given the way he uses research and particularly data. Anyone who knows about data and effect size understands that his list is simplistic. Anyone who teaches knows about the complexity of disaggregating the effects of different strategies and the difficulty of knowing exactly what the impact of particular strategies such as class size is. Despite this, however, I have seen very few deep analyses of his conclusions. It seems many in education are ready for a messiah, and Hattie has deigned himself to be that.
Hi Debbie, thanks for your comment. There is a bit out there about challenging Visible Learning…. there are a few links in my post https://danhaesler.com/2014/11/17/is-john-talking-through-his-hattie/ from a couple of years ago.
Thanks again!
Dan
I also found it frustrating that Hattie’s method of reflection / development / evaluation of teaching was presented as “the only show in town”. Those in NSW know of the Quality Teaching Framework and its usage in relation to observation of teaching as a form of professional development (incidentally, rounds using this framework mean PD for all staff involved – not just the staff member being observed).
And I have no doubt there are other variations on this across Australia. And many are not “for sale”.
George: the points you make about effect sizes are extensively discussed in Chapter 3 of my most recent book, “Leadership for Teacher Learning”, published in Australia by Hawker-Brownlow.
Thanks Dylan, I look forward to reading the book.