Metadata, Class Sizes and the Need For Some Slow Thinking!

“Worse, in the twenty-first century the massive technological changes that have vastly changed our society have had little effect on our schools; in too many places, the technology is merely being used as the next, best filmstrip, or worse, a better way to quiz and test our students, rather than as a way to open up our classroom windows and doors so that students can learn what they need to, create what they want, and expand the reach of their ideas to almost limitless bounds.”

Building School 2.0: How to Create the Schools We Need by Chris Lehmann and Zac Chase

Every time someone releases figures about how Australia is performing in the education stakes, someone from the Coalition government uses them to justify their stance on education. So when the OECD’s snapshot of 46 countries came out recently, it showed a number of things, but principally it told us that Australia proportionally spends more on education than most developed nations. It also suggested that Australia went backwards on a number of indicators.

Now while going backwards may not necessarily be as disastrous as the media would have you think, or an indication of complete failure, I’d certainly agree that we need to make sure that money going into education is well spent. However, it’s the idea that you can draw valid conclusions just by looking at metadata that I find most frustrating. Metadata may indicate what you need to look at in more detail, but it can’t support definitive conclusions on its own.

If we take class sizes as an example, then even if the metadata suggests that class sizes make no difference to outcomes, we can’t reach that conclusion by looking at the respective performances of the various countries, because we have more than one variable. Even my Year 10 Psychology students know that you can only test one variable at a time. If you want to draw any accurate conclusions, you’d need to take classes from the same school, create a control group of the usual class size, and compare it with classes of significantly fewer and/or significantly more students.

If we really want to determine whether class sizes make no difference, then maybe we should organise an experiment where a school puts one group of students in a class of fifteen at the beginning of their school career and another group of students in a class of thirty and then tests them at the end of each year. While parents may be reluctant to put their child in the class of thirty, surely there’d be enough advocates of the idea that class size makes no difference who’d be happy to place their offspring in such a class.

Comparing the results of students in different countries tells us nothing, because there’ll be a whole range of possible reasons for superior performance on particular tests, including attitudes to education, the number of non-native speakers in the population, and whether the difference in class sizes has changed the way schools structure the learning.

Part of the trouble with looking at metadata is best explained by looking at the work of Daniel Kahneman. As he points out in Thinking, Fast and Slow, humans are often quick to reach a conclusion and then use their rational brains to justify that conclusion, rather than questioning it. So if politicians have been looking to cut education funding, then any suggestion that increased spending hasn’t led to amazing improvements in education is immediately confirmation of their idea that it’s “quality teaching that counts”, rather than a prompt for a more detailed examination of whether there are areas where increased expenditure has improved outcomes.

Until metadata is broken down with some “slow thinking” it tells you nothing. For example, we could increase the average income of everyone in a group and conclude that money doesn’t improve work satisfaction at all, based on a survey telling us that people were even less happy than they were the year before. It’s only when you dig deeper and discover that the average income was increased by giving a massive bonus to two people, while everyone else worked for less, that we can begin to surmise that this may have led to the rise in dissatisfaction.
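The averages trap above can be sketched in a few lines of code. The figures here are purely illustrative, invented for this example, not real salary data:

```python
# Hypothetical incomes (in $k) for a group of ten workers.
incomes_last_year = [60, 60, 60, 60, 60, 60, 60, 60, 60, 60]
# This year, eight people take a pay cut while two receive massive bonuses.
incomes_this_year = [55, 55, 55, 55, 55, 55, 55, 55, 200, 200]

mean_last = sum(incomes_last_year) / len(incomes_last_year)
mean_this = sum(incomes_this_year) / len(incomes_this_year)

print(mean_last)  # 60.0
print(mean_this)  # 84.0 -- the "average income" has risen sharply...

# ...yet most of the group is actually earning less than before.
worse_off = sum(1 for x in incomes_this_year if x < mean_last)
print(worse_off)  # 8 of the 10 workers took a pay cut
```

The headline average rises, but the breakdown shows eight of ten people earning less, which is exactly the kind of detail that “slow thinking” uncovers and that the aggregate figure hides.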

Another point being raised is that Australia’s increase in spending on technology hasn’t made a “significant” difference to literacy and numeracy. I suspect a large part of the reason is that many teachers have only used the technology to do what they’ve always done; reading a text off a screen instead of on the page is no more likely to increase literacy than moving from chalk to whiteboards. But I’ve never thought that the reason schools need to use technology is to improve literacy and numeracy rates. Schools need to use technology because society uses technology. While I’m not advocating that schools’ role is to prepare students for the workforce, we certainly don’t want a situation where a school-leaver walks into a job asking, “What’s Excel?” More than that, however, we need to be building student awareness of both the potential and the drawbacks of technology.

For Australia, nobody should be drawing answers from the OECD results. All metadata really does is help form the questions.