Sebastian Thrun, in today’s Guardian, argues that… well. He argues a lot of things, according to Jemima Kiss.
Here are some of them.
He argues that education should learn a lot from gaming. This is, at best, contentious, and in many respects probably demonstrably wrong. The literature, the data, and the evidence – such as it is – seem to come down clearly against aspects of this. You get the same results, or better, from more conventional teaching techniques, and it costs a whole lot less. Here’s a quote from the US military (on games, not on simulations), assessing 40 years of data: “…the research shows no instructional advantages of games over the other instructional approaches (such as lectures)”.
When a method fails to beat lectures as a delivery vector for information, it’s in trouble. If we take transmission teaching as a baseline that should be beaten – and there’s a good argument here for that – games don’t beat it. And they cost more.
R.E. Clark has a good take on the meta-studies here: “All of the different reviews currently available have reached almost identical conclusions… people who play serious games often learn how to play the game and some factual knowledge related to the game – but there is no evidence that games teach anyone anything that could be learned some other, less expensive and more effective way.” And: “there is no compelling evidence to suggest that serious games lead to greater motivation to learn than other instructional programs”.
The evidence, so far, seems clear. Games are, at best, no more effective than cheaper alternatives, and at times worse. Clouding the debate is the unreliability, problematic methodology, or absence of quantitative or qualitative data in the literature. This is a recurring issue in the literature about education, and it’s well known that the reliability of a study’s findings is related to its methodological rigour. Simply put, badly designed studies are more likely to confirm the biases and premises of their designers than well-designed ones. A bad study is more likely to say what you want it to than a good one. In the review above, out of 4,000 articles, reviewers found only 19 that assessed quantitative or qualitative data. 19. Astounding.
The review authors (Chen and O’Neill, Symposium paper, Training Effectiveness of a Computer Game, April 2005) argue that where positive outcomes were found, they were related not to gaming itself, but to the instructional design – the occasional use of standard and traditional instructional modes. They also note numerous claims for efficacy based on no evidence.
Throw in, on top of that, the problems with the Behaviorist gamification-of-education ideas: your students will chase the prize, not the process. If they can game the system and short-circuit the reward system, they will. If they were already motivated by the process, and you gamify them, they may well cease to be motivated by the process at all. Gamification has limited reproducibility, and the lure of the reward diminishes.
We can bin the games-based-learning-as-panacea project.
Thrun argues that working at our own pace is the main principle we should be aware of. The article doesn’t go into detail here. And it should. But let’s plough ahead anyway, and see what we come up with.
There are a couple of issues here: how we define what your own pace is (and who defines it), and what the relationship between perceived difficulty and motivation is. Let’s look at the latter first. We know that people tend to work hardest when they a) feel confident in the general area of study they are engaged in, b) feel less confident in the specific area they are studying, c) have a sense that what they are doing is not too easy, d) have a sense that what they are doing is not impossible, e) feel the environment they are working in will allow them to achieve, and f) face a challenge level calibrated to the amount of prior knowledge they have.
We should also take into account that a respected figure – an instructor or a respected peer – can temporarily raise the bar on what we think is impossible, allowing us to deploy even more effort at even harder tasks, as long as we achieve those tasks in the end.
So we also need encouraging figures who are respected, and who have carefully calibrated the task at hand to hit that ever-narrower sweet spot.
So. We know that people need a degree of challenge – a significant degree of challenge – to deploy the most effort, and that degree of challenge needs to stop short of the impossible. We need to put them in environments that they think will either allow or help them to achieve, or at least won’t be obstacles. And we need to know what they know, so we can tailor the challenge level. Thrun might mean this. But I don’t feel that he does. The above is subtle, difficult, hard to hit, and requires careful design, feedback and monitoring.
In addition, there’s the first issue: who defines your pace. Another aspect of maximising effort is using a medium that you perceive as difficult (if it’s well designed, it shouldn’t actually be difficult, but the perception is key). When we use media that we perceive as difficult (for some people, books), we deploy more effort. When we use media we perceive as easy (for some people, online learning is perceived as easier), we deploy less effort. And we tend to choose the media we think will be easier. This shifts with the individual, and with the culture. People who use audio-visual material a lot may actually perceive book-based learning to be easier. And vice versa. What this means for Thrun’s idea is that, well, the individual may not actually be a good gauge of their own pace. Add in the fact that individuals are often not particularly good at gauging their own learning (individuals’ assessments of their learning are often at odds with what they test at), and you have a much more complex interaction than Thrun suggests.
I’ve just booted up #learnmoodle, a MOOC for Moodle beginners.
It’s very badgey. Which will be interesting, as I haven’t done a badge-based course yet. I’m not a badge earner, personally, but I’m really curious about the experience.
I’m engaging with the opening seminar, covering the pedagogical background to Moodle: Social Constructivism, learning by making, and the idea that we learn by observing peer activity, constructing meaning ourselves, and creating artefacts.
This, it’s pretty well known, is only partially true. Perhaps. In certain circumstances. It’s the standard Constructivist set of arguments that a) never seem to refer to data and b) always refer to theories, not evidence.
It looks likely that advanced learners benefit more from Socially Constructivist practice, and that novice and intermediate learners benefit more from instructivist approaches, with direct instruction.
I admire Moodle. But, as with any pedagogy, the broad base of learners is better supported by a broad base of pedagogies. And ones that take into account prior knowledge, and what it means for the student, are key.
The initial seminar had some great points. Meeting the Moodle team in situ is cool. The seminar presenter was late to start because of a house move, and his discussion of this was really disarming. He was fluent, fluid, and unflustered, and, even though late, his way of telling us added to the presentation.
It did, however, confirm my hatred of slides. Nothing personal here, but I get so little from slide decks that cover large amounts of material, and so much more from interaction, that, well, I need to work out ways to stitch together the best of both experiences.
Slides do have advantages: structure, and reproducibility (though the latter is a limited advantage in many cases).
Tips for me:
I noted that the screenshots in the slides don’t work that well for me. I have difficulty with forms and new interfaces, and find them hard to navigate. So, for me, annotated screenshots or live screen-streams are good. The seminar used some, and at times didn’t.
The course descriptors are laid out so you can scan down through them and skip ahead or back based on need, or scoop up the badges that are new to you. So you can snag week four on day one if that’s what you need to do.