I’m going to start with my premises. I’ve tried all sorts of rhetorical tricks to fancy up the prose, but I think this needs to be direct and clear. This is, in part, inspired by projects I’m working on (social media OERs for educators, resources for Twitter teaching, seminars on digital literacy, cognitive load concerns for social media novices, and designing for instruction). Fundamentally, though, it’s an evidence-based account of how to innovate technologically in institutions.
It’s also, in part, inspired by a recent #Icollab chat, and my reflections on how the participants – many of them educators who are social media novices – enthusiastically and publicly engaged with a new technology, asking themselves and others how they could engage with and utilise the tools in their own practices.
It’s evidence-based. It’s also, to me, intuitive, and in line with my experiences as a student and as a teacher.
Educators tend to be motivated, and geared to innovate. This is not universal, but it is pronounced.
Students tend not to lead technological innovation in classrooms. (Dahlstrom et al.; Margaryan et al.)
Students tend to look to their teachers to show them how and why they should use technology in their learning. (Dahlstrom et al.; Margaryan et al.)
The primary drivers of a student’s technological innovation are their prior experience of education and their tutors’ use of technology in their classrooms. (Dahlstrom et al.; Margaryan et al.)
Students’ use of technology is often quite conservative, particularly as a consequence of prior educational experience. (Dahlstrom et al.; Margaryan et al.)
If you want to level up your institution’s technology use, you need to level up your staff’s use. (Dahlstrom et al.; Margaryan et al.)
Educators often cite lack of time and training as reasons for not experimenting. (Dahlstrom et al.; Margaryan et al.)
Students whose use of technology in class is unstructured tend to do less well on standardised testing than students who don’t use technology. (Fried)
If you are afraid that your students might be distracted by the devices they use informally in your class, you may well be right. (Fried)
Students don’t lead innovation in education. Teachers do.
Students are influenced in their technology use by several factors. The myth of digital nativeness is not, really, one of them. Students actively look to their tutors to show them how and why to use technology in their learning. Without this structure, their technology use tends to be fairly conservative, and at times undermines learning goals and outcomes. Students are responsive to VLE usage, and this has an effect on their overall technology use. They are highly responsive to their tutors’ technology use, and to the technology use demanded of them by their courses. They value consistent, structured and relevant use of technology, directly applicable at the time of demonstration.
If you want to innovate, focus on developing your teachers and your VLE.
If you want students to use technology well, their teachers will have to show them: by using VLEs well, by demonstrating how they should use technology in classes, and by showing them why in contextually useful and appropriate ways. Educators should not assume their students are digitally fluent and technologically literate when it comes to their own education, but rather assume they will have to take the lead in developing these skills as part and parcel of their courses.
This involves using VLEs well, and using technology themselves in the ways they will require or desire their students to use it.
If your staff don’t structure their students’ use of technology, their students’ use of technology may well be damaging their learning.
To innovate, staff need time, training and support
Institutions need to support their teachers in deploying innovations, and in using the VLE, with time, resources and training. If you want your students to develop digital literacy, digital literacy needs to be fostered amongst staff. They are the primary vectors.
Educators need to be able to negotiate students’ desires for privacy, as well as excellence. Students don’t generally want their tutors to know about their private lives, and worry about finding out inappropriate things about their teachers when using social media. So, have a professional account, and have them use an academic account.
Dahlstrom, E., Walker, J.D. & Dziuban, C. (2013). ECAR Study of Undergraduate Students and Information Technology. Available at: https://net.educause.edu/ir/library/pdf/ERS1302/ERS1302.pdf [Last accessed 15 October 2013].
Fried, C.B. (2008). In-class laptop use and its effect on student learning. Computers & Education, 50, pp. 906–914.
Margaryan, A., Littlejohn, A. & Vojt, G. (2011). Are digital natives a myth or a reality? University students’ use of digital technologies. Computers & Education, 56(2), pp. 429–440.
Argument and evidence
Dahlstrom et al. are major contributors here. Their survey covers over 100,000 undergraduate students across 14 countries. And it finds, clearly, that students want teachers to lead in technology. They want to be shown, in class, as part of their study, how and why they should use technology. They want better VLE usage. Their use of technology in learning, when left to their own devices, is conservative. Margaryan et al. have similar findings. VLE usage by the course, technology requirements of the course, and tutors’ use of technology are the key drivers of technology uptake amongst students. Where courses used technology, there was some evidence of transfer into students’ informal use of technology for learning.
Fried’s findings are that unstructured laptop use in traditional lectures correlates with lower self-reported understanding of lectures, lower self-reported attention, and lower scores on standardised tests, as compared to non-laptop users. Laptop users admit to spending, in general, 17 out of every 75 minutes emailing, IMing, surfing, playing games, and so on. A student’s own laptop use was the largest reported distractor in class – larger than all other reported distractors combined. Unstructured laptop use damages learning.
Margaryan et al. found no evidence for digital nativeness as a primary mechanism for the uptake of technology, arguing instead that institutional and tutor technology use had a larger effect than age on students’ technology uptake, along with previous experience of technology use in their education. Tutors drive technology use and expectation amongst their students. Tutors suggested that the primary barrier to their experimentation with emergent technologies was time.
Both Margaryan et al. and Dahlstrom et al. find students using a small and limited range of fairly conservative technologies when left to their own devices, and requesting a fairly conservative range of pedagogies.
Margaryan et al. find a possible correlation between a subject’s VLE use and its students’ use of technology. Both Dahlstrom et al. and Margaryan et al. find students actively requesting more consistent and better VLE use by educators.
I’ve taken a few different MOOCs lately.
I signed up for the Moodle-run MOOC, on Moodle. A potentially valuable-to-me course of instruction. I dropped it after the first session. I had no immediate use for it, and too many other immediately useful claims on my time. So the trade-off was never going to work for me.
In addition, I found the interface, well, complex. In an already marginal context, investing time and effort in learning the interface for learning was never going to work. Badging had almost, though not quite, zero effect. The fact that badges were given for everything was also problematic. It’s difficult to attribute value to a reward system when clicking with a mouse is rewarded. There’s a lesson here, though I need to contextualise it further; it has a lot to do with getting your participants to value a meaningful rewards system. Rewards for everything feels a little “lollipops and electric shocks” to me. I’m an adult, engaged, already motivated and (as many students are) sophisticated learner. Badges don’t feel a good fit for that profile. Additionally, I went to a Christian Brothers’ school. If Behaviourism worked on me for things of that complexity, I’d be a priest by now. I’m not.
So, I dropped, rapidly, off the radar and became another of the non-completing horde.
The takeaways here are: keep your interface clear, easy to use, and simple. It should be intuitive. A good usability stress test is a must. Stress the utility of the course to emphasise it to your marginal students. Be careful with your rewards system; it has to be meaningful. A good rule of thumb: if it doesn’t fit with Nielsen’s heuristics, it won’t fit with a learning context either.
What did work well on the Moodle MOOC, for me, was the idea of intro videos. The MOOC instructors did their own lecture videos, but in the first one they set aside time for the development team to introduce themselves, from their workspaces. It was, at times, a little awkward and stilted, but good.
Foundations of Virtual Instruction, Coursera.
I signed up for Foundations of Virtual Instruction with Coursera. A standard-style xMOOC: recorded presentations, multiple choice questions, transmission teaching with automated testing. I consciously dropped off the radar here. Again, utility was an issue. It’s US-focused, on K-12 teaching, and quite specific to that context, for no particular reason. The course could very easily avoid that. Knowing about charter schools will not really help me get to grips with the design basics for virtual instruction. The course was too basic. The presentations and MCQ-peppered seminars are unwieldy, awkward, and frankly, wooden.
Attention was tested, during lectures, with MCQs on in-lecture facts and data. MCQs need to be well designed to work. They should rarely be without useful feedback. Here, the feedback was either not present or broken. The delivery and content were wooden, so attention was difficult to maintain. It was difficult to escape the idea that the MCQs were there because the content was unengaging – the designer was actively afraid of disengagement. Additionally, what was being tested was information that I had no interest in, and gave me no insight at all into the foundations of virtual instruction.
All in all, a course I want to revisit, and look at some more, because, frankly, I can learn a lot from something I consider badly designed. Working out why a thing doesn’t work, and how to fix it, is valuable.
Coursera Video Games and Learning.
Coursera’s Video Games and Learning, from the University of Wisconsin–Madison, has me gripped. Initially, at least. The utility to me is fairly clear. And, though not immediate – game-based learning is down the line for me – it’s clear enough for me to want to invest now for that long-term payoff, despite my short-term, and insanely demanding, commitments. That’s a pretty good bargain to wring out of your online student. And it’s worth considering how they managed to get me to commit.
Clear utility – even though it’s a long-term aim and goal – is enough to get me to reshuffle my current, significant, commitments. I know why I should do this course, what I’m going to get out of it, and why that’s good.
The course level works for me. I’m not a total newbie, but I’m sufficiently ignorant that it genuinely enlightens me. The course appears level-tolerant to a degree. It feels like it might have a reasonably broad appeal.
Presentation style has been key too. The presenters come across as enthusiastic, competent, prestigious, and engaged. The lectures are not slideshows read from a card or screen. They are shot so that the viewer has a student’s-eye view, they are of classrooms, and they follow a particular pattern. Shots of the screen/slides in the classroom are short, with only necessary detail, and are jumping-off points for watching polished and prepared individuals speak. The lecturer is the focus. Hands wave, lecturers move around the room, shift their gaze, and speak to the room, not the screen. In short, it’s an engaging presence, alive, and communicative. And that works.
There are some issues with presentation. Some lectures have upbeat muzak in the background, which is bad design. It’s distracting, and we know from cognitive load research that if you distract students with music, they will learn less. Certainly, the lectures with music were the ones I retained less information from. Can the music, and trust your lecturers to engage, evoke emotion where they need to, and be as good as they are.
Other lectures had cute, cool, funny animations. Again, we know that such seductive details, as the Cognitivists call them, detract from your ability to follow what’s being said. Animations that are on task are useful, especially for things like processes. But where they are off task, they detract from the amount of attention you give to what’s being said. The cooler the animation, the more it detracts. Once again: you’ve gone to the effort of finding good, engaging, subject-competent lecturers. Trust them to be engaging without gimmicks. Because they are.
NO MCQs. This was so joy-inducing. I didn’t have to prove I was paying attention by remembering one random fact from a ten-minute lecture. I didn’t have to fail mid-lecture at something.
Lesson learned: establish your presence, make it communicative and engaged, and use your slides, but sparingly, and for very specific purposes – ideally flash them up with a short piece of text that captures the core, then switch back to the on-task speaker who elaborates on them. Be passionate in what you say and how you say it, and set up your shots so that passion can be heard and seen. Think carefully about what you want your students to see, and shoot that, when you want them to see it. Construct your virtual lecture so that you know you have your students’ attention, so that they want to listen. MCQs will not shock your students into remembering. They are no replacement for carefully conceived instructional design, the power of the presenter, and careful attention to motivation, cognitive load, and utility. Talk about the things that matter to your audience, and don’t penalise them for not remembering the things that profoundly don’t matter.
Sebastian Thrun, in today’s Guardian, argues that… well, he argues a lot of things, according to Jemima Kiss.
Here are some of them.
He argues that education should learn a lot from gaming. This is, at best, contentious, and probably, in many respects, demonstrably wrong. The literature, the data, and the evidence – such as it is – seem to come down clearly against aspects of this. You get the same results, or better, from more conventional teaching techniques, and it costs you a whole lot less. Here’s a quote from the US military (on games, not on simulations), assessing 40 years of data: “…the research shows no instructional advantages of games over the other instructional approaches (such as lectures)”.
When a method is failing to beat lectures as a delivery vector for information, it’s in trouble. If we take transmission teaching as a baseline that should be beaten – and there’s a good argument here for that – games don’t beat it. And they cost more.
R.E. Clark has a good take on the meta-studies here: “All of the different reviews currently available have reached almost identical conclusions… people who play serious games often learn how to play the game and some factual knowledge related to the game – but there is no evidence that games teach anyone anything that could be learned some other, less expensive and more effective way”, and “there is no compelling evidence to suggest that serious games lead to greater motivation to learn than other instructional programs”.
The evidence, so far, seems clear here. Games are, at best, no more effective than cheaper alternatives, and are at times worse. Clouding the debate is the unreliability, problematic methodology, or lack of quantitative or qualitative data in the literature. This is an issue in the literature about education generally. And it’s well known that the reliability of a study’s findings is related to its methodological rigour. Simply put, badly designed studies are more likely to confirm the biases and premises of their designers than well designed ones. A bad study is more likely to say what you want it to than a good one. In the article above, out of 4,000 articles, reviewers found only 19 that assessed quantitative or qualitative data. 19. Astounding.
The review authors (Chen and O’Neill, Symposium paper, Training Effectiveness of a Computer Game, April 2005) argue that where positive outcomes were found, they were related not to gaming itself, but to the instructional design – the occasional use of standard and traditional instructional modes. They also note numerous claims for efficacy based on no evidence.
Throw in on top of that the problems with the Behaviourist gamification-of-education ideas: your students will chase the prize, not the process. If they can game the system and short-circuit the reward system, they will. If they were already motivated by the process, and you gamify them, they may well cease to be motivated by the process any longer. Gamifying has limited reproducibility, and the lure of the reward diminishes.
We can bin the game-based-learning-as-panacea project.
Thrun argues that working at our own pace is the main principle we should be aware of. The article doesn’t go into detail here. And it should. But let’s plough ahead anyway, and see what we come up with.
There are a couple of issues here: how we define what your own pace is (and who defines it), and what the relationship between perceived difficulty and motivation is. Let’s look at the latter first. We know that people tend to work hardest when they a) feel confident in the general area of study they are engaged in, b) feel less confident in the specific area they are studying, c) have a sense that what they are doing is not too easy, d) have a sense that what they are doing is not impossible, e) feel the environment they are working in will allow them to achieve, and f) face a challenge level related to the amount of prior knowledge they have.
We should also take into account that a respected figure – an instructor or respected peer – can temporarily raise the bar on what we think is impossible, allowing us to deploy even more effort at even harder tasks, as long as we achieve those tasks in the end.
So, we also need encouraging figures who are respected, and who have carefully calibrated the task at hand to hit that ever narrower sweet spot.
So. We know that people need a degree of challenge, a significant degree of challenge, to deploy the most effort, and that degree of challenge needs to stop short of the impossible. We need to put them in environments that they think will either allow or help them to achieve, or, at least, won’t be obstacles. And we need to know what they know so we can tailor the challenge level. Thrun might mean this. But I don’t feel that he does. The above is subtle, difficult, hard to hit, and requires careful design, feedback and monitoring.
In addition, there’s the first issue – who defines your pace. Another aspect of maximising effort is using a medium that you perceive as difficult (if it’s well designed, it shouldn’t actually be difficult, but the perception is key). When we use media that we perceive as difficult (for some people, books), we deploy more effort. When we use media we perceive as easy (for some people, online learning is perceived as easier), we deploy less effort. And we tend to choose the media we think will be easier. This shifts with the individual, and the culture. People who use audio-visual media a lot may actually perceive book-based learning to be easier. And vice versa. What this means for Thrun’s idea is that, well, the individual may not actually be a good gauge of their own pace. Add in the idea that individuals are often not particularly good at gauging their own learning (individuals’ assessments of their learning are often at odds with what they test at), and you have a much more complex interaction than Thrun suggests.
I’ve just booted up in #learnmoodle, a MOOC for Moodle beginners.
It’s very badgey. Which will be interesting, as I haven’t done a badge-based course yet. I’m not a badge earner, personally, but I’m really curious about the experience.
I’m engaging with the opening seminar, covering the pedagogical background to Moodle: Social Constructivism, learning by making, and the idea that we learn by observing peer activity, constructing meaning ourselves, and creating artefacts.
This, it’s pretty well known, is only partially true. Perhaps. In certain circumstances. It’s the standard Constructivist set of arguments that a) never seem to refer to data and b) always refer to theories, not evidence.
It looks likely that advanced learners may benefit more from Socially Constructivist practice, and that novice and intermediate learners benefit more from instructivist approaches, with direct instruction.
I admire Moodle. But, as with any pedagogy, a broad base of learners is better supported by a broad base of pedagogies. And ones that take into account prior knowledge, and what that means for the student, are key.
The initial seminar had some great points. Meeting the Moodle team in situ is cool. The seminar presenter was late to start, because of a house move, and his discussion of this was really disarming. He was fluent, fluid, and unflustered, and, even though late, his take on telling us about it added to the presentation.
It did, however, confirm in me my hatred of slides. Nothing personal here, but I get so little from slide technology that covers large amounts of material, and so much more from interaction, that, well, I need to work out ways to stitch together the best from both experiences.
Slides do have advantages: structure, and reproducibility (though this is typically a limited advantage in many cases).
Tips for me:
I noted that the screenshots in the slides are not that good for me. I have difficulty with forms and new interfaces, and find them difficult to navigate. So, for me, annotated screenshots, or live screen streams, are good. The seminar used some, and at times didn’t.
The course descriptors are laid out so you can scroll down through them and skip ahead or back based on need, or scoop up the badges that are new to you. So you can snag week four on day one if that’s what you need to do.
There’s a post over here asking how we can create communities that function well in educational contexts.
It’s a huge question, and one I’ve considered a lot.
Here, mainly for me, is a summary of what I think might help.
It’s coloured by Communities of Inquiry, Communities of Practice, the work of Rita Kop, Bandura, Cognitivism (particularly Sweller, Mayer, Kirschner and Clark) and my own semi-sophisticated ideas of what makes us tick.
The instructor provides several key things. Diana Laurillard says that instructors have the greatest influence on the learning landscape in which students will participate. They have a huge shaping impact on the learner’s experience. It may not be ideal, perfect, or the best way to do it, but that’s the world we live in. We have limited resources, and a lot to do with them, so instructors remain central.
Instructors are engines of value
Instructors provide value. When students get feedback from someone they feel has status, expertise, insight and gives a damn, they tend to value the course a lot more, and work harder. So, give feedback. And take care with how you are seen. More on this below.
Instructors as psychological architects
Students impart value to educational experiences that feel as if they are teaching, and teaching something worthwhile. People will work harder, and more, within a community that feels well designed.
And that feedback the instructor gives? Modelling, as an instructor, the type of interaction you want to happen in your community makes it hugely more likely that participants will engage with each other in that way. Give good, insightful, and respectful feedback fairly quickly, and have mods do the same, and watch the number of critical engagements rise. It’s not surefire, cast-iron, failsafe guaranteed, but it does up the number of critical engagements, and their quality.
Utility, worth and why the hell am I doing this.
Adults typically need to know what, how, why, and what the benefit is. If you want adults to engage in a community based learning process, then you have to tell them that they are going to engage in a community based learning process. You have to tell them what that looks like. And you have to tell them how it works. And you have to tell them how it will enable them, empower them, and let them achieve their goals.
We are motivated to do things which give us control over our world. Telling people, explicitly, how the community process they are engaged in will facilitate that expression of control is a huge motivator. Showing them examples that they are involved in is key. Make this an actual seminar, right at the beginning.
Be clear about the utility and worth of the course – this is what it will cover, this is what it will enable you to do – and tie the pedagogy in to that. We’ll be using networked learning – sharing resources over social media, posting blogs and commenting, and making learning visible – as this will help you develop the technical and knowledge-shifting skills, and digital literacies, that will help you take control of your own learning during the course, and afterwards as well. The community engagement will help you find, sift, sort and critically select from the huge volume of information that’s out there, provide you with strategies to pick out what you need to push your own learning, ideas, projects and skills forward, and give you techniques that will help you learn more efficiently.
I once was lost and now am found
It’s easy to get lost. And, once lost, the pace of a course’s progress can seem like a ship’s funnel disappearing ever more rapidly over a distant horizon. Support your novices – give them easy-to-use, carefully designed resources to get to grips with the tools they need. Choose a simple, easy-to-use set of interfaces, and help people who don’t know their way around them.
Provide easy-to-access, centralised experiences that draw everything together: a daily or weekly seminar or seminars that cover everything in reasonable chunks. When resources are everywhere, a central location is key, and helps anchor people to each other, and to the experience. Seminars give that location, and they make it flexible – they can reflect things as they evolve, they can include and highlight individual participants, they have a Q&A option, and people can ask for help. And they add to the sense that this is a designed educational endeavour with feedback and instructor access – key to maintaining motivation in online endeavours.
Good moderators a massively open and enjoyable experience make
Hugely key. Cheerleading, commenting, demonstrating by example, picking up the people who are falling through the net, aggregating and sharing, helping people who are stuck in development hell for their blog post or idea – here’s the resource you need to help with that, talk to x about y for a quick answer, try this tech not that, here’s a howto on MySQL. Whatever. Good mods are key, and visible.
Good mods and instructors will come across as knowledgeable – not perfectly so, but competently so – passionate and engaged. They will ideally have status, or something that marks them out as prestigious. They are encouraging, set demanding but achievable goals, give good, targeted, accurate recognition, and give feedback. And they are hugely effective if they focus on community building with these characteristics in play.
Take merciless advantage of the early optimism
That optimistic lift at the beginning of the course? That’s when you wheel out your big guns. A carefully crafted, well-put-together experience of how community and networked learning can benefit your learners can be a huge fillip to community building. This is where you get them. The bums are on the seats, the eyes are open, and there is expectation in the virtual room. Tell them how to do it, show them how to do it, be enthusiastic, and be very together, planned, and precise. And provide a tangible experience that gives your participants a real sense of how the community engagement you want from them will enable them to empower themselves.
I put together a presentation based on the #whyopen survey responses. It’s a mindmap of the different ideas, with some embedded YouTube videos, and clickable links to additional resources on the various ideas that people came up with. Click on the small icons to the right of the graph nodes, or bubbles, to open the resources in a new window.
Any suggestions are welcome, and if anyone wants editing privileges, just ask.
It’s also incomplete. But if open is one thing, it’s probably unfinished…