Wednesday 14 April 2021

The curious case of long videos: how research evidence, institutional data and experience struggle to trump gut instincts

This post by Martin Compton from UCL was originally published on the ALT blog: https://altc.alt.ac.uk/blog/2021/02/the-curious-case-of-long-videos-how-research-evidence-institutional-data-and-experience-struggle-to-trump-gut-instincts/#gref

The rapid changes to the ways in which most of us are teaching at the moment have led to some recurring debates that are surprisingly persistent despite what I would argue is strong contrary evidence. Fortunately, colleagues are rarely rude, deliberately divisive, dismissive and provocative in the manner of the Times Higher Education piece that appeared during the autumn term of 2020 (Anon, 2020). In that article an anonymous academic berated educational ‘evangelists’ for trying to force new teaching ‘fads’ on academics who apparently burn with resentment at being constantly torn from their research and burdened by inanities like teaching. The colleagues I have in mind, by contrast, are almost universally rational and reasonable and do take teaching seriously. Nevertheless, there are recurrent areas where rationality is usurped by a refusal to accept what should be compelling evidence for good practice. As a consequence, they can sometimes find themselves in a position as blinkered as that of the provocateur in that piece. My primary focus here will be on discussions about the length of videoed ‘lecture’ content.

[Image: young woman sitting outside in the sun looking at her computer screen, with papers at her side. Photo by Windows on Unsplash]

The enforced ‘pivot’ to emergency remote teaching and the subsequent transitions to online teaching in the academic year 20-21 have ranged from significant to total. The efforts and outcomes have been varied, with high-profile complaints centred on a narrative about the financial value of online teaching, complaints that often mask the quietly successful or, in some cases, transformed approaches. The false equivalence often invoked between fees for ‘just Zoom lectures’ and a Netflix subscription is particularly unhelpful. If one thing is clear to me, it is that the vast majority of academic colleagues have gone way above and beyond, and have adapted with students’ best interests at heart. Much of this has been built on the often understated work of learning technology, instructional design and academic development teams. Even so, one of the most persistent disputes centres on the issue of video duration.

Those of us in support roles have built productive relationships; we are widely trusted; we are persuasive; our credibility is rarely challenged. While debate continues around such things as what constitutes effective and sufficient asynchronous content, or the cameras off/cameras on question for live sessions, it is the issue of video length for recorded content that most lacks level-headedness. I think it is fair to argue that the research evidence is compelling in terms of the relationship between engagement, viewing time and video length. Guo et al.’s (2014) data from nearly 7 million MOOC video-watching sessions and Brame’s (2016) connection between video length and cognitive load theory indicate that optimal viewing time is somewhere between 6 and 9 minutes. Institutional data from the lecture capture tool strongly buttresses the research evidence. Additionally, there is the experience of colleagues who have taught online for several years (including me) who can offer compelling experiential cases. Layered on top of this might be evidence from educational videos on YouTube, such as the study by Tackett et al. (2018), which found that the medical education videos on one successful channel averaged just under 7 minutes and focussed on one core concept. Yet that optimal time of 6-9 minutes is often received by academics with horror.

The first and most common counter-argument centres on what I would consider to be a false time equivalence between the conventional expectation of lecture length (and content) and the length of videos that might replace them. When I say chunk content, I am NOT arguing for 6 x 10-minute videos to replace a 1-hour lecture. If a lecture is scheduled for 1 hour on campus then around 50 minutes of that might be usable for logistical and practical reasons. Those 50 minutes are unlikely to be crammed with content. There are likely to be cognitive breaks and opportunities for reinforcement in the form of discussions or questions. There is likely to be time for questions from students, time for connections to prior learning, opportunities to elicit latent knowledge and experience, chances to connect the subject to the assessments. None of this need happen in the videos. In discussions with colleagues, we typically conclude that a 50-minute lecture might contain 2 or perhaps 3 key or threshold concepts. These are the essential or ‘portal’ ideas that open doors to broader understanding and that lectures are an excellent medium for. The essential content can thus be presented in much shorter chunks: say, for the sake of argument, 2 x 10 minutes.

‘Ah!’ some then say, ‘This is all very well but students will feel short-changed!’ There is a huge underlying tension here; much of it feeds the ‘refund the fees’ arguments and it is not helped by clunky contact-time equations. We must not ignore these issues, but neither should we pander to them. If we accept the logic of the paragraph above, then we should challenge this conceptualisation. If the alternative is a rambling 60-minute video, which the statistics show few will watch to the end, made only because that is what students think they have paid for, then we are not working in a research-informed way. To challenge it, we need to share the rationale for our learning designs and tool choices with students; be open with them about our pedagogies; rationalise our approaches. I would argue that we should pre-empt the ‘value for money’ arguments by talking students through the logic expressed above. Then, for added oomph, layer on the additional benefits:

  • Videos can be paused, rewound and rewatched, which also means the pace can be faster and there’s no need for repetition.
  • Videos can increase access and accessibility.
  • The live contact time can be dedicated to i) deeper level, higher order discussion ii) application or analysis of the concepts that are defined in the videos or iii) opportunities for students to test their understanding or to give or receive feedback. 
  • Upload (for lecturers) and download (for students) times are shorter, which reduces the potential for errors on weak connections or where VLE or video-hosting systems have been struggling.
  • It pushes lecturers to revisit content and to reconsider threshold concepts and vital content.

Finally, it is not uncommon to hear colleagues argue that, despite the evidence from ‘other’ disciplines, students in their discipline like videos that are 1 or 2 hours long: perhaps because those students are perceived to be wired differently, perhaps because it seems intuitive to have fewer videos that they can dip in and out of, or perhaps because the students insist that this is their preference.

[Image: a timer indicating 09:59:59, suggesting that the video has slightly overrun the optimal viewing time. Photo by Markus Spiske on Unsplash]

Every time I have a variant of this conversation I am left pondering how it is that, in a centre of discovery and a culture of research, actual experience, research and learning can be so easily dismissed. And this is before we even get into discussions about whether students are well placed to distinguish what works from what they prefer. I suspect that these sorts of conversations will be familiar to anyone working in an academic development or learning technology support capacity.

These sorts of conversations have happened with surprising regularity this year, and so receiving positive responses from colleagues who are prepared to consider the evidence is incredibly rewarding. A senior academic colleague in our Computing department attended one of my online CPD workshops on curating and creating video, where this discussion took place. Persuaded by the arguments presented here, he took the short-video plunge and was sufficiently impressed with the student feedback that he sent me an unsolicited summary of it, in which students said:

  • I found the videos really engaging. Having the videos split into sections made it a lot easier to learn.
  • I liked the way the lecture was split into different videos because it never felt like it was too long or boring.

I continue to struggle to fully understand what makes video length such a common sticking point. Perhaps the evidence challenges intuition? Perhaps it relates to how committed we are to the lecture/seminar structures in HE? Whatever it is, it does make epiphanies and successes like the one described above all the more special.

References

Anon (2020). Pedagogy has nothing to teach us. Times Higher Education. Available: https://www.timeshighereducation.com/opinion/pedagogy-has-nothing-teach-us [accessed 22 January 2021]

Brame, C. J. (2016). Effective educational videos: Principles and guidelines for maximizing student learning from video content. CBE-Life Sciences Education, 15(4), es6.

Guo, P. J., Kim, J., & Rubin, R. (2014). How video production affects student engagement: An empirical study of MOOC videos. In Proceedings of the First ACM Conference on Learning @ Scale (pp. 41-50).

Tackett, S., Slinn, K., Marshall, T., Gaglani, S., Waldman, V., & Desai, R. (2018). Medical education videos for the world: An analysis of viewing patterns for a YouTube channel. Academic Medicine, 93(8), 1150-1156.

[Photo of Martin Compton]

Martin Compton is an Associate Professor working in the Arena Centre for Research-based Education at UCL. Email: martin.compton@ucl.ac.uk. Twitter: @mart_compton
