Wednesday, 16 June 2021

The BLE wins the 2021 Roger Mills Prize for Innovation in Learning and Teaching

I am delighted to announce that the BLE has won this year's Roger Mills Prize for Innovation in Learning and Teaching for our approach to designing, developing and sharing our digital skills awareness courses for incoming students and teaching staff.

Ensuring students have appropriate digital skills upon entering the world of higher education is essential for their academic success, especially in these changing times of flexible, distance and blended delivery. Similarly, teaching staff have a responsibility to meet the level of digital proficiency now expected by their institutions. Our response to this need was the creation of two short online courses: the Digital Skills Awareness Course for new and prospective students (DSAC) and the Digital Skills Awareness Course for Teaching Staff (DSAC-T).

Our two courses were designed in Moodle, trialled and adopted by BLE partners, and are now openly available to the UK higher education sector.

These courses are “blueprints”, which can be repurposed and adapted to suit the needs of the institution that adopts them. For example, the DSAC has been adapted by the University of London’s PGCert Learning and Teaching in Higher Education programme and recommended as a supplementary module. 

The approach we have taken offers benefits to over-stretched digital education and staff development teams across the BLE partnership and beyond.

More information at www.ble.ac.uk/digitalawareness

The judging panel provided the following feedback:

The promotion and support of digital skills for academics and university professionals as well as students is central at this pivotal point in the move to blended, technology-supported learning, and what was remarkable about this project were the ways in which transferability and impact were built into its very identity, through its construction on the principles of Open Educational Resources. The success of this approach and its value to the sector as a whole is validated by the 49 OER licences already issued by the Bloomsbury Learning Exchange to other institutions to use and adapt the Digital Skills Awareness Course for their own purposes.


Friday, 21 May 2021

Open book exams: open season for cheaters or a better form of assessment?

This post by Gwyneth Hughes at UCL was originally posted here: https://blogs.ucl.ac.uk/ioe/2021/05/19/open-book-exams-open-season-for-cheaters-or-a-better-form-of-assessment/

photo of exam hall

The start of the pandemic in March 2020 forced universities to pivot rapidly from well-entrenched invigilated, timed, unseen exams to online tests mostly taken at home.

Software can monitor students taking exams in their own homes, using video monitoring or other proctoring methods, or by locking down the examinee's computer. But by far the most straightforward option is open book exams with extended timescales. This is mostly what happened at the University of London. But does this mean better assessment or more cheating?

In an open-book exam, students can search online and access books, notes and other resources, whether online or in print. If the exam writing window remains similar to that of previous exams, with perhaps some extra time for uploading answers, then there is not much opportunity to look up answers and students gain no real advantage. However, if students are given longer – for instance, a 24-hour gap between the release of the exam questions and the submission of answers – then they can do some research for their answers.

Students like online exams – but not just because they can cheat more easily

A study from the Centre for Distance Education at the University of London has shown that students taking online exams for the University of London's distance learning programmes preferred them: they could sit the exam from the comfort of their own homes, without the pressure of travelling to an examination centre, and with a bit more freedom from relying on memory alone. Cynics might have it that open book exams give students carte blanche to plagiarise, copy and collude with other students, and no wonder they liked the experience. But cheating is not inevitable. The study provides evidence that some programme teams changed their exam questions for the online shift to ensure that students could not copy and paste answers.

If questions requiring memorisation were replaced with more probing ones that require the application of knowledge, then cheating would become much more difficult. It is also possible that these better-designed exams will encourage students to learn more deeply in future.

Some markers also noticed that giving students more time to write their answers meant they could make better use of references and correct errors. Again, this indicates that students could be advantaged by the online exams.

Rethinking exam design

The big worry about exams, and indeed other forms of assessment, is student cheating – but that does not mean that heavy-handed electronic monitoring, restrictions on the use of resources or plagiarism-detection software are the answer. The pivot to online testing has encouraged exam designers to think more about how exams support student learning. Even if there is a post-pandemic return to in-person attendance next year, many programmes at the University of London will continue with online open book exams and/or move to coursework assessment, which is the ultimate open book experience.

Here’s one tough question: Will more discussion of how to prevent plagiarism and cheating through improving assessment design follow?

Friday, 14 May 2021

Dealing with dissonance: now is the time for open, critical and mediated reflection on remote teaching and learning

This post by Martin Compton from UCL was originally posted on the UCL Reflect blog here: https://reflect.ucl.ac.uk/mcarena/2021/04/16/dealing-with-dissonance-now-is-the-time-for-open-critical-and-mediated-reflection-on-remote-teaching-and-learning/

The necessary, pandemic-enforced modifications that teachers and lecturers have made over the last year have often been nothing short of miraculous. Most frustrating, perhaps, is where effort has been huge but responses (either engagement levels or evaluation responses) have been less than hoped for. I have heard colleagues desperate for a return to 'normal' and others very keen to hold on to and develop approaches they have honed or learned from scratch. Whatever teaching, learning and assessment look like next year, there will no doubt be degrees of 'blendedness', hybridity and necessary flexibility. Whatever our disciplines, it makes sense to take a moment to reflect on the experiences of the year and to consider what worked, what didn't, what we WANT to keep, what we HAVE to keep, and what that means for our workloads and for our own and our students' mental health (I originally typed 'wellbeing' but am starting to feel as though this word is being stripped of tangible meaning and weight). Anyway: so far, so obvious.

woman in glasses looking at screen full of computer code

image: geralt via pixabay

One of the things that has become clear over my years working in teacher and lecturer development is that 'reflection' as a process is not necessarily something that happens naturally for us all. And even where reflection is happening, we can find ourselves (for SO many reasons) not modifying our behaviours and approaches. If we are going to address the issues in the paragraph above properly, in context, it may be that we need time (!) and perhaps some form of mediated dialogue to push reflection. As part of that, we need to open ourselves to candid and perhaps even difficult challenges to our thinking. One way we can do this is to examine how far we as individuals (or collectively as members of a department, faculty, institution or disciplinary 'tribe') may be subject to cognitive dissonance and immobile thinking.

Without immersing ourselves too deeply in the psychology, I am leaning on the language of 'cognitive dissonance' (Festinger, 1957 – a good summary for non-psychologists is available here) and the 'fixed and growth mindset' conceptualisation of Carol Dweck (core ideas summarised by Prof Dweck in a 9-minute video here). Cognitive dissonance is the anxiety caused by our own behaviours when they challenge what we believe we know (and for teachers, a belief in the self and in what constitutes effective teaching is important). 'Forced compliance behaviour' is the most useful way to think about this in the Covid context, because the vast majority of lecturers and teachers have had to act in ways that conflict with their beliefs and preconceptions about what equates to good teaching and about what shapes us, what defines us, as teachers. Pre-pandemic, 'digital education' could be ignored and the research dismissed where there was no perceived need or obligation to engage. Clumsy edicts without clear rationalisation or evidence, and behaviouristic award systems for degrees of compliance, have often led to cynical compliance or overt resistance. Witness the frustratingly frequent phenomena of VLE 'scrolls of doom' and the all too oft-repeated references to 'death by PowerPoint'.

hammer banging in a bolt while a spanner tackles a nail

image: stevepb via pixabay

When Covid hit and the 'emergency response' morphed into something much longer, there was an inevitable and essential upskilling and mode-switching, but these pre-existing tensions framed persistent deficit narratives. When change is enforced, those most resistant (and most fearful) are the most likely to be subject to confirmation bias; this is completely normal and understandable, but it is anxiety-inducing and ultimately a barrier. Dweck's notions of fixed and growth mindsets are useful ways of framing this, especially if 'mindset' is expanded to include departmental or institutional cultures.

Like many, I championed compassion as a driver and argued for it to be at the forefront of our pedagogy, both in the way we interacted with and supported students and in centring care in our expectations and sensitivities around how we worked with colleagues. I don't want colleagues to be anxious! According to Festinger, to resolve the anxiety and stagnation, something needs to change: beliefs and/or actions. The pandemic forced us to change our actions. But to what extent have we fully embraced the wisdom of the research, the learning technologists and the instructional designers, rather than ploughing on with what is most familiar (or a replica of it)? And in terms of beliefs, how much time have we built in for mediated reflection that can reframe negative experiences in our actions? Do we understand why some activities are more likely to work than others? Are our individual and collective minds open to the difficult questions of what scholarship and experts say, weighed against our 'intuitions'?

I have witnessed how the two big aspects of HE pedagogic conservatism – lectures and examinations – have been challenged. In some ways their persistence as defaults, in the context of a HUGE library of pedagogic scholarship, can be framed as an example of collective cognitive dissonance. I felt that those who missed or craved the lecture most were often those who suffered most; not because of ability or kit differentials but because of how wrapped up in their identities the lecture is: teaching as performance. It is fascinating to witness how quickly debates about the future of lectures, for example, have become something of a false dichotomy, framed as: 'your way is just fusty, boring lectures' versus 'you want to throw the brilliant lecture baby out with the pedagogic bathwater!' This lack of nuance and doubling down may be seen as a reflection of the populist zeitgeist, but are we not supposed to be centres of research, debate and critical engagement?! We need time, mediation and space for openness to explore discipline-specific understandings, needs and possibilities.

large auditorium mostly full of people waiting for a lecture

image: alieino via pixabay

We can't get everyone to change, and we shouldn't force people to change. But in the clamour to get back to normal we are in danger of conflating the affordances of digital education more broadly with the experiences of 2020-21. What I'm saying here is as much about cultures and leadership as it is about individual examples of 'cognitive dissonance'. Whilst this IS a challenge to colleagues to think critically about their work and their thinking, it is not meant to be read as a critique of that work. So people in my sort of role have a delicate job: I do NOT want to be seen to accuse anyone of closed-mindedness, entrenched thinking or suffering from confirmation bias... but that shouldn't stop me from trying to push challenging conversations. How do I engage colleagues without the arms folding, though?

In my view, those of us at the centre should be provoking and mediating discussions and debate around these issues, prepared to challenge intuitive discourses. Whilst I do not have the gift of time to offer, this is one of my goals this coming year, and I want to take as many people as I can with me. I believe that cognitive dissonance is a useful vehicle for considering how powerful our mindsets are; opening this particular reflective doorway may be one way to start reconciling what has been a manic year.

Dweck, C. S. (2006). Mindset: How You Can Fulfil Your Potential. New York: Random House.

Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press.

Martin Compton is an Associate Professor working in the Arena Centre for Research-based Education at UCL. Email: martin.compton@ucl.ac.uk; Twitter: @mart_compton

Wednesday, 14 April 2021

The curious case of long videos: how research evidence, institutional data and experience struggle to trump gut instincts

This post by Martin Compton from UCL was originally posted on the ALT blog here: https://altc.alt.ac.uk/blog/2021/02/the-curious-case-of-long-videos-how-research-evidence-institutional-data-and-experience-struggle-to-trump-gut-instincts/#gref

The rapid changes to the ways in which most of us are teaching at the moment have led to some recurring debates that are surprisingly persistent despite what I would argue is strong contrary evidence. Fortunately, colleagues are rarely rude, deliberately divisive, dismissive and provocative in the way of the Times Higher Education piece that appeared during the autumn term of 2020 (Anon, 2020). In that article an anonymous academic berated educational 'evangelists' for trying to force new teaching 'fads' on academics, who apparently burn with resentment at being constantly torn from their research and burdened by inanities like teaching. The colleagues I have in mind, by contrast, are almost universally rational and reasonable, and do take teaching seriously. Nevertheless, there are recurrent areas where rationality is usurped by a refusal to accept what should be compelling evidence of good practice. As a consequence, colleagues can sometimes find themselves in a position as blinkered as that of the provocateur in the THE piece. My primary focus here will be on discussions about the length of videoed 'lecture' content.

Young woman sitting outside in the sun looking at her computer screen, with papers at her side.
Photo by Windows on Unsplash

The enforced 'pivot' to emergency remote teaching and the subsequent transitions to online teaching in the academic year 2020-21 have ranged from significant to total. The efforts and outcomes have been varied, with high-profile complaints centring on a narrative about the financial value of online teaching that often masks the quietly successful or, in some cases, transformed approaches. The false equivalence often invoked between fees for 'just Zoom lectures' and a Netflix subscription is particularly unhelpful. If one thing is clear to me, it is that the vast majority of academic colleagues have gone way above and beyond, and have adapted with students' best interests at heart. Much of this has been built on the often understated work of learning technology, instructional design and academic development teams. Even so, one of the most persistent disputes centres on the issue of video duration.

Those of us in support roles have built productive relationships; we are widely trusted; we are persuasive; our credibility is rarely challenged. While debate continues around such things as what constitutes effective and sufficient asynchronous content, or the cameras-off/cameras-on question for live sessions, it is the issue of video length for recorded content that most lacks level-headedness. I think it is fair to argue that the research evidence is compelling in terms of the relationship between engagement, viewing time and video length. Guo et al.'s (2014) data from nearly 7 million MOOC video-watching sessions and Brame's (2016) connection between video length and cognitive load theory indicate that optimal viewing time is somewhere between 6 and 9 minutes. Institutional data from the lecture capture tool strongly buttresses the research evidence. Additionally, there is the experience of colleagues who have taught online for several years (including me), who can offer compelling experiential cases. A further layer might be evidence from educational videos on YouTube, such as the study by Tackett et al. (2018), which found that the medical education videos on one successful channel averaged just under 7 minutes and focused on one core concept. Yet that optimal time of 6-9 minutes is often received by academics with horror.

The first and most common counter-argument centres on what I would consider a false time equivalence between the conventional expectation of lecture length (and content) and the length of the videos that might replace it. When I say 'chunk content', I am NOT arguing for 6 x 10-minute videos to replace a 1-hour lecture. If a lecture is scheduled for an hour on campus, then around 50 minutes of that might be usable, for logistical and practical reasons. And it is unlikely that those 50 minutes will be crammed with content. There are likely to be cognitive breaks and opportunities for reinforcement in the form of discussions or questions. There is likely to be time for questions from students, time for connections to prior learning, opportunities to elicit latent knowledge and experience, and chances to connect the subject to the assessments. None of this needs to happen in the videos. In discussions with colleagues, we typically conclude that a 50-minute lecture might contain 2 or perhaps 3 key or threshold concepts. These are the essential or 'portal' ideas that open doors to broader understanding, and that lectures are an excellent medium for conveying. The essential content can thus be presented in much shorter chunks: say, for the sake of argument, 2 x 10 minutes.

‘Ah!’ some then say, ‘this is all very well, but students will feel short-changed!’ There is a huge underlying tension here; much of it feeds the ‘refund the fees’ arguments, and it is not helped by clunky contact-time equations. We must not ignore these issues, but neither should we pander to them. If we accept the logic of the paragraph above, then we should challenge this conceptualisation. If the alternative is a rambling 60-minute video that the statistics show few will reach the end of, offered only because that is what students think they have paid for, then we are not working in a research-informed way. To challenge it, we need to share the rationale for our learning designs and tool choices with students; be open with them about our pedagogies; rationalise our approaches. I would argue that we should pre-empt the ‘value for money’ arguments by talking students through the logic expressed above. Then, for added oomph, layer on the additional benefits:

  • Videos can be paused, rewound and rewatched which also means the pace can be faster and there’s no need for repetition.
  • Videos can increase access and accessibility.
  • The live contact time can be dedicated to i) deeper level, higher order discussion ii) application or analysis of the concepts that are defined in the videos or iii) opportunities for students to test their understanding or to give or receive feedback. 
  • Upload (for lecturers) and download (for students) times are shorter, reducing the potential for errors on weak connections or where VLE or video-hosting systems are struggling.
  • It pushes lecturers to revisit content and to reconsider threshold concepts and vital content.

Finally, it is not uncommon to hear colleagues argue that, despite the evidence from 'other' disciplines, students in their discipline like videos that are 1 or 2 hours long. Perhaps because those students are perceived to be wired differently, perhaps because it seems intuitive to have fewer videos that they can dip in and out of, or perhaps because the students insist that this is their preference.

A timer indicating 09:59:59, suggesting that the video has slightly overrun the optimal viewing time.
Photo by Markus Spiske on Unsplash

Every time I have a variant of this conversation I am left pondering how it is, in a centre of discovery, in a culture of research, that actual experience, research and learning can be so easily dismissed. And this even before we get into discussions about whether students are adequately predisposed to distinguish what works from what they prefer. I suspect that these sorts of conversations will be familiar to anyone working in an academic development or learning technology support capacity. 

These sorts of conversations have happened with surprising regularity this year, so receiving positive responses from colleagues who are prepared to consider the evidence is incredibly rewarding. A senior academic colleague in our Computing department attended one of my online CPD workshops on curating and creating video, where this discussion took place. Persuaded by the arguments presented here, he took the short-video plunge and was sufficiently impressed with the student feedback that he sent me an unsolicited summary of it, in which students said:

  • I found the videos really engaging. Having the videos split into sections made it a lot easier to learn.
  • I liked the way the lecture was split into different videos because it never felt like it was too long or boring.

I continue to struggle to fully understand what makes video length such a common sticking point. Perhaps the evidence challenges intuition? Perhaps it relates to how committed we are to the lecture/seminar structures of HE? Whatever it is, it does make the epiphanies and successes like the one described above all the more special.

References

Anon (2020). Pedagogy has nothing to teach us. Times Higher Education. Available: https://www.timeshighereducation.com/opinion/pedagogy-has-nothing-teach-us [accessed 22 January 2021].

Brame, C. J. (2016). Effective educational videos: Principles and guidelines for maximizing student learning from video content. CBE—Life Sciences Education, 15(4), es6.

Guo, P. J., Kim, J., & Rubin, R. (2014). How video production affects student engagement: An empirical study of MOOC videos. In Proceedings of the First ACM Conference on Learning @ Scale (pp. 41-50).

Tackett, S., Slinn, K., Marshall, T., Gaglani, S., Waldman, V., & Desai, R. (2018). Medical education videos for the world: An analysis of viewing patterns for a YouTube channel. Academic Medicine, 93(8), 1150-1156.

photo of Martin Compton 

Martin Compton is an Associate Professor working in the Arena Centre for Research-based Education at UCL. Email: martin.compton@ucl.ac.uk; Twitter: @mart_compton

Wednesday, 31 March 2021

Google Jamboard - an invaluable ally

Lucy Trewinnard, Digital Education Associate at Birkbeck, University of London writes exclusively for the BLE blog about Google Jamboard 

A nationwide move to online teaching saw lecturers put away their dry-wipe markers and erasers and start testing the array of digital whiteboards available to them.

Digital whiteboards are not just a replacement for the board on which an educator highlights notes during a class; they also give the student the pen, inviting collaboration and idea-sharing.

What is Google Jamboard and how does it work? 

Jamboard is Google's answer to the digital whiteboard. Aside from the 55-inch hardware screen you can buy, Jamboard is also a browser- and app-based piece of software residing in the Google Cloud, allowing real-time annotation and collaboration (for free).

A board invites its users to "Jam" by offering the ability to: 

  • Write, draw and mind-map
  • Sketch (Google's own image-recognition technology also boasts that it can turn your sketch into a polished image)
  • Add images straight from Google's image search function
  • Add Google Docs, Sheets or Slides
  • Collaborate, with up to 25 users able to work on a "Jam" at once
  • Back up to the Cloud – Jamboards save automatically, meaning that you can revisit them later

Digital whiteboards provide spaces for students to work collaboratively with each other, both in live sessions and out of class. Dr Becky Briant (Department of Geography, Birkbeck, University of London) and Dr Annie Ockelford (School of Environment and Technology, University of Brighton) talk here about their experience of teaching with Jamboard, as both a synchronous and an asynchronous tool, used with both small and large groups.

How do browser/app-based digital whiteboards differ from integrated whiteboards (Collaborate, MS Teams, Zoom)?

A lot of the platforms already in use across higher education institutions have their own answers to the digital whiteboard. Collaborate Ultra, MS Teams and Zoom all have whiteboard features which can be used effectively in teaching, as a method for collecting students' thoughts and responses in discussion. However, there are limitations: you cannot share images, in most cases there is no way to save the whiteboards that have been created (which also means no editing later), and they are not always large enough for everyone to contribute.

What is key to Jamboard (and to the other digital whiteboards used within digital education) is the versatility of these tools: they can be used for feedback tasks, for diagram/image annotation, as a group project area, for live question-and-answer responses... or just as a space for gathering thoughts. This versatility allows students to engage in discussion dynamically, across multiple different learning styles.

Limitations 

Of course, there are limitations. Jamboard, being a Google product, works at its very best when its users all have Google accounts, which is great if your institution's email is hosted by Google, but less friendly when it is hosted elsewhere; in that case your Jamboard has to sit on the web publicly.

Anonymity: there are both pros and cons to anonymity. With anonymous posts, students have the freedom to contribute to a "Jam" without fear of judgement; the problem, of course, is that students may also get away without contributing at all. It can be difficult to tell when a student is or isn't engaging.

During a live class, it can be difficult for students who are not accessing the class on a laptop, and who therefore cannot open a new window, to contribute to the "Jam". This poses a real challenge for synchronous use of the tool, where an integrated whiteboard may be preferable. It is important for educators to keep in mind which devices their students may be using to join classes.

Conclusion

Considering that Google Jamboard is free and relatively intuitive to use, even for the less tech-savvy, it can be a powerful ally for teaching: it invites students to contribute with words, images and drawings, and creates a place for them to meet for groupwork and discussion outside the traditional forums that have long been pillars of virtual learning environments. There are several collaborative digital whiteboards available, so it might be worth investigating whether these tools are something your institution could incorporate into its teaching.

If you are interested in hearing about first-hand experience lecturing with Jamboard, you can contact Dr Becky Briant (Birkbeck, University of London) at b.briant@bbk.ac.uk or Dr Annie Ockelford (University of Brighton) at a.ockelford@brighton.ac.uk.