Wednesday, 14 April 2021

The curious case of long videos: how research evidence, institutional data and experience struggle to trump gut instincts

This post by Martin Compton from UCL was originally posted on the ALT blog here, https://altc.alt.ac.uk/blog/2021/02/the-curious-case-of-long-videos-how-research-evidence-institutional-data-and-experience-struggle-to-trump-gut-instincts/#gref

The rapid changes to the ways in which most of us are teaching at the moment have led to some recurring debates that are surprisingly persistent despite what I would argue is strong contrary evidence. Fortunately, colleagues are rarely rude, deliberately divisive, dismissive and provocative in the manner of the Times Higher Education piece that appeared during the autumn term of 2020 (Anon, 2020). In that article an anonymous academic berated educational ‘evangelists’ for trying to force new teaching ‘fads’ on academics, who apparently burn with resentment at being constantly torn from their research and burdened by inanities like teaching. The colleagues I have in mind, by contrast, are almost universally rational and reasonable and do take teaching seriously. Nevertheless, there are recurrent areas where rationality is usurped by a refusal to accept what should be compelling evidence for good practice. As a consequence, they can sometimes find themselves in a position as blinkered as that of the provocateur in that piece. My primary focus here will be on discussions about the length of videoed ‘lecture’ content.

[Image: a young woman sitting outside in the sun looking at her computer screen, with papers at her side. Photo by Windows on Unsplash]

The enforced ‘pivot’ to emergency remote teaching and the subsequent transitions to online teaching in the academic year 20-21 have ranged from significant to total. The efforts and outcomes have varied, and high-profile complaints centring on a narrative about the financial value of online teaching often mask the quietly successful or, in some cases, transformed approaches. The false equivalence often invoked between fees for ‘just Zoom lectures’ and a Netflix subscription is particularly unhelpful. If one thing is clear to me, it is that the vast majority of academic colleagues have gone way above and beyond, and have adapted with students’ best interests at heart. Much of this has been built on the often understated work of learning technology, instructional design and academic development teams. Even so, one of the most persistent disputes centres around the issue of video duration.

Those of us in support roles have built productive relationships; we are widely trusted; we are persuasive; our credibility is rarely challenged. While debate continues around such things as what constitutes effective and sufficient asynchronous content, or whether cameras should be on or off in live sessions, it is the issue of video length for recorded content that most lacks level-headedness. I think it is fair to argue that the research evidence is compelling in terms of the relationship between engagement, viewing time and video length. Guo et al.’s (2014) data from nearly 7 million MOOC video-watching sessions and Brame’s (2016) connection between video length and cognitive load theory indicate that optimal viewing time is somewhere between 6 and 9 minutes. Institutional data from the lecture capture tool strongly buttresses the research evidence. Additionally, there is the experience of colleagues who have taught online for several years (including me) who can offer compelling experiential cases. A further layer might be evidence from educational videos on YouTube, such as the study by Tackett et al. (2018), which found that the medical education videos on one successful channel averaged just under 7 minutes and focussed on one core concept. Yet that optimal time of 6-9 minutes is often received by academics with horror.

The first and most common counter-argument centres on what I would consider to be a false time equivalence between the conventional expectation of lecture length (and content) and the length of the videos that might replace them. When I say chunk content, I am NOT arguing for 6 x 10-minute videos to replace a one-hour lecture. If a lecture is scheduled for one hour on campus, then around 50 minutes of that might be usable for logistical and practical reasons. It is unlikely that those 50 minutes would be crammed with content: there are likely to be cognitive breaks and opportunities for reinforcement in the form of discussions or questions, time for questions from students, time for connections to prior learning, opportunities to elicit latent knowledge and experience, and chances to connect the subject to the assessments. None of this need happen in the videos. In discussions with colleagues, we typically conclude that a 50-minute lecture might contain 2 or perhaps 3 key or threshold concepts. These are the essential or ‘portal’ ideas that open doors to broader understanding, and that lectures are an excellent medium for. The essential content can thus be presented in much shorter chunks. Say, for the sake of argument, this is 2 x 10 minutes.
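
To make that back-of-the-envelope arithmetic concrete, here is a minimal sketch in Python. The numbers (a 60-minute slot, roughly 50 usable minutes, an assumed 60 per cent of that given over to discussion, questions and recap, and two threshold concepts) are the illustrative ones from the paragraph above, not institutional guidance or a recommended formula.

```python
# Illustrative planning arithmetic only; the figures are the hypothetical ones
# discussed above, not a recommendation or institutional policy.

scheduled_minutes = 60      # timetabled lecture slot
usable_minutes = 50         # after logistical and practical losses
interactive_share = 0.6     # assumed share given to questions, discussion, recap
key_concepts = 2            # threshold concepts a 50-minute lecture might carry

core_content_minutes = usable_minutes * (1 - interactive_share)
minutes_per_video = core_content_minutes / key_concepts

print(f"Scheduled slot: {scheduled_minutes} minutes; usable: {usable_minutes}")
print(f"Core content to record: ~{core_content_minutes:.0f} minutes")
print(f"Plan: {key_concepts} videos of ~{minutes_per_video:.0f} minutes each")
```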

‘Ah!’ some then say, ‘This is all very well, but students will feel short-changed!’ There is a huge underlying tension here; much of it feeds the ‘refund the fees’ arguments and is not helped by clunky contact-time equations. We must not ignore these issues, but neither should we pander to them. If we accept the logic of the paragraph above, then we should challenge this conceptualisation. If the alternative is a rambling 60-minute video, which the statistics show few will watch to the end, made only because that is what students think they have paid for, then we are not working in a research-informed way. To challenge it, we need to share the rationale for our learning designs and tool choices with students; be open with them about our pedagogies; rationalise our approaches. I would argue that we should pre-empt the ‘value for money’ arguments by talking students through the logic expressed above. Then, for added oomph, layer on the additional benefits:

  • Videos can be paused, rewound and rewatched, which also means the pace can be faster and there is no need for repetition.
  • Videos can increase access and accessibility.
  • The live contact time can be dedicated to i) deeper level, higher order discussion ii) application or analysis of the concepts that are defined in the videos or iii) opportunities for students to test their understanding or to give or receive feedback. 
  • Upload (for lecturers) and download (for students) times are shorter, which reduces the potential for errors on weak connections or where VLE or video-hosting systems are struggling.
  • It pushes lecturers to revisit content and to reconsider threshold concepts and vital content.

Finally, it is not uncommon to hear colleagues argue that, despite the evidence from ‘other’ disciplines, students in their discipline like videos that are 1 or 2 hours long. Perhaps because they are perceived to be wired differently, perhaps because it seems intuitive to have fewer videos that they can dip in and out of, or perhaps because the students insist that this is their preference. 

[Image: a timer indicating 09:59:59, suggesting that the video has slightly overrun the optimal viewing time. Photo by Markus Spiske on Unsplash]

Every time I have a variant of this conversation I am left pondering how it is, in a centre of discovery, in a culture of research, that actual experience, research and learning can be so easily dismissed. And this is even before we get into discussions about whether students are well placed to distinguish what works from what they prefer. I suspect that these sorts of conversations will be familiar to anyone working in an academic development or learning technology support capacity. 

These sorts of conversations have happened with surprising regularity this year, so receiving positive responses from colleagues who are prepared to consider the evidence is incredibly rewarding. A senior academic colleague in our Computing department attended one of my online CPD workshops on curating and creating video, where this discussion took place. Persuaded by the arguments presented here, he took the short-video plunge and was sufficiently impressed with the student feedback that he sent me an unsolicited summary of it, in which students said:

  • I found the videos really engaging. Having the videos split into sections made it a lot easier to learn.
  • I liked the way the lecture was split into different videos because it never felt like it was too long or boring.

I continue to struggle to fully understand what makes video length such a common sticking point. Perhaps the evidence challenges intuition? Perhaps it relates to how committed we are to the lecture/seminar structures in HE? Whatever it is, it does make epiphanies and successes like the one described above all the more special. 

References

Anon (2020) Pedagogy has nothing to teach us. Times Higher Education. Available: https://www.timeshighereducation.com/opinion/pedagogy-has-nothing-teach-us [accessed 22 January 2021]

Brame, C. J. (2016). Effective educational videos: Principles and guidelines for maximizing student learning from video content. CBE-Life Sciences Education, 15(4), es6.

Guo, P. J., Kim, J., & Rubin, R. (2014, March). How video production affects student engagement: An empirical study of MOOC videos. In Proceedings of the First ACM Conference on Learning @ Scale (pp. 41-50).

Tackett, S., Slinn, K., Marshall, T., Gaglani, S., Waldman, V., & Desai, R. (2018). Medical education videos for the world: An analysis of viewing patterns for a YouTube channel. Academic Medicine, 93(8), 1150-1156.

[Photo of Martin Compton]

Martin Compton is an Associate Professor working in the Arena Centre for Research-based Education at UCL. Email: martin.compton@ucl.ac.uk Twitter: @mart_compton 

Wednesday, 31 March 2021

Google Jamboard - an invaluable ally

Lucy Trewinnard, Digital Education Associate at Birkbeck, University of London, writes exclusively for the BLE blog about Google Jamboard 

A nationwide move to online teaching saw lecturers put away their dry wipe markers and erasers and start testing out the array of digital whiteboards available to them. 

Digital whiteboards are not just a replacement for the board on which an educator highlights notes during a class; they also give the student the pen, inviting collaboration and idea sharing.

What is Google Jamboard and how does it work? 

Jamboard is Google's answer to the digital whiteboard. Aside from being a 55-inch hardware screen you can buy, Jamboard is also a browser- and app-based piece of software residing in the Google Cloud that allows real-time annotation and collaboration (for free).

A board invites its users to "Jam" by offering the ability to: 

  • Write, draw and mind-map
  • Sketch (Google's own image recognition technology also boasts that it can turn your sketch into a polished image) 
  • Add images straight from Google's image search function
  • Add Google Docs, Sheets or Slides
  • Collaborate - with up to 25 users being able to work on a "Jam" at once. 
  • Back up to the cloud - Jams save automatically, meaning that you can revisit them later.


Digital whiteboards provide spaces for students to work collaboratively with each other, both in live sessions and out of class. Dr Becky Briant (Department of Geography, Birkbeck, University of London) and Dr Annie Ockelford (School of Environment and Technology, University of Brighton) talk here about their experience of teaching with Jamboard, as both a synchronous and an asynchronous tool, used with both small and large groups.

 


How do browser/app-based digital whiteboards differ from integrated whiteboards (Collaborate, MS Teams, Zoom)?

A lot of the platforms being used across higher education institutions already have their own answers to the digital whiteboard. Collaborate Ultra, MS Teams and Zoom all have whiteboard features which can be used effectively in teaching, as a method for collecting students’ thoughts and responses in discussion. However, there are limitations: images cannot be shared, in most cases the whiteboards that have been created cannot be saved (which also means no editing later), and the boards are not always large enough for everyone to contribute. 

What is key to Jamboard (and to other digital whiteboards used within digital education) is the versatility with which these tools can be used: for feedback tasks, for diagram/image annotation, as a group project area, for live question-and-answer responses... or just as a space for gathering thoughts. This versatility allows students to engage in discussion dynamically, across multiple different learning styles. 

Limitations 

Of course, there are limitations. Jamboard, being a Google product, works at its very best when its users all have Google accounts, which is great if your institution's email is hosted by Google but less friendly when it is hosted elsewhere; in that case your Jamboard has to sit on the web publicly. 

Anonymity: there are both pros and cons that come with anonymity. With anonymous posts, students have the freedom to contribute to a "Jam" without fear of judgement; the problem, of course, is that students may be able to get away without contributing at all. It can be difficult to tell when a student is or isn't engaging.

During a live class, it can be difficult for students who are not accessing the class on a laptop, and who therefore cannot open new windows, to contribute to the "Jam". This poses a real challenge for synchronous use of the tool, where an integrated whiteboard may be preferable. It is important for educators to keep in mind which devices their students may be using to join classes.

Conclusion

Considering that Google's Jamboard is free and relatively intuitive to use, even for the less tech-savvy, it can be a powerful ally for teaching: inviting students to contribute with words, images and drawings, and creating a place for them to meet for group work and to form discussions outside the traditional forums that have long been pillars of virtual learning environments. There are several collaborative digital whiteboards available, so it might be worth investigating whether these tools are something that your institution could incorporate into its teaching. 

If you are interested in hearing about first-hand experience lecturing with Jamboard, you can contact Dr Becky Briant (Birkbeck, University of London) at b.briant@bbk.ac.uk or Dr Annie Ockelford (University of Brighton) at a.ockelford@brighton.ac.uk.

Monday, 29 March 2021

Will Covid-19 finally catalyse the way we exploit digital options in assessment and feedback?

This post by Martin Compton from UCL was originally posted here https://blogs.gre.ac.uk/glt/2020/09/29/digital-assessment-feedback/


The typical child will learn to listen first, then talk, then read, then write. In life, most of us tend to use these abilities proportionately in roughly the same order: listen most, speak next most, read next most frequently and write the least. Yet in educational assessment and feedback, and especially in higher education (HE), we value writing above all else. After writing comes reading, then speaking and the least assessed is listening. In other words, we value most what we use least. I realise this is a huge generalisation and that there are nuances and arguments to be had around this, but it is the broad principle and tendencies here that I am interested in. Given the ways in which technology makes such things as recording and sharing audio and video much easier than even a few years ago (i.e. tools that provide opportunity to favour speaking and listening), it is perhaps surprising how conservative we are in HE when it comes to changing assessment and feedback practices. We are, though, at the threshold of an opportunity whereby our increased dependency on technology, the necessarily changing relationships we are all experiencing due to the ongoing implications of Covid-19 and the inclusive, access and pedagogic affordances of the digital mean we may finally be at a stage where change is inevitable and inexorable.

In 2009, while working in Bradford, I did some research on using audio and video feedback on a postgraduate teaching programme. I was amazed at the impact, the increased depth of understanding of the content of the feedback and the positivity with which it was received. I coupled it with delayed grade release too. The process was this: students listened to (or watched) the feedback, e-mailed me the grade band they thought it suggested, and I then returned the actual grade and used the similarity or difference (usually, in fact, there was pretty close alignment) to prompt discussion about the work and what could be fed forward. A few really did not like the process, but this was more to do with the additional steps involved in finding out their grades than with the feedback medium itself. Only one student (out of 39) preferred written feedback as a default, and the group included three deaf students, for whom I arranged BSL-signed feedback recorded synchronously with an interpreter while I spoke the words. Most of the students not only favoured it, they actively sought it. While most colleagues were happy to experiment or at least consider the pros, cons and effort needed, at least one senior colleague was a little frosty, hinting that I was making their life more difficult. On balance, I found that once I had worked through the mechanics of the process and established a pattern, I was actually saving myself perhaps 50% of marking time per script, though some front-loading of effort was certainly necessary the first time. I concluded that video feedback was powerful but, at that time, too labour- and resource-intensive, and I stuck with audio feedback for most students unless video was requested or needed. I continued to use it in varying ways in my teaching, supporting others in their experimentation and, above all, persuading the ‘powers that be’ that it was not only legitimate but powerful and, for many, preferable. I also began encouraging students to consider audio or video alternatives to reflective pieces as I worked up a digital alternative to the scale-tipping professional portfolios that were the usual end-of-year marking delight.

Two years later I found myself in a new job back in London, confronted with a very resistant culture. As is not uncommon, it is an embedded faith in and dependency on the written word that determines policy and practice, rather than research and pedagogy. In performative cultures, written ‘evidence’ carries so much more weight and trust, apparently irrespective of impact. Research (much better and more credible than my own) has continued to show similar outcomes and benefits (see the summary in Winstone and Carless, 2019), but the overwhelming majority of feedback is still of the written or typed variety. Given the wealth of tools available and the voluminous advocacy generated through the scholarship of teaching and learning, and the potential of technology in particular (see Newman and Beetham, 2018, for example), it is often frustrating for me that assessment and feedback practices that embrace the opportunities afforded by digital media seem few and far between. So, will there ever be a genuine shift towards employing digital tools for assessment design and feedback? As technology makes these approaches easier and easier, what is preventing it? In many ways the Covid-19 crisis, the immediate ‘emergency response’ of remote teaching and assessing, and the way things are shaping up for the future have given real impetus to notions of innovative assessment. Many of us were forced to confront our practice in terms of timed examinations and, amid inevitable discussions around the proctoring possibilities technology offered (to be clear: I am not a fan!), we saw discussions about effective assessment and feedback processes occurring and a re-invigorated interest in how we might do things differently. I am hoping we might continue those discussions to include all aspects of assessment, from the informal, in-session formative activities we do through to the ‘big’, high-stakes summatives.

Change will not happen easily or rapidly, however. Hargreaves (2010) argues that a principal enemy of educational change is social and political conservatism, and I would add to that a form of departmental, faculty or institutional conservatism that errs on the side of caution lest evaluation outcomes be negatively impacted. Covid-19 has disrupted everything and, whilst tensions remain between the conservative (very much of the small ‘c’ variety in this context) and change-oriented voices, it is clear that recognition is growing of a need to modify (rather than transpose) pedagogic practices in new environments, and this applies equally to assessment and feedback. In the minds of many lecturers, the technology that is focal to approaches to technology-enhanced learning is often ill-defined or uninspiring (Bayne, 2015), and the frequent de-coupling of tech investment from pedagogically informed continuing professional development (CPD) opportunities (Compton and Almpanis, 2018) has often reinforced these tendencies towards pedagogic conservatism. Pragmatism, insight, digital preparedness, skills development, and new ways of working through necessity are combining to reveal a need for, and a willingness to embrace, significant change in assessment practices.

As former programme leader of an online PGCertHE (a lecturer training programme) I was always in the very fortunate position of being able to collect and share theories, principles and practices with colleagues, many of whom were novices in teaching. Though of course they had experienced HE as students, they were less likely to have a fossilised sense of what assessments and feedback should or could look like. I also have the professional and experiential agency to draw on research-informed practices, not only by talking about them but through exemplification and modelling (Compton and Almpanis, 2019). By showing that unconventional assessment (and feedback) is allowed and can be very rewarding, we are able to sow seeds of enthusiasm that lead to a bottom-up (if still slow!) shift away from conservative assessment practices. Seeing some colleagues embrace these strategies is rewarding, but I would love to see more.

References 

Bayne, S. (2015). What’s the matter with ‘technology-enhanced learning’? Learning, Media and Technology, 9(1), 251-257.

Bryan, C., & Clegg, K. (Eds.). (2019). Innovative assessment in higher education: A handbook for academic practitioners. Routledge.

Compton, M., & Almpanis, T. (2019). Transforming lecturer practice and mindset: Re-engineered CPD and modelled use of cloud tools and social media by academic developers. In Rowell, C. (Ed.), Social Media and Higher Education: Case Studies, Reflections and Analysis. Open Book Publishers. 

Compton, M., & Almpanis, T. (2018). One size doesn’t fit all: Rethinking approaches to continuing professional development in technology enhanced learning. Compass: Journal of Learning and Teaching, 11(1).

Hargreaves, A. (2010). ‘Presentism, individualism, and conservatism: The legacy of Dan Lortie’s Schoolteacher: A sociological study’. Curriculum Inquiry, 40(1), 143-154.

Newman, T. and Beetham, H. (2018) Student Digital Experience Tracker 2018: The voices of 22,000 UK learners. Bristol: Jisc.

Winstone, N., & Carless, D. (2019). Designing effective feedback processes in higher education: A learning-focused approach. Routledge.

Monday, 22 March 2021

Pandemic pedagogy: in praise of collaboration and compassion

This post, written by Samantha Ahern from UCL, was first published on the ALT blog, https://altc.alt.ac.uk/blog/2021/03/pandemic-pedagogy-in-praise-of-collaboration-and-compassion/ 

In responding to the pandemic and its impact on higher education, there has been, now more than ever, a need for and a move toward collaboration and partnerships within our institutions.

The academic mission has only been made possible by these collaborations: educational developers working with digital education and faculty teams, supported by underpinning professional services. Certainly, my role as a faculty learning technology lead would not be possible without them.

It has been a tough year for everyone. I am saddened by some recent articles criticising faculty for not embracing new technologies or not seeing them as creators of learning content and materials, because in my experience this is just not true.

Yes, there are always some colleagues who take more of a tortoise approach to developing their digital pedagogy. And there are also those who are hares, always trying something new, pushing boundaries.

But the most important thing is that whatever approach they take, it’s considered. It is pedagogy- rather than technology-driven. It is accessible, in both senses of the term, for all students, so that they have an equitable learning experience, and it is ethical.

Yes, there are some amazing technologies that are becoming available to us, and some have great promise. But are they the right thing, at the right time, for the right reason?

It’s too easy to see many modern learning technologies and digital platforms as mere tools, instruments for our use. But they are more than that. They are human-made; constructed into them are their creators’ ideas of what education should be, how it should be moulded, what is important and, crucially, all our biases, conscious and sub-conscious. I am never in a rush to adopt the latest shiny new thing.

Don’t get me wrong. There are platforms, equity considerations and approaches to designing blended and online learning that I would like faculty colleagues to engage with. But, much like our students, I know that they are each in a different place in their learning and development. I choose what to champion, and when, based on my knowledge of them and their needs. I identify the one thing I would like them to focus on. There are many things I could ask them to adjust, tweak and change, but I know that it’s not always appropriate. I hope my interactions, support and interventions are compassionate and empowering.

I have been heartened by the efforts of all colleagues, particularly by the ownership of online learning spaces and the desire to create good learning experiences for students. Those who teach and support teaching activities have always been both content creators and curators. For many this year, the types of content they have created, and the considerations that requires, have been completely new. Not only has a new way of thinking been required, but also a new way of doing, and in a very short space of time.

I really hope that these collaborations, experimentations and pedagogic re-evaluations and reconsiderations will continue post-pandemic.

So yes, maybe things have not been approached in the way they would have been by a learning design expert or specialist content creator. But so what? Instead of criticising colleagues for what they aren’t doing, why not celebrate them for what they have done and support them in continuing to do so.


Samantha’s approach to learning design is shared in this blog post and she clearly believes in the importance of putting people before processes. She also writes this about digital wellbeing and shares practical advice about coping during Covid.
Samantha Ahern (FHEA, ACMALT) @2standandstare
Faculty Learning Technology Lead (Bartlett),
Digital Education, Information Services Division, University College London

Monday, 8 March 2021

Let’s admit that students may have learned less

This post was first published on Wonkhe, https://wonkhe.com/blogs/lets-admit-that-students-have-learned-less/

CDE Fellow David Baume says we should acknowledge the Covid-19-shaped asterisk beside students' qualifications this year.

Gavin Williamson said something very helpful on February 25. Stay with me.
He said that 2021 A level students will be assessed on what they had been taught, not on what they had missed.
Maybe he will take this to its logical conclusion. Perhaps A level results will be accompanied by a transcript showing not just marks and grades in subjects, but additionally what outcomes students had achieved. Assuming that there is a close relationship between learning outcomes and assessment tasks, this could be done largely mechanically rather than individually.
This transcript could be accompanied by an account of what they had been taught. The users of A level results could then compare these two accounts with the syllabus, and thus be able to make better use of the results. They could calibrate their expectations of what grades mean this year. They could be clear about which subsets of the syllabus each student had shown proficiency in, and which not. And they could be clear about which subsets it would be unreasonable to expect students to demonstrate attainment in, because those subsets had not been taught.
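As a rough illustration of how such a transcript could be produced "largely mechanically", here is a hypothetical sketch in Python. The mapping of assessment tasks to learning outcomes, the set of outcomes taught and the tasks a student passed are all invented for illustration; a real system would draw them from the awarding body's specification and the marked results.
```python
# Hypothetical sketch of the "largely mechanical" transcript idea.
# All names and data below are invented for illustration only.

# Assumed mapping: which learning outcomes each assessment task evidences.
task_to_outcomes = {
    "Paper 1 Q3": ["LO1", "LO2"],
    "Paper 1 Q5": ["LO3"],
    "Paper 2 Q1": ["LO4", "LO5"],
}

outcomes_taught = {"LO1", "LO2", "LO3", "LO4"}   # what the centre covered this year
tasks_passed = {"Paper 1 Q3", "Paper 2 Q1"}      # where the student showed proficiency

all_outcomes = {lo for los in task_to_outcomes.values() for lo in los}
demonstrated = {lo for task in tasks_passed for lo in task_to_outcomes[task]}

transcript = {
    "demonstrated": sorted(demonstrated & outcomes_taught),
    "taught but not demonstrated": sorted(outcomes_taught - demonstrated),
    "not taught (no attainment expected)": sorted(all_outcomes - outcomes_taught),
}

for heading, outcomes in transcript.items():
    print(f"{heading}: {', '.join(outcomes) or 'none'}")
```
The point is simply that, once outcomes and tasks are mapped, the three categories a user of the results would need fall out automatically.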
Students graduating from university in 2020, 2021 and (insert your own closing date here) will, in their own minds and in the minds of employers and others, have a large Covid-19-shaped asterisk by their degree. The Minister’s plan, with my modest extension to it, will mean that we know what this asterisk means for A-levels.
The obvious question is – what will that asterisk mean for university graduates?
A better question may be – what will graduates, employers and others take that asterisk to mean?
The best kind of question, a Wonkhe-type question, may be – what kind of assessment policy would help here?

Deconstructing education and assessment


The university's contract of education with the student may be, essentially, this:
  • Universities admit students who will have a reasonable prospect of graduating;
  • Universities teach students what the students need to know or be able to do, and how well, to enable them to graduate, and more broadly support them to learn;
  • Students, meanwhile, do the necessary listening and studying and learning; and
  • Most of those students get some sort of a degree.
Similarly crudely, there may be two dimensions to assessment judgments:
  • Quantity – how much have the students learned / how much can they do?
  • And quality – how well do they know it / how well can they do it?
Talk of standards in assessment generally and unhelpfully crunches these two dimensions of assessment together.

A giant finger of fudge


There are proposals to convert that Covid-shaped asterisk into a giant finger of fudge. The idea is that we adjust / inflate / normalise students’ grades. In other words, that we pretend that they have learned and achieved more / better than they have actually learned and achieved. In fact, we pretend that they have learned pretty much what they would have learned in a non-Covid year.
Of course, we fudge assessment anyway. Under the seemingly benign label of “norm-referenced” assessment, we fudge marks up or down, each year, so that, on balance, overall, more or less, the distribution of marks looks pretty much the same as it did last year.
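For readers unfamiliar with the mechanics, one crude reading of norm-referencing is a linear rescaling of this year's raw marks onto a reference mean and spread, such as last year's. The toy sketch below, in Python with invented numbers, is purely illustrative of what "making the distribution look the same" amounts to; it is not any particular institution's actual moderation procedure.
```python
# Toy illustration of what "making the distribution look like last year's"
# amounts to: a linear rescaling onto a reference mean and spread.
# Numbers are invented; this is not any institution's actual procedure.
from statistics import mean, stdev

reference_mean, reference_sd = 62.0, 10.0          # assumed "last year" profile
raw_marks = [48, 55, 57, 60, 63, 66, 70, 74]       # this year's raw marks

raw_mean, raw_sd = mean(raw_marks), stdev(raw_marks)
adjusted = [reference_mean + (m - raw_mean) / raw_sd * reference_sd
            for m in raw_marks]

print("raw     :", raw_marks)
print("adjusted:", [round(m, 1) for m in adjusted])
```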
But what happens when we fudge marks upwards in Covid times? We don’t fool anybody.
Universities, students and employers all know that graduates from the Covid years may well have learned less, and / or learned to perform less well, than graduates from previous years, because teaching and learning were disrupted.
Employers can cope - they will adjust their view of the students to reflect what they turn out actually to have learned, what they actually know, what they can actually do, and how well. That will confirm the already widely held (and pretty accurate) view that university grades are measured on rulers made mostly from rubber, although that's probably not a message that universities, which are among other things guardians of standards, want to reinforce.
Universities, who in this respect are being flexible with the truth, and the students, who know that their current state of knowledge and capabilities are probably being flattered by their award, are both likely to feel a bit grubby. This finger of fudge is no treat at all.
Of course, some students may do better under the new regime. Even if the questions are the same, a Covid-era twenty-four-hour unseen open-world un-invigilated exam, answered via a keyboard, is a very different task from the pre-Covid two-hour unseen invigilated handwritten version.

Tell the truth


Alternatively, we could mark against the same inviolable-ish standards we use every year, before we norm-reference. What would happen?
Most students would (presumably) get somewhat lower grades in the Covid years (although see the caution above about assessment methods). Because they had learned less. Because their teaching and learning had been disrupted.
Employers, and others who use degree qualifications as a basis for selection, might have to change their selection criteria – reduce their expectations of graduates; maybe also provide their new recruits with more initial training and development to fill the Covid-19-induced gaps in what their incoming graduates had learned. Some students will have managed to learn very well, despite the disruptions to their teaching.

“But that’s not fair!”

There would be concerns about fairness – but Covid isn’t fair, and the disruption to education caused by restrictions arising from it isn’t fair. The fact that some students are better than others at independent learning isn’t fair. The fact that some universities have done a better job at taking education online than others isn’t fair. The fact that different assessment methods may have been used isn’t fair. Very little here is fair.
We can’t honestly “no-detriment” for Covid. Nor can we do the opposite, no-unfair-advantage. Covid happened, is happening and will continue to happen, for a while yet. And it has detrimental effects, and maybe a few positive effects, probably different effects on different students, subjects, courses, universities.
Can we meaningfully safety-net? To some extent.
We can surely allow more preparation time for assessment, perhaps early sight of the paper – very few learning outcomes specify preparation time, although assessment regulations may. Regulations can be reviewed, and changed, as long as we acknowledge that regulations affect the assessment task.
We can surely allow more resubmissions – very few learning outcomes say that the outcome has to be achieved at the first or second attempt, although again current regulations may.
But I don’t think we can sensibly safety-net by, for example, adjusting everyone’s post-mid-March-2020 marks to line up with the marks they were getting before that date.
Marks mean outcomes and standards achieved, things learned, capabilities developed and demonstrated. To adjust in this way would be implicitly to pretend that students had learned things they hadn’t learned, learned to do better than they had actually done, achieved outcomes that they hadn’t achieved.

Discomfort

Telling the truth here may feel uncomfortable. That may be because we confuse grades with worth, with intelligence, with academic ability, with other desirable qualities. A student is not a worse human being because they got a lower rather than an upper second. A student who might otherwise, in a non-Covid year, have got an upper second is not academically poorer or weaker just because, in a Covid year, they got a lower second.
They may have different (less) current knowledge, different (fewer) current capabilities, different (lower) current levels of proven ability. Because their teaching and learning were disrupted. And so they have different development needs for whatever they decide to do next. They are the same person. With the same potential. They may just be a few months behind.

Policy implications?

We should be truthful. Let’s acknowledge and declare that the Covid-19-shaped asterisk beside their qualification means that, because of the pandemic, their teaching was disrupted, and so they (may have) learned less. No shame.
If we exaggerate what has been learned; if we fudge what Covid-era qualifications actually mean – then we damage the credibility of universities. I don’t think we want to do that.
The 2021 A-level approach may show a way, with my modest extensions to it. A transcript, showing what students have actually learned, and what they have been taught, and under what conditions they have demonstrated what they have learned. No fudge.