This post by Martin Compton (UCL) was originally published at https://blogs.gre.ac.uk/glt/2020/09/29/digital-assessment-feedback/
The typical child will learn to
listen first, then talk, then read, then write. In life, most of us tend to use
these abilities proportionately in roughly the same order: listen most, speak
next most, read next most frequently and write the least. Yet in educational
assessment and feedback, and especially in higher education (HE), we value
writing above all else. After writing comes reading, then speaking and the
least assessed is listening. In other words, we value most what we use least. I
realise this is a huge generalisation and that there are nuances and arguments
to be had around this, but it is the broad principle and tendencies here that I
am interested in. Given the ways in which technology makes such things as
recording and sharing audio and video much easier than even a few years ago
(i.e. tools that provide opportunity to favour speaking and listening), it is
perhaps surprising how conservative we are in HE when it comes to changing
assessment and feedback practices. We are, though, at the threshold of an
opportunity whereby our increased dependency on technology, the necessarily
changing relationships we are all experiencing due to the ongoing implications
of Covid-19 and the inclusion, access and pedagogic affordances of the digital
mean we may finally be at a stage where change is inevitable and inexorable.
In 2009 while
working in Bradford, I did some research on using audio and video feedback on a
postgraduate teaching programme. I was amazed at the impact, the increased
depth of understanding of the content of the feedback and the positivity with
which it was received. I coupled it with delayed grade release too. The process
was: Listen to (or watch) the feedback, e-mail me with the grade band the
feedback suggested and then I would return the actual grade and use the
similarity or difference (usually, in fact, there was pretty close alignment)
to prompt discussion about the work and what could be fed forward. A few really
did not like the process but this was more to do with not liking the additional
process involved in finding out the grades they had been given rather than the
feedback medium itself. Only one student (out of 39) preferred written feedback
as a default and this included three deaf students (I arranged for them to
receive BSL signed feedback recorded synchronously with an interpreter while I
spoke the words). Most of the students not only favoured it, they
actively sought it. While most colleagues were happy to experiment or at least
consider the pros, cons and effort needed, at least one senior colleague was a
little frosty, hinting that I was making their life more difficult. On balance,
I found that once I had worked through the mechanics of the process and
established a pattern, I was actually saving myself perhaps 50% of marking time
per script, though some front-loading of effort was certainly necessary the first time around. I concluded that video feedback was powerful but, at
that time, too labour- and resource-intensive and stuck with audio feedback for
most of the students unless video was requested or needed. I continued to use
it in varying ways in my teaching, supporting others in their experimentation
and, above all, persuading the ‘powers that be’ that it was not only legitimate
but that it was powerful and, for many, preferable. I also began encouraging
students to consider audio or video alternatives to reflective pieces as I
worked up a digital alternative to the scale-tipping professional portfolios
that were the usual end-of-year marking delight.
Two years later I
found myself in a new job back in London and confronted with a very resistant
culture. As is not uncommon, it is an embedded faith in and dependency on the written word that determines policy and practice, rather than research and pedagogy. In performative cultures, written ‘evidence’ carries so much more
weight and trust, apparently irrespective of impact. Research (much better and
more credible than my own) has continued to show similar outcomes and benefits
(see summary in Winstone and Carless, 2019) but the overwhelming majority of
feedback is still of the written/typed variety. Given the wealth of tools available, the voluminous advocacy generated through the scholarship of teaching and learning, and the potential of technology in particular (see Newman and Beetham, 2018, for example), it has often been frustrating to me that assessment and feedback practices embracing the opportunities afforded by digital media seem few and far between. So, will there ever be
a genuine shift towards employing digital tools for assessment design and
feedback? As technology makes these approaches easier and easier, what is
preventing it? In many ways the Covid-19 crisis, the immediate ‘emergency
response’ of remote teaching and assessing and the way things are shaping up
for the future have given a real impetus to notions of innovative assessment.
Many of us were forced to confront our practice around timed examinations and, amid inevitable discussions of the proctoring possibilities technology offered (to be clear: I am not a fan!), we saw conversations about effective assessment and feedback processes taking place and a re-invigorated interest in how we might do things differently. I am
hoping we might continue those discussions to include all aspects of assessment
from the informal, in-session formative activities we do through to the ’big’,
high-stakes summatives.
Change will not
happen easily or rapidly, however. Hargreaves (2010) argues that a
principal enemy of educational change is social and political conservatism, and I would add to that a form of departmental, faculty or institutional conservatism that errs on the side of caution lest evaluation outcomes be negatively impacted. Covid-19 has disrupted everything and
whilst tensions remain between the conservative (very much of the small ‘c’
variety in this context) and change-oriented voices, it is clear that
recognition is growing of a need to modify (rather than transpose) pedagogic
practices in new environments and this applies equally to assessment and
feedback. In the minds of many lecturers, the technology that is focal to
approaches to technology-enhanced learning is often ill-defined or uninspiring
(Bayne, 2015) and the frequent de-coupling of tech investment
from pedagogically informed continuing professional development (CPD)
opportunities (Compton and Almpanis, 2018) has often reinforced these
tendencies towards pedagogic conservatism. Pragmatism, insight, digital
preparedness, skills development, and new ways of working through necessity are
combining to reveal a need for and willingness to embrace significant change in
assessment practices.
As former programme leader of an online PGCertHE (a lecturer training programme), I was always in the very fortunate position of being able to collect and share theories, principles and practices with colleagues, many of whom were novices in teaching. Though of course they had experienced HE as students, they were less likely to have a fossilised sense of what assessment and feedback should or could look like. I also had the professional and experiential agency to draw on research-informed practices, not only by talking about them but through exemplification and modelling (Compton and Almpanis, 2019). By showing that unconventional assessment (and feedback) is allowed and can be very rewarding, we are able to sow seeds of enthusiasm that lead to a
bottom-up (if still slow!) shift away from conservative assessment
practices. Seeing some colleagues embrace these strategies is rewarding
but I would love to see more.
References
Bayne, S. (2015). ‘What’s the matter with “technology-enhanced learning”?’ Learning, Media and Technology, 40(1), 5-20.
Bryan, C., & Clegg, K. (Eds.). (2019). Innovative assessment in higher education: A handbook for academic practitioners. Routledge.
Compton, M., & Almpanis, T. (2018). One size doesn’t fit all: rethinking approaches to continuing professional development in technology enhanced learning. Compass: Journal of Learning and Teaching, 11(1).
Compton, M., & Almpanis, T. (2019). Transforming lecturer practice and mindset: Re-engineered CPD and modelled use of cloud tools and social media by academic developers. In Rowell, C. (Ed.), Social Media and Higher Education: Case Studies, Reflections and Analysis. Open Book Publishers.
Hargreaves, A. (2010). ‘Presentism, individualism, and conservatism: The legacy of Dan Lortie’s Schoolteacher: A sociological study’. Curriculum Inquiry, 40(1), 143-154.
Newman, T., & Beetham, H. (2018). Student Digital Experience Tracker 2018: The voices of 22,000 UK learners. Bristol: Jisc.
Winstone, N., & Carless, D. (2019). Designing effective feedback processes in higher education: A learning-focused approach. Routledge.