Tuesday, 5 November 2024

Jisc Digital Student Experience Experts Meeting

Dr Fiona Harvey, Faculty Learning Technology Lead for Social and Historical Sciences at UCL, shares her experience of attending a recent Jisc meeting, to which the BLE also contributed.

A couple of weeks ago we attended the Jisc network meeting for Student Experience. Obviously, being Jisc, the focus is always on the effective use of digital educational technology to support and enhance students' academic journeys. This was the first meeting I had attended since 2018, and it was great to see familiar faces and meet new people. It was also the first time I had spoken in person since the pandemic, and the Jisc meetings are always a good place to do that. There was a group of about 50 people, and from the moment you walked into the room it was friendly and welcoming. Natasha Catnott, our Faculty EDI Coordinator, attended with me, as we were presenting the outcomes of the SHS International Students' Digital Experiences project (report on the way). Natasha remarked on how friendly everyone was.

The focus for this meeting was the Jisc international students work, which has produced a series of reports, briefing papers and articles, and contributors to the day included some of the institutions involved in those projects. There were also presentations from those of us who have been inspired by the reports and articles, including the Bloomsbury Learning Exchange (Sarah Sherman), who introduced their new MOOC, ‘Is a PhD Right for Me?’, designed to encourage participation from a whole range of different groups of students, especially under-represented students.

Having been part of this network for many years, it was interesting to see who was now attending these events. In the past it was digital education folks, those actively supporting and introducing educational technology, including academic staff as well as educational developers and some learning technologists. This event was different in that there were representatives from across student support services who were actively considering how digital technologies are applied in the context of their work. It was the first time that we were asked, “How many of you are actively involved in your institutional Access and Participation Plan?” I thought that was an interesting, and completely relevant, question to ask. My recent research on how we perceive and make decisions on behalf of non-traditional students reflected on that very point: we must demonstrate a whole-institution approach, and yet many staff not directly involved in preparing their APP feel that it has nothing to do with them.

The themes for the day were inclusivity and belonging, which were exciting and interesting to hear about, particularly how the educational technology and administrative systems that students need to engage with can affect the student experience. From the presentations and the discussions during the day, it was clear that this work had shone a light on the kind of support we provide for the ‘standard student’ (white, middle-class and domestic) and how that doesn't allow for any kind of difference. It sounds obvious when you say it out loud, but there are so many assumptions made about students' prior digital experiences, and we just don't make it clear enough to students from different cultures what happens when they come to the UK. Jisc called these experiences ‘Digital Shocks’, which is a good way of describing them. When we spoke to students in our study, they didn't describe shocks, but they were confused and bewildered by the unwritten rules of engagement with non-university systems, especially social media.

Sumeya Loonat, De Montfort University

The presentations that focused on belonging were creative and inspiring, with a thoughtful approach taken to helping students become accustomed to the ‘ways of digital’ at their UK institutions. Sumeya Loonat from De Montfort University introduced us to the Business and Law faculty's initiatives to help students feel part of the university, using digital technology to help them cope with the various challenges of being in a new place and not knowing anyone. Students found out about the city (Leicester) by producing maps, got to know each other by creating a Digital Cookbook (such a great idea), and shared tips and experiences through podcasts. All of these initiatives help to build a community and a supportive environment.

Dr Patrice Seuwou, University of Northampton

Dr Patrice Seuwou from the University of Northampton talked about their strong focus on social justice, equality and inclusion, and made a compelling case for raising awareness across the support services for international students, recognising that the digital challenges were in addition to the many cultural challenges their students faced. There was a clear emphasis on Global Ethnic Majority (GEM) students, and as a result of their involvement with the Jisc projects they had made changes to the information provided to students before arrival. They also made use of the Jisc Digital Insights surveys to monitor and enhance their students' digital experiences, and they hoped to develop their digital education services towards a more integrated approach. This will, of course, help all students.

With a focus on student partnership, Plymouth University was represented by Hannah Miles and Dr Helen Bowstead, who have built on their raft of peer learning activities (including PALS, Hello Project, Digital Champions, and Writing Cafe Mentors) by bringing in a focus on international students. They had run a project called Transitioning Successfully into University as a way of identifying gaps in their provision for international students, and used their existing student networks to help bridge those gaps. Again, they mentioned the differing levels of digital skills and literacies that these students arrive with, which had previously gone unaddressed.

Simon Golshetti and Juan Raman Mullor from Staffordshire University talked about their research project for postgraduate students, looking into challenges around assessments, learning resources and skills. Their work is still underway, but they highlighted the barriers their students faced in terms of culturally unfamiliar assessment practices and experiences. Their mission is to improve outcomes for international students, which in turn will help all students.

Student Experience panel discussion

There were also panels and discussions throughout the day. The overriding message was to unpick the assumptions we make about who we ‘think’ these students are and what they know about how we do things. Links were also made to the value of teams drawn from across the institution, such as widening participation, digital education and IT departments, each bringing their own perspectives (and assumptions), and to the benefits we all gain when we work together.

 

It was a valuable experience, and we enjoyed meeting new people (we already have a meeting set up with another university). It has spurred us on to think about our own projects on belonging and digital experiences for all students. It also reminded us to be mindful that not all international students are as prepared for the UK digital education experience as we have assumed, and that we need to think explicitly about what we can do to smooth any challenges and aid their academic success.


Friday, 25 October 2024

Birkbeck's GenAI survey of academics

Rishi Shukla, one of Birkbeck's Digital Education Consultants, shares findings of a survey of academic staff regarding their students' use of Generative Artificial Intelligence.

Overview 

Birkbeck’s Digital Education team issued a short questionnaire on the teaching body’s experiences of monitoring and addressing use of generative artificial intelligence (GenAI) in coursework and exam submissions. The questionnaire was completed by 17 staff, with participation from all faculties: 

 

Graph 1: Distribution of survey respondents by Faculty 

 

Questions focused on two areas. The first covered informal methods used by staff for identifying likely misuse of GenAI; the second covered adaptations to teaching and learning methods and materials made by staff in response to the growth of GenAI tools. Findings related to each area are detailed in the following sections, which highlight commonly identified issues and include representative comments drawn directly from survey responses. In this summary report, we break the responses down question by question.

Spotting suspected GenAI use 

What words/phrases do you find prevalent in student submissions where AI use is suspected? 

A common observation amongst respondents was that there were not necessarily any specific words or phrases that acted as a ‘red flag’ in themselves. More typically, staff noted that aspects of the vocabulary, tone or authorial voice were likely to raise concerns. The list below summarises the aspects that most commonly raised concerns, alongside representative comments.

  • repetition: “Multiple repetitions of words or phrases seems a potential giveaway”
  • exaggeration: “Words that over emphasize or overstate the significance of a point made”
  • vagueness: “Students using AI use buzzwords of the discipline without really engaging with them”
  • assuredness: “It's the breezy/polished tone of someone who is very well practiced in churning out this style of prose that contrasts markedly with how our students typically speak and write”

A smaller proportion of respondents (including staff from all three faculties) also noted some potential trigger words or phrases, including: “In conclusion …”, “There are [X] main points …”, “Certainly …”, “overall”, “delve”, “some”, “others”.
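None of these observations amounts to a detection method in itself, but the kind of informal check staff described can be illustrated in code. The toy sketch below is our own illustration rather than anything a respondent reported using, and it is emphatically not a reliable detector: the phrase list and thresholds are assumptions chosen purely for demonstration.

```python
# Purely illustrative sketch of the informal lexical checks described above.
# The trigger-phrase list and thresholds are assumptions for demonstration
# only; counting phrases like this is NOT a reliable way to detect GenAI use.
import re
from collections import Counter

TRIGGER_PHRASES = ["in conclusion", "certainly", "overall", "delve"]

def informal_lexical_check(text: str, repetition_threshold: int = 5) -> dict:
    """Count trigger phrases and heavily repeated longer words in a submission."""
    lowered = text.lower()
    trigger_counts = {p: lowered.count(p) for p in TRIGGER_PHRASES if p in lowered}

    words = re.findall(r"[a-z']+", lowered)
    repeated_words = {
        word: count
        for word, count in Counter(words).items()
        if count >= repetition_threshold and len(word) > 6
    }
    return {"trigger_phrases": trigger_counts, "repeated_words": repeated_words}

# Example usage: run over a draft submission loaded from a text file.
# with open("submission.txt") as f:
#     print(informal_lexical_check(f.read()))
```

At most, output like this might prompt a closer human read of a submission; it says nothing definitive on its own.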

Please describe if there are any grammatical clues you often find in student submissions where AI use is suspected? 

The most widely shared observation was that the grammatical composition of individual sentences tended to be (overly) perfect. Multiple respondents also noted that such uncharacteristically precise use of English is often accompanied by either extreme verbosity (“use of suspiciously many adjectives”), excessive formality (“older words/phrases not in common usage”), weak argument structure (“making similar points with different wording in different sections of an essay”), or insufficient depth (“lacking in actual content beyond a few superficial facts”).

Please describe the nature of any visual clues of suspected AI usage in student submissions? 

Two visual markers were highlighted by several survey respondents.

The first sign was superfluous or inappropriate use of bullet points, or very short paragraphs tantamount to the same. It is worth noting the possibility that students may deliberately construct submissions in this way, as an attempt to circumvent detection tools. Bullet points are not currently classed as “qualifying text” by the Turnitin AI writing indicator, so these passages are not subject to the same level of automated scrutiny[1]. There may be some level of awareness amongst students about this current limitation of Turnitin. 

The second indicator was a reference list that appears only after the body of the work, with no corresponding citations inline within the text, or even single, ‘catch-all’ citations at the end of paragraphs pointing to a generic source (e.g. a textbook). One member of staff also highlighted mixed use of US/UK spellings as another visual warning (e.g. “analyse” and “analyze”, “dialogue” and “dialog”).

Have you found examples of fabricated references? How did you find out these references were fabricated? 

Eleven (65%) of the survey respondents had encountered fabricated references. A variety of scenarios for the extent or nature of the fabrication were noted by multiple respondents, including: 

  • citations that did not include full publication details (“repeated use of 'Smith (2023); Jones (2023); Brown (2023)' as in text citations but without a bibliography provided”) 

  • valid sources that were not pertinent to the topic (“some references are cited but have no relevance to the subject material”) 

  • entirely bogus constructs (“combinations of authors that have not published together, a plausible but non-existent title, a real journal and volume […] but page numbers that don't exist”) 

Respondents conveyed clearly that each type of fabrication indicated a likelihood that the submission had been produced with use of GenAI. Two respondents noted explicitly that use of fabricated references was the most reliable evidence of GenAI misuse. 

Staff reported using a combination of domain/subject knowledge for initial validation of reference integrity, followed by manual checking of any suspect or unfamiliar references through online search. One respondent noted that more recent iterations of GenAI tools have improved such that these tell-tale signs may no longer be so prevalent: 

“I was interested in the results the student claimed were in the paper so I tried to access it. The journal was legitimate and the year and volume numbers corresponded, but there was no such paper on the claimed page numbers.  I checked the alleged authors' websites (they were genuine academics) and saw no sign of this or any related paper, and googling the title produced no hits.  Note that this occurred in 2023 whereas in 2024 I am no longer seeing fabricated references on suspect papers (I have done several checks and found only legitimate books/articles.)” 
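Where a submission cites journal articles, part of that manual online search could in principle be scripted. The sketch below is a hedged illustration, not something any respondent described doing: it queries the public Crossref API to check whether a cited title actually exists, and the function name and exact-match rule are our own assumptions.

```python
# Hypothetical sketch: checking whether a cited article exists by querying the
# public Crossref API. The function name, parameters and exact-match rule are
# illustrative assumptions, not part of the survey or any Birkbeck workflow.
import requests

def reference_exists(title: str, author_surname: str) -> bool:
    """Return True if Crossref lists a work whose title matches `title` exactly."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": f"{author_surname} {title}", "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    wanted = title.lower().strip()
    return any(
        found.lower().strip() == wanted
        for item in items
        for found in item.get("title", [])
    )

# A fabricated citation of the kind respondents described would typically
# return False here, whereas a genuine article should be found.
print(reference_exists("A plausible but non-existent title", "Smith"))
```

Even then, a missing match would only be a prompt for the kind of manual follow-up the respondent describes, not proof of fabrication.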

Amending teaching and learning 

Have you amended your teaching content as a result of the increase in the use of AI? If so, how? 

Nine (53%) of the survey respondents had amended their teaching in response to an apparent increase in the use of GenAI. Staff shared a wide variety of measures they had taken in response to this trend: 

  • highlighting institutional policy and the consequences of GenAI misuse (“[…] incorporating warnings that AI usage is treated as plagiarism at Birkbeck”) 

  • demonstrating and explaining the flaws in content produced with GenAI (“I have shown the students examples of AI generated content […] and had them critique why the AI has done a bad job”) 

  • shifting the learning emphasis to modes that are more robust to GenAI misuse (“I have put greater emphasis on being able to draw original diagrams to explain scientific content”) 

  • adapting course design so that learning builds directly towards assessment (“[…] making the teaching of a topic even more tightly tied to the related assignment”) 


Have you amended your assessment methods as a result of the increase in the use of AI? If so, how? 

Nine (53%) of the survey respondents had amended their assessment methods in response to an apparent increase in the use of GenAI. Seven of these nine were from the set of respondents that had also amended their teaching content. Three main strategies were apparent for adapting assessments: 

  • pre-test AI output: “I think carefully about whether questions could be answered by AI, and sometimes check ChatGPT directly to see what sort of answer it provides” (a sketch of how this kind of check might be scripted appears below)
  • revise criteria: “… emphasize critical reflection on content rather than the content itself; I also ask students to refer specifically to course materials that are only available on Moodle …”
  • revise assessment: “I give 50% of the marks for text and 50% for hand-drawn annotated diagrams where I expect the handwritten annotation to explain the relevance of the diagram to the text, and the overall answer”
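The ‘pre-test’ strategy in particular lends itself to a small script. The sketch below assumes access to the OpenAI Python SDK and an API key; the model name and sample question are illustrative, and this is not a workflow described by the respondent or endorsed by the Digital Education team.

```python
# Hypothetical sketch of "pre-testing" an assessment question against a GenAI
# model, roughly as one respondent described doing by hand in ChatGPT.
# The model name and sample question are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

question = (
    "Critically evaluate the role of digital technologies in supporting "
    "international students' transition to UK higher education."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any available chat model would do
    messages=[{"role": "user", "content": question}],
)

# Reviewing the draft answer helps judge whether the question can be answered
# convincingly without engaging with module-specific materials.
print(response.choices[0].message.content)
```

If the generated answer looks passable, that is a signal to revise the question, for example by tying it more tightly to module-specific materials, as the other strategies above suggest.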

 

Have you allowed the use of AI in your assessment? If so, how? 

Three (18%) of the survey respondents had permitted some use of GenAI in their assessments. All three indicated that clear parameters had been laid out on how the technology could be used (i.e. only to generate seed information, to create a plan, or in place of a search engine for research purposes) and that output from GenAI tools could categorically not be used as the basis of a submission. 

Future work 

Future work in this area could include running the survey regularly to gather more responses as AI tools, and students' usage of them, evolve. Whilst we would have liked to look into disciplinary differences, this would have required larger numbers of respondents from across the faculties; it is something that could be investigated further should the survey run again. 

Another area this work will influence is the training and guidance offered by the Digital Education team. In the coming weeks the team will be releasing a SharePoint Connect site with information on ways to incorporate AI into teaching, learning and assessment, as well as ways to rework assessment across a module or programme. The team also offers training on interpreting the Turnitin AI writing indicator and will be adding and amending sessions and workshops on assessment in the coming months. 

Summary 

From a small sample of Birkbeck teaching staff, this report provides a snapshot of the different experiences and adaptations underway due to the growing influence of GenAI over student learning activity. 

The first section presents a number of common indicators that may give rise to concerns around misuse of GenAI. The second section highlights a range of initiatives that staff have already undertaken, individually, to steer teaching and assessment methods in directions that address these new challenges. 

These two sets of insights provide valuable food for thought, and inspiration, for other Birkbeck teaching staff on how they might respond to the growing availability and ever-changing capabilities of GenAI tools. 

Acknowledgments 

The data evaluation and write-up were done by Rishi Shukla; the survey and the review of the evaluation were conducted by Yahya Saleh from the Digital Education team. 

References 

[1] Turnitin (2024). AI writing detection in the classic report view. Available at: https://guides.turnitin.com/hc/en-us/articles/28457596598925-AI-writing-detection-in-the-classic-report-view#h_01J2XZ72FDQQG4C1SET16FXZJ4 (Accessed: 10 October 2024)