The rapid integration of AI into academic life presents both exciting opportunities and significant challenges. For universities and colleges to harness AI's potential while safeguarding academic integrity and preparing individuals for the future, effective, targeted training for both staff and students is essential. During the most recent meeting of the BLE's Assessment Special Interest Group, we discussed and planned potential topics for training around AI and assessment. Here's a look at the suggestions made about how institutions might approach this critical task.
A central recommendation from the workshops was to adopt a broad approach covering the appropriate use of AI for both staff and students. For staff, training should focus on the tools and their impact; for students, it should place specific emphasis on integrity and ethical use. This is crucial not only for academic work but also for preparing students for the professional world. Acknowledging and discussing the risks of overreliance on AI, such as poorer academic performance, loss of personal voice, and the assumption that these tools will always be available in the workplace, is vital for both groups.
Key Training Areas:
Training needs to be comprehensive and address multiple facets of AI use:
Academic Integrity and Ethical Considerations: This was a recurring core theme across every group in the session. Training should directly address academic integrity and AI, exploring the necessity of changing assessments in light of AI capabilities. Practical sessions on the pitfalls of AI, assessment redesign, and academic integrity can help build a core understanding. Using case studies to show the inherent bias in AI models was suggested as a way to highlight ethical concerns tangibly. Furthermore, policy should be made more explicit in academic manuals, linking clear guidance to assessment design and explaining the rationale for integrity requirements tied to degree-awarding powers.
Ethics and Critical Literacy: Beyond simply using AI, we discussed the need for individuals to understand its foundations and limitations. Training can delve into the ethics and geopolitics of AI, covering topics such as who owns the tools, the origin and nature of training data, and the limitations of the models. Exploring why AI produces certain outputs, including hallucinations, is key. There should be a focus on criticality for everyone, including whether staff themselves apply critical skills to AI output. Information literacy, encompassing AI, was also highlighted as a necessary skill for both staff and students.
Real-World Relevance and Equity: Connecting AI training to careers is important for both staff and students, addressing common concerns about AI's impact on their futures. Students, in particular, may be more concerned about career implications than about immediate assessment challenges. The unequal playing field, in which students who pay for subscriptions may access better models, and the question of whether AI use disadvantages those who don't use it, are also crucial ethical points to cover. Staff need guidance on how to make AI use as inclusive and ethical as possible.
Suggested Formats and Audience Considerations:
In the workshop we also discussed how training can be delivered in various ways to cater to different needs and schedules:
Diverse Formats: Options include webinars, face-to-face workshops, and asynchronous resources such as self-paced courses, reference guides, FAQs, and blogs. Discipline-specific sessions, organised by faculty or department, are highly valuable because they can tailor content, such as building ethics and critical AI literacy into specific fields. Encouraging case studies and establishing communities of practice can also foster shared learning, potentially including collaboration between staff and students on AI use.
Tailored Audiences: Training should target staff, students, and sometimes both groups together. Staff may feel less confident in their AI skills and need a staff-only space in which to ask questions and express worries. They also need dedicated training time; one suggestion was that this might be added to workload models in the short term. Asynchronous resources, conversely, might offer more opportunities for shared training between staff and students. Training should also reach those not yet using AI to any great extent, to build their confidence.
Effectively training academic communities on ethical AI use requires a multi-faceted approach that addresses integrity, critical thinking, practical application, and real-world concerns, delivered through formats that suit the distinct needs and comfort levels of both staff and students.
A Note About the Writing of This Blog Post
This process was a true collaboration. The ideas and creative input in this post all came from participants in the Assessment SIG. The post was then drafted by Kirsty Branch and refined with the assistance of AI (in this case, NotebookLM) to help clarify and bring together everyone's contributions.