Very pleasant working day in Kristianstad yesterday, so today I am doing a sprint, reading stuff about academic misconduct, trust, and GenAI! Here we go with lots of short summaries.
This is really interesting in the context of the fear that students will cheat more now that GenAI is available: What makes students cheat? (See also the Adnin et al. (2025) work on undisclosed use of GenAI.)
Dahl & Waltzer (2024) describe how people commonly assume that those who cheat either don’t care about honesty and integrity, or manage to convince themselves that they have good excuses for it. But they argue that cheating does not actually happen with morality “switched off”; rather, it is an engagement with morality.
Dahl & Waltzer (2024) find that cheating happens when there are
I read this study because the next one below cites it (together with another one from 2017 that I did not look into, because it is clearly pre-AI) to support the claim that “more students will cheat because AI technology makes it easier to do so, even when students do not trust the technology”. But Dahl & Waltzer (2024) do not actually make that claim!
Many universities, including mine, rely on teachers to create, communicate, and enforce course-specific GenAI policies. Petricini et al. (2025) explore where those policies come from and how the conversations with students happen.
They find that 39% of respondents use rule-based approaches, “adhering to strict institutional policies, often relying on punitive measures, like academic sanctions and AI detection tools to enforce compliance”; 25% use integrity-focussed approaches, which “emphasize fostering students’ ethical decision-making, prioritizing conversations and trust-building over punishment”; and for 20%, a collaborative approach that was new to the authors emerged: a “chance to engage students in ethical decision-making and course policy development, rather than strictly enforcing rules”. They also find that “faculty are often overwhelmed by enforcement, leading to disengagement”.
Petricini et al. (2025) provide some practical recommendations (all of which teachers need to be trained for!):
Here, I think, it is again very interesting to consider the educational integrity enforcement pyramid and how different approaches might be appropriate in different contexts…
Otto et al. (2025) find a high adoption rate and generally positive attitudes towards GenAI among students, but identify three tensions around students’ use of GenAI in problem-based learning (PBL) projects.
I find this last point especially relevant: it is not only about what a teacher communicates about GenAI in their specific course; students also bring a history of learning with or without GenAI, and therefore deeply rooted beliefs about what is and isn’t ok. So we need to address this, too!
From a discourse analysis with seven teachers, Gonsalves & Acar (2025) find eight key themes (haha, 7 teachers, 8 opinions :-D):
This is really helpful, though. I like to ponder how analogies shape our thinking (see also the analogies in Fox’s (1983) personal theories of teaching)!
But following up on that last point about the digital tutor:
Flenady & Sparrow (2025) argue that “GenAI systems are constitutively epistemically irresponsible” due to their hallucinations, and that “it will be increasingly difficult for students to develop the skills and motivation to hold GenAI outputs to disciplinary standards” when human teachers are replaced with GenAI. Also, it is naive and even “pedagogically perverse” to expect students to take on responsibility for verifying GenAI outputs when they have never had the chance to learn how to.
They tell the story of Plato, who was worried about a new technology of his time: writing. Once something is written down, it becomes much harder to identify who is responsible for it, and for the effects the written word might have in the world. The first concern has been addressed in today’s academia by rules around authorship and attribution, but, as Flenady and Sparrow (2025) argue, the fact that we are still so hung up on those rules shows that the concern remains valid. And so does the second one, about the effects.
“Calling GenAI a ‘collaborator’ disregards accepted understandings of that term: to collaborate is to work together with another agent who is responsible for their contributions, who is capable of holding themselves and others to collective standards governing a shared project”, and it “drains actual student collaboration of its meaning and pedagogical significance”. And calling it a tutor “reverses the common assumption that pedagogical responsibilities flow, at least initially, from teacher to student”, with the teacher modelling both responsibility and responsiveness to feedback. “As students, one important way we come to a sense of responsibility to disciplinary standards is through relationships with real representatives of those standards, that is, by addressing ourselves in speech and writing to teachers who serve as models of those standards for us. We come to identify with and invest in a discipline because representatives of that discipline have invested in us in some meaningful way”. In other words, we build a scholarly community.
Love this article! They cite the one I am summarising below, and how can you not want to read something with that title?
Playfoot et al. (2023) suggest that degree apathy (i.e., starting a degree because there were no alternatives or to avoid having to get a job, without feeling engagement or importance or seeing how it would be useful for the future) might be a key risk factor for academic misconduct. It is the only one of the personality facets they examined that predicts use of GenAI tools; and willingness to use them is greater when there is a low risk of getting caught and it would not be severely punished anyway. I think that’s also a really important consideration: if you don’t care and have nothing to lose, why should you care?
Dahl, A., & Waltzer, T. (2024). A canary alive: What cheating reveals about morality and its development. Human Development, 68(1), 6-25.
Flenady, G., & Sparrow, R. (2025). Cut the bullshit: Why GenAI systems are neither collaborators nor tutors. Teaching in Higher Education. https://doi.org/10.1080/13562517.2025.2497263
Gonsalves, C., & Acar, O. A. (2025). Identifying Discourses of Generative AI in Higher Education. In K. Pulk, & R. Koris (Eds.), Generative AI in Higher Education: The Good, the Bad, and the Ugly (pp. 28-44). Edward Elgar. https://doi.org/10.4337/9781035326020.00012
Otto, S., Ejsing-Duun, S., & Lindsay, E. (2025). Disruptive tensions and emerging practices: an exploratory inquiry into student perspectives on generative Artificial Intelligence in a problem-based learning environment. Education and Information Technologies, 1-30.
Petricini, T., Zipf, S., & Wu, C. (2025). AI: Communicating academic honesty: teacher messages and student perceptions about generative AI. Frontiers in Communication, 10, 1544430.
Playfoot, D., Quigley, M., & Thomas, A. G. (2023). Hey ChatGPT, give me a title for a paper about degree apathy and student use of AI for assignment writing. PsyArXiv. https://doi.org/10.31234/osf.io/bxs6m
How cool are those wave rings radiating from where a branch comes out of the water, and the reflections of the wind waves bouncing away from it?
And so nice to see the sheltered areas in the lee of those lily pads, and how waves still invade them after a bit!
The weather was, let’s say, interesting.
And lastly, some duck rings and duck wakes!
Successful wave-watching day and great conversations, thanks, Kirsty!