Mirjam Sophia Glessmer

Catching up on reading about GenAI, academic integrity, and trust

Very pleasant working day in Kristianstad yesterday, so today I am doing a reading sprint on academic misconduct, trust, and GenAI! Here we go with lots of short summaries.

A canary alive: What cheating reveals about morality and its development (Dahl & Waltzer, 2024)

This is really interesting in the context of the fear that students will cheat more now that GenAI is available: What makes students cheat? (see also the Adnin et al. (2025) work on undisclosed use of GenAI)

Dahl & Waltzer (2024) describe that people commonly assume cheating happens because people either don’t care about honesty and integrity, or because they manage to convince themselves that they have good excuses for it. But they argue that cheating does not actually happen with morality “switched off”; rather, it is itself an engagement with morality.

Dahl & Waltzer (2024) find that cheating happens when there are

  1. Misperceptions of what constitutes cheating, for example how dissimilar texts have to be to count as paraphrased rather than plagiarised, working collaboratively in different contexts, patch writing, taking notes without also noting down the source and then working with those notes later, … This is especially tricky when transitioning between contexts, e.g. between countries. If teachers aren’t very clear about what is and isn’t allowed, “[s]tudents risk either cheating unintentionally or refraining from using permitted resources out of a misguided fear of cheating”.
  2. Evaluations that cheating or lying is okay under exceptional circumstances, even though students know it is against the rules: “some situations pit concerns with honesty and integrity against concerns with academic success, financial needs, social obligations, and even learning”.
  3. Prioritization of non-integrity concerns during conflict, mostly when there are really high stakes (like academic survival) and no other solution seems feasible.

I read this study because the next one below cites it (together with another one from 2017 that I did not look into, because it is clearly pre-AI), claiming that “more students will cheat because AI technology makes it easier to do so, even when students do not trust the technology”. But that claim is not actually made in this article!

Communicating academic honesty: teacher messages and student perceptions about generative AI (Petricini et al., 2025)

Many universities, including mine, rely on teachers to create, communicate, and enforce course-specific GenAI policies. Petricini et al. (2025) explore where those policies come from and how the conversations with students happen.

They find that 39% of respondents use rule-based approaches, “adhering to strict institutional policies, often relying on punitive measures, like academic sanctions and AI detection tools to enforce compliance”, 25% use integrity-focussed approaches, which “emphasize fostering students’ ethical decision-making, prioritizing conversations and trust-building over punishment”, and for 20%, a collaborative approach emerged that was new to the authors: a “chance to engage students in ethical decision-making and course policy development, rather than strictly enforcing rules”. They also find that “faculty are often overwhelmed by enforcement, leading to disengagement”.

Petricini et al. (2025) provide some practical recommendations (all of which teachers need to be trained for!):

  1. Shift from punitive to educational approaches: Clear guidelines that encourage an integrity-based approach rather than enforcing rules
  2. Teachers need to understand GenAI and how to integrate it ethically in coursework
  3. Transparent and fair academic policies! Uniform processes, clear guidelines, explicit guidance for students, and better communication about AI use between teachers
  4. Trust-based, dialogic communication with students instead of accusations

I think it is again very interesting here to think about the educational integrity enforcement pyramid and how different approaches might be appropriate in different contexts…

Disruptive tensions and emerging practices: an exploratory inquiry into student perspectives on generative Artificial Intelligence in a problem-based learning environment (Otto et al., 2025)

Otto et al. (2025) find a high adoption rate and generally positive attitudes towards GenAI among students, but identify three tensions around students’ use of GenAI in PBL projects:

  1. Using GenAI can both enable and obstruct learning and working on the project: Students use GenAI to reduce complexity, take over routine tasks, and for many other purposes, but they are also very aware that this might hinder their own learning. However, the more pressure they feel, the more they use GenAI.
  2. Using GenAI can lead to tensions and distrust between students in a PBL group, when some feel that others use GenAI too much and therefore contribute too little of their own thought and effort, or when there is disagreement on which tasks can be delegated to GenAI. This is where a group contract might be helpful…
  3. Tension with the rules, and fear of unintended plagiarism. This might, as Otto et al. (2025) suggest, also be partly due to how universities first reacted to the availability of GenAI: the initial ban might have both conveyed a distrust in students and led to a very cautious approach towards GenAI.

I find this last point especially relevant: it is not only about what a teacher communicates about GenAI in their specific course; students also have a history of learning with or without GenAI, and therefore deeply rooted beliefs about what is and isn’t ok. So we also need to address this!

Identifying Discourses of Generative AI in Higher Education (Gonsalves & Acar, 2025)

From a discourse analysis with seven teachers, Gonsalves & Acar (2025) find eight key themes (haha, 7 teachers, 8 opinions :-D):

  1. AI as a double-edged sword: potential for transformation for good (to personalise teaching and make it more inclusive) or bad (educators “feeling left behind in their own expertise”, both when students pick up the technology much more quickly, and when the impact on pedagogy is very unclear)
  2. AI as the elephant in the room: There is a “reluctance to openly discuss AI, especially in contexts that may impact academic integrity”, between students, between teachers, and across both groups
  3. AI as an unintended burden: Checking for AI usage in student work increases teachers’ workload, as does having to rethink teaching and assessment methods in the presence of AI
  4. AI as a pedagogical paradigm shift: teaching needs to evolve, away from a focus on the final product and towards a focus on the process, including ethical use of AI tools
  5. AI as a new instrument: Like learning a new musical instrument, playing GenAI also needs a lot of training and practice, and developing a personal “style” of playing
  6. AI as an escalator: A shortcut for teachers to generate teaching materials faster but with less thinking involved, and for students to skip the hard work of learning the necessary basics themselves. Using a forklift to lift the weights at the gym doesn’t help build muscle…
  7. AI as a digital compass: “generative AI can be likened to a compass which provides direction but doesn’t dictate the journey or the destination”
  8. AI as a digital tutor: The belief that AI can provide personalised feedback similar to what a human tutor could do; but it can only complement, not replace the human component

This is really helpful, though. I like to ponder how analogies shape our thinking (see also the analogies in Fox’s (1983) personal theories of teaching)!

But following up on that last point of the digital tutor:

Cut the bullshit: why GenAI systems are neither collaborators nor tutors (Flenady & Sparrow, 2025)

Flenady & Sparrow (2025) argue that “GenAI systems are constitutively epistemically irresponsible” due to their hallucinations, and that “it will be increasingly difficult for students to develop the skills and motivation to hold GenAI outputs to disciplinary standards” when human teachers are replaced with GenAI. Also, it is naive and even “pedagogically perverse” to expect students to take on responsibility for verifying GenAI outputs when they have never had the chance to learn how to.

They tell the story of Plato, who was worried about a new technology of his time: writing. Once something is written down, it becomes much harder to identify who is responsible for it, and for the effects the written word might have in the world. The first concern has been addressed in today’s academia by rules around authorship and attribution, but, Flenady and Sparrow (2025) argue, the fact that we are still so hung up on those rules shows that the concern is still valid. And so is the second one, about the effects.

“Calling GenAI a ‘collaborator’ disregards accepted understandings of that term: to collaborate is to work together with another agent who is responsible for their contributions, who is capable of holding themselves and others to collective standards governing a shared project”, and it “drains actual student collaboration of its meaning and pedagogical significance”. And calling it a tutor “reverses the common assumption that pedagogical responsibilities flow, at least initially, from teacher to student”, with the teacher modelling both responsibility and responsiveness to feedback. “As students, one important way we come to a sense of responsibility to disciplinary standards is through relationships with real representatives of those standards, that is, by addressing ourselves in speech and writing to teachers who serve as models of those standards for us. We come to identify with and invest in a discipline because representatives of that discipline have invested in us in some meaningful way”. In other words, we build a scholarly community.

Love this article! They cite the one I am summarising below, and how can you not want to read something with that title?

Hey ChatGPT, give me a title for a paper about degree apathy and student use of AI for assignment writing (Playfoot et al., 2023)

Playfoot et al. (2023) suggest that degree apathy (i.e. starting a degree because there were no alternatives or to avoid having to get a job, without feeling engagement or importance or seeing how it is useful for the future) might be a key risk factor for academic misconduct. Generally, it is the only factor (out of the personality facets examined) that predicts use of GenAI tools; and willingness to use is greater when there is a low risk of getting caught and it wouldn’t be severely punished anyway. I think that’s also a really important consideration: if you don’t care and have nothing to lose, why should you care?


Dahl, A., & Waltzer, T. (2024). A canary alive: What cheating reveals about morality and its development. Human Development, 68(1), 6-25.

Flenady, G., & Sparrow, R. (2025). Cut the bullshit: why GenAI systems are neither collaborators nor tutors. Teaching in Higher Education. https://doi.org/10.1080/13562517.2025.2497263

Gonsalves, C., & Acar, O. A. (2025). Identifying Discourses of Generative AI in Higher Education. In K. Pulk, & R. Koris (Eds.), Generative AI in Higher Education: The Good, the Bad, and the Ugly (pp. 28-44). Edward Elgar. https://doi.org/10.4337/9781035326020.00012

Otto, S., Ejsing-Duun, S., & Lindsay, E. (2025). Disruptive tensions and emerging practices: an exploratory inquiry into student perspectives on generative Artificial Intelligence in a problem-based learning environment. Education and Information Technologies, 1-30.

Petricini, T., Zipf, S., & Wu, C. (2025). Communicating academic honesty: teacher messages and student perceptions about generative AI. Frontiers in Communication, 10, 1544430.

Playfoot, D., Quigley, M., & Thomas, A. G. (2023). Hey ChatGPT, give me a title for a paper about degree apathy and student use of AI for assignment writing. PsyArXiv. https://doi.org/10.31234/osf.io/bxs6m


How cool are those wave rings radiating from where a branch comes out of the water, and the reflections of the wind waves bouncing away from it?

And so nice to see the sheltered areas in the lee of those lily pads, and how waves still invade them after a bit!

Weather was, let’s say, interesting!

And lastly, some duck rings and duck wakes!

Successful wave-watching day and great conversations, thanks, Kirsty!
