
This is an article I find interesting both because of the method (mining Reddit for accounts of student experiences as a data source!) and because of the findings, which I’ll summarise below.
Since I am interested in what the existence of GenAI does to relationships between teachers and students, I am mostly interested in a section that explicitly deals with that (although the general reactions are definitely interesting, too!). Gorichanaz (2023) finds that several different kinds of trust are relevant. Trust between teachers and students works both ways: teachers who have developed trust in their students give them the benefit of the doubt (although being accused of cheating still damages that relationship), while students feel betrayed when a teacher they trust accuses them of cheating. After such an experience, and being aware of the damaged trust, students act a lot more cautiously and collect proof (like screen recordings) in case they are accused of cheating again. Another trust relationship is between teachers and AI detectors: students perceive teachers as trusting the technology more than them (despite AI detectors being unreliable, and despite some teachers even using ChatGPT itself as an AI detector, which it clearly cannot be). Relationships between students are affected, too, when, for example, a group working together on a project gets accused of using GenAI, and individuals feel that they themselves did not do anything wrong, so it must be the others who aren’t trustworthy.
Another relationship that suffers (even though it is not discussed so explicitly in the article) is trust in the institution (since, as Curzon-Hobson (2002) or Lewicka (2022) describe in detail, trust in teachers is not independent of the wider institutional context…). When students are accused of cheating with GenAI, as described in the section “legalistic stance”, they do not trust the process to find them innocent, but instead share strategies for dealing with it, whether “deny, deny, deny” or “escalate”. In another article, interestingly also based on Reddit posts, Wu et al. (2024) find that some teachers seem to think that “students accused of using ChatGPT to cheat are contesting the accusations with surprisingly sophisticated arguments (even if the arguments don’t make sense in context). It really feels like they are reading off a list of ‘Best Practices for Getting out of a ChatGPT Cheating Accusation.’” That sounds almost like a conspiracy theory, but on the other hand it is quite likely that students are well organised when faced with the threat of being wrongly accused of cheating…
So yeah, cheating allegations are harmful for trust all around. But then I also don’t have a good idea for how to deal with suspected cheating in a less harmful way, other than falling back on the educational integrity reinforcement pyramid and heavily investing in its foundation: making it very clear what the rules are and why… And also investing in building good relationships with students, but that’s always a good idea anyway.
Gorichanaz, T. (2023). Accused: How students respond to allegations of using ChatGPT on assessments. Learning: Research and Practice, 9(2), 183–196. https://doi.org/10.1080/23735082.2023.2254787
Wu, C., Wang, X., Carroll, J., & Rajtmajer, S. (2024, May). Reacting to generative AI: Insights from student and faculty discussions on Reddit. In Proceedings of the 16th ACM Web Science Conference (pp. 103-113).
Featured image: a fountain in Helsingborg, from a recent visit to LU’s Campus Helsingborg.