Mirjam Sophia Glessmer

Reading on cheating and grading (Henslee et al., 2025; Swanson et al., 2025)

I have such a backlog of wave watching pics that I really need to blog more… Two articles today, the first one on “Students’ Perceptions of Self and Peers Predict Self-Reports of Cheating” by Henslee et al. (2025). I really enjoyed reading about Ellis & Murdoch (2024)’s educational integrity enforcement pyramid recently, but that is a theoretical, abstract framework. Henslee et al. (2025) now investigate what students themselves think constitutes cheating, how they perceive themselves and their peers in terms of academic integrity, and what influences whether they report cheating. For this, they use an online survey of 703 first-year students, asking about all of the above, plus about their knowledge of the consequences of cheating, and whether they had cheated themselves.

Looking at the questions, I am wondering whether it is possible to interpret the responses. For example, for “How common do you think cheating (CC) is at a university like Missouri University of Science and Technology? (0 = not common, never happens; 100 = extremely common, everyone cheats)”, would 100 mean that everyone cheats once during their time at that university, or that everybody cheats at every opportunity they get? And, as the authors discuss in the limitations section of their paper, are students maybe even thinking back to high school when giving their assessment?

But ignoring that, after a lot of calculations, they come to interesting conclusions:

  • The more students think that cheating is common amongst their peers, the more likely they are to cheat themselves.
  • Students who report having high ethics are less likely to report having cheated themselves (even when they think their peers cheat).
  • The more students know about the consequences of cheating, the less likely they are to cheat themselves, at least as long as they think that cheating is generally uncommon (see the sketch right after this list).
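
The third bullet describes an interaction effect: whether knowing the consequences protects against cheating depends on how common students believe cheating to be. Just to make that concrete (this is not the authors’ actual analysis, and all variable names, coefficients, and data below are invented), here is a minimal sketch of how such an interaction could be tested in a logistic regression on simulated survey data:

```python
# Minimal sketch, NOT the analysis from Henslee et al. (2025): all variable
# names, coefficients, and data below are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 703  # sample size matching the study; the responses are simulated

# Simulated survey responses on 0-100 scales
df = pd.DataFrame({
    "peer_cheating": rng.uniform(0, 100, n),       # how common cheating seems among peers
    "own_ethics": rng.uniform(0, 100, n),          # self-rated ethics
    "knows_consequences": rng.uniform(0, 100, n),  # knowledge of consequences
})

# Invented data-generating process that mimics the reported pattern:
# knowledge of consequences lowers the odds of cheating, but less so
# the more common students believe cheating to be (positive interaction).
logodds = (-1.0
           + 0.03 * df["peer_cheating"]
           - 0.02 * df["own_ethics"]
           - 0.02 * df["knows_consequences"]
           + 0.0003 * df["peer_cheating"] * df["knows_consequences"])
df["cheated"] = rng.binomial(1, 1 / (1 + np.exp(-logodds)))

# The '*' in the formula adds both main effects and their interaction term
model = smf.logit("cheated ~ own_ethics + peer_cheating * knows_consequences",
                  data=df).fit()
print(model.summary())
```

In a model like this, a significant positive interaction coefficient would mean exactly what the bullet says: the protective effect of knowing the consequences shrinks as perceived peer cheating grows.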

Henslee et al. (2025) discuss that self-reported cheating is a good predictor of future cheating (and that apparently engineering students are the ones who cheat the most!), so asking about cheating early on might be useful to assess the need for preventive interventions, or even just for clear information and guidelines (as the lowest intervention level of the educational integrity enforcement pyramid also suggests). The authors also discuss that cheating is less likely to be repeated when it causes negative emotions or consequences (like guilt, or being failed for cheating). Therefore, the authors recommend that misconduct should have consequences for students, and that students’ emotional reactions, and how those relate to how they see themselves, should be discussed with them. Those discussions might also surface the underlying reasons for cheating, which could then be directly addressed.

Another interesting suggestion is to use data from such surveys to inform students about how (un)common cheating actually is, thus possibly addressing the peer effect that cheating cannot be so bad when everybody else seems to be doing it.

In a nutshell: really interesting article, and as always: talk to your students!


Next article today: Swanson et al. (2025) on “Mastery-Based Testing in Linear Algebra: An Entry Point to Alternative Grading”. I’ve been interested in assessment for inclusion and distinctiveness recently, and this fits nicely here as a soft entry to alternative grading.

Summative, high-stakes exams at the end of the course are problematic for many reasons. In our own study we show that they cause higher test anxiety, but the literature summarised in this article mentions many more problems, for example:

  • if all the assessment is done individually at the end of the course, there is no real reason to collaborate throughout the course;
  • cheating becomes more likely because the exam is so high-stakes and there is only the one attempt;
  • there is no real feedback, because the grade comes at the very end of the learning process instead of being integrated into it.

As an alternative, Swanson et al. (2025) explore a setup where they keep the university-prescribed grading scale (instead of moving to pass/fail, like many alternative grading methods do), but learning can be demonstrated throughout the semester rather than in one final exam. There were several exam days on which learning from a specific period could be demonstrated (parts of which could also have been demonstrated earlier during that period), and on a retest day and a final exam day, all learning outcomes could still be demonstrated if they hadn’t been before. On each attempt, students could receive an “M” (mastery of the learning outcome), an “MC” (mastery with smaller problems remaining; through resubmission and improvements based on the feedback given, this can still be upgraded to a full “M”), or a “P” (progressing; students are expected to test on this outcome again, and can then move up the ladder). The final course grade is then a weighted average of course components (homework, exams, …), and the “exam” part of the grade depends on how many learning outcomes a student could demonstrate mastery of before the end of the semester.
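
Since the grading scale itself stays the same, the interesting bit is how mastery marks turn into a grade. The weights, the number of learning outcomes, and the function names below are purely hypothetical (the article describes the principle, not this exact formula), but here is a minimal sketch of how such a final grade could be computed:

```python
# Hypothetical sketch of the grading scheme described above; the weights,
# the number of outcomes, and the 0-100 scale are invented, not taken
# from Swanson et al. (2025).

WEIGHTS = {"homework": 0.3, "exam": 0.7}  # invented component weights

def exam_component(outcomes_mastered: int, outcomes_total: int) -> float:
    """Exam score as the percentage of learning outcomes with a final 'M'."""
    return 100 * outcomes_mastered / outcomes_total

def final_grade(homework: float, outcomes_mastered: int,
                outcomes_total: int = 16) -> float:
    """Weighted average of course components, all on a 0-100 scale."""
    exam = exam_component(outcomes_mastered, outcomes_total)
    return WEIGHTS["homework"] * homework + WEIGHTS["exam"] * exam

# A student with strong homework who mastered 14 of 16 outcomes:
print(final_grade(homework=92, outcomes_mastered=14))  # 0.3*92 + 0.7*87.5 = 88.85
```

The nice design property is that the exam component only ever counts final “M”s, so an early “P” or “MC” costs nothing as long as mastery is demonstrated by the end of the semester.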

In their study of 120 STEM students in linear algebra, Swanson et al. (2025) find that their implementation of mastery grading reduced test anxiety, but might have led to procrastination: students saw that they had the chance to repeat assessments and therefore did not feel the stress (and incentive) to study for the first time an assessment was offered. This could then lead to a pile-up of assessment tasks, which, especially in comparison with peers who were already done with those tasks, did cause some stress for some students. Students who did work with the system, however, emphasised that it helped them focus on the learning process over grades. There was some concern among students beforehand that ticking off parts of the assessment early might lead them to forget the content right away, but this was found not to be a problem, since the course content continually builds on itself and there is therefore a built-in repetition of content anyway.

A common complaint from students was that there was no partial credit, no easy points, just reattempts until mastery could be shown; but at the same time they acknowledged that this actually, in a good way, forced them to learn. As the authors say, this is “a feature, rather than a bug” of the system, as it encourages good learning habits like spaced repetition and retrieval practice. So this sounds like an interesting system that I will definitely consider more in the future!


And now to the wave watching you’ve all been waiting for: In the featured image at the top of this post, you see a beautiful Kelvin-Helmholtz instability: the breaking waves in the cloud at the bottom of the picture. I saw that and texted Kirsty to look out of the bus towards Denmark, and she would see it! Later, on Långabryggan, there were still shear instabilities in the clouds, but not such very nice and distinct ones.

But at least some Langmuir circulation in the water!


Henslee, A. M., Settles, L., Johnson, S. E., & Olbricht, G. R. (2025). Students’ Perceptions of Self and Peers Predict Self-Reports of Cheating. Teaching and Learning Inquiry, 13, 1-19. https://doi.org/10.20343/teachlearninqu.13.16

Swanson, R., Bingham, A., Sanders, M., & Moulton, C. (2025). Mastery-Based Testing in Linear Algebra: An Entry Point to Alternative Grading. Teaching and Learning Inquiry, 13, 1-20. https://doi.org/10.20343/teachlearninqu.13.10
