Mirjam Sophia Glessmer

Currently reading Morris & Maes (2026) on “Same Feedback, Different Source: How AI vs. Human Feedback Shapes Learner Engagement”

There is a “study out of MIT” that absolutely dominated my LinkedIn feed a couple of weeks ago (the source is typically given as “study out of MIT” or very similar, so I had to do some digging to find it! But maybe the clue should have been in the missing citation already…). Anyway, I found it and summarize it below, but first I want to point out that, like much of the AI material I read these days, it is a pre-print! That means that it is, at this stage, a pdf that has been uploaded to a server to be archived there (presumably while being submitted to a peer-reviewed journal). It is not peer-reviewed yet, and additionally, the authors refer to their study as a “pilot”.

So. In a preprint on arXiv, Morris & Maes (2026) investigate “Same Feedback, Different Source: How AI vs. Human Feedback Shapes Learner Engagement” and find that what students believe about who is providing their feedback, a human or AI, matters. So far so good.

The first thing I noticed while reading, though, is that the engagement with the literature is really weak. The number of citations is of course not an absolute measure of anything, but 11 references on an AI topic touching on teaching, trust, and relationships? I very quickly noticed that relevant articles are missing (e.g. Nazaretsky et al., 2026). To be fair, this is a very fast-moving research field, but this paper was archived in February this year and I wrote my summary of that other article in December last year, so it was possible to be aware of it before submission.

In the study, the “students” are 25 volunteers between 20 and 77 years old, of whom 8 report using AI often and 4 frequently. This means that a) any kind of statistical analysis (and results are already given in the abstract) is highly questionable, and b) the sample is not representative of a student group that I would expect to meet in my context. In what kind of context would we expect such a group? Can we really learn anything from this sample for any context?
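To make the sample-size concern concrete: with n = 25, even a simple proportion comes with a huge margin of error. A quick back-of-the-envelope sketch (the AI-use counts are from the study; the normal-approximation formula is just the standard textbook one, not anything the authors report):

```python
import math

n = 25         # participants in the study
users = 12     # 8 report using AI "often" + 4 "frequently"
p = users / n  # observed proportion, 0.48

# Standard error and 95% confidence interval half-width (normal approximation)
se = math.sqrt(p * (1 - p) / n)
half_width = 1.96 * se

print(f"proportion: {p:.2f} +/- {half_width:.2f}")
# prints "proportion: 0.48 +/- 0.20"
```

So the “true” share of frequent AI users compatible with this sample runs from roughly 28% to 68%, which is exactly why any statistics on 25 people should be read with great caution.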

In the study, students use an online interface in which they are supposed to learn coding, and where they receive feedback on both the code itself and creative elements like graphic design. All of this feedback is AI output. But half the students are told that the feedback comes from a human teaching assistant, while the other half knows that the feedback they receive is AI output (side note: I am trying really hard to change the way I speak and write about AI in a de-anthropomorphized way, but it is difficult!). To make it more believable that some of the feedback is provided by a human, there is a time delay built in during which students see messages like “your TA is online” or “your TA is reviewing your code”. Students then rate the feedback they get on perceived helpfulness, and then work on improving their code based on the feedback. How they engaged with the feedback was captured by measuring the time spent, the amount of code created, and how much the revised code differed from the initial code.
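The preprint does not spell out how “how much the code differed” is computed; one plausible way to operationalize such a code-change metric (this is my own illustration, not the authors’ method) is a similarity ratio between the initial and revised code:

```python
import difflib

def code_change(initial: str, revised: str) -> float:
    """Fraction of change between two code versions:
    0.0 = identical, approaching 1.0 = completely rewritten."""
    similarity = difflib.SequenceMatcher(None, initial, revised).ratio()
    return 1 - similarity

before = "for i in range(10):\n    print(i)\n"
after = "for i in range(10):\n    value = i * i\n    print(value)\n"

print(round(code_change(before, after), 2))
```

Any edit-distance-style measure would serve the same purpose; the point is simply that “engagement” here is proxied by observable changes to the artifact, not by self-report.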

Morris & Maes (2026) find that when students think that feedback comes from a teaching assistant, they spend a lot more time (about twice as much!) and effort (more iterations) on improving their code based on the feedback than if they think that the feedback is AI-generated. Interestingly, the perceived helpfulness of the feedback did not matter: “high ratings in the AI condition did not translate to greater effort: believing feedback is helpful is not the same as being motivated to engage deeply with it”. Morris & Maes (2026) conclude that “As AI feedback tools become common in education, understanding when and why learners engage differently with identical content based on believed source has implications for how we design, deploy, and frame these systems.”

This article is, of course, investigating a really relevant question, and in an interesting setup (basically the flipped approach to what Most & Clout (2026) did, who used a human-written text but told half the participants that it was AI-generated).

There are several reasons why I am upset at seeing this article referred to everywhere, and they mostly aren’t even about the article itself (which, as pointed out above, is actually a pre-print).

The first one is about how people use it on my LinkedIn feed and absolutely overstate what it says and what that means for the future of learning. That is, of course, completely out of the authors’ control. But I find it upsetting how uncritical people are when reading and spreading their take-aways from what they have read (and maybe I shouldn’t even have assumed that people read it themselves; maybe those were all AI summaries. Who knows?).

Another reason is that I am just generally upset with how fast everything is developing and how impossible it is to catch up. We need to make decisions now on how we relate to AI in our teaching, because our students need guidance from us now, not in five years when we have had time to figure it out. At the same time, we really just don’t know yet. So we jump on research that isn’t even peer-reviewed yet, looking for answers it cannot give us. We need to make decisions even though we don’t have sufficient information, because AI poses a wicked problem: there are no solutions, only steps that are more or less in the right direction based on what we know at any given time.

And I am upset that I am writing a blog post where I am mostly whining about the world rather than offering some kind of hopeful perspective or positive spin or idea forward. There is so much noise in the world; is what I am writing here signal or more noise? I have been sitting on this preprint and a draft post for weeks, because I really very much prefer writing my usual THIS IS AWESOME!!!1!1!!-type posts. That is how I want to show up in the world, and also what I want authors to read when they see I have discussed their articles. And again, I don’t think the problem here lies with the authors or the study. The problem is what I describe above: that people take a preprint describing a pilot study and interpret it in ways that it really cannot be interpreted, share summaries of the results with much more certainty than they offer, and use it as conclusive guidance when it should, at most, offer a general idea of a direction. It’s exhausting to live with uncertainty, and even more so to live with uncertainty AND to constantly remind people that they have to actually live with the uncertainty and not just jump to conclusions…


Morris, C., & Maes, P. (2026). Same Feedback, Different Source: How AI vs. Human Feedback Shapes Learner Engagement. arXiv preprint arXiv:2602.11311.


Featured image and pics below from an after-work dip that I just HAD to do in this weather! Cold water is the solution to almost everything, including forgetting that I am upset about AI stuff…


    Adventures in Oceanography and Teaching © 2013-2026 by Mirjam Sophia Glessmer is licensed under CC BY-NC-ND 4.0
