
Usually we like to think that self-regulation in feedback seeking and learning is a good thing: When students get stuck, they can ask for support that helps them overcome the difficulty and continue learning. This can become problematic, though, when students ask for hints too early and often, get used to that behaviour, and then never struggle with difficulties in a productive way that actually contributes to learning.
Poulidis et al. (2025) use a 12-week online chess training at home provided to 216 members of chess clubs (each with at least a year of training, and about half of them 18 or younger). Participants were invited to participate and reminded to go practice on the platform by their coaches, and there were financial incentives in place ($10 as base incentive and bonuses of up to $150).
On the platform, students were assigned to one of two conditions: Either they received automated tips in “critical moments” but couldn’t ask for help, or they received the same kind of tips but could also request additional help by clicking a button. And click the button they did! And that had consequences on learning: Performance gains in the self-regulated group were a lot lower than in the other group. In the group where students could not ask for hints but received targeted feedback at points in the process determined by the algorithm, students had to go through productive struggle, which contributed to their learning, whereas in the other group, based on survey results, “students knowingly over-relied on AI assistance, thereby diminishing their sense of accomplishment. Despite recognizing these drawbacks, they not only continued to rely on it but did so increasingly over time”. The size of that effect is moderated by student motivation (but not skill!): “motivation moderates the learning losses induced by self-regulated AI use, with more motivated students experiencing substantially smaller learning losses”. Students in the self-regulated group also played 24% fewer training games than students in the other group. When asked which training type they would prefer for hypothetical future training, the largest group of self-regulated students (40%) picked “no tips”!
But Poulidis et al. (2025) also compared learning gains against students who had not been part of the study and thus did not have access to the platform at all, and overall, students learned more on the platform, no matter which experimental condition they were in. So you can learn from AI! However, “[g]iving students control over when to receive assistance can substantially hinder learning; effective AI tutors should therefore target help to moments when it best supports learning rather than providing assistance on demand”. Otherwise, students easily fall into the “agency trap: when highly accurate solutions are easily accessible, even students who genuinely want to learn over-rely on AI assistance.”
Poulidis et al. (2025) conclude that “students recognized that overuse would harm their learning yet still relied heavily on AI assistance—awareness alone cannot prevent misuse”. Emphasis in that quote is mine because I find it to be so important: even though participants were motivated to learn and knew that by clicking the button, they were harming their own learning, they still could not resist the temptation, and 40% therefore (or at least that’s my interpretation) would wish to not even be led into temptation in hypothetical future training by completely removing the option to get feedback. Poulidis et al. (2025) close by writing “[a]s self-regulated AI use becomes increasingly ubiquitous in education and the workplace, preventing harm to long-term learning and the atrophy of human capabilities is a central design challenge”.
This is a really interesting article with super relevant results. Using chess clubs to find people who are really motivated to learn and improve (as they have shown by showing up regularly for a year already) is such a smart choice! Of course, results are not necessarily transferable to other settings, but it seems likely that what they find — that it is really difficult to resist an apparent AI shortcut, even if people know it is going to backfire on them — will hold similarly in situations where AI can so clearly give a correct answer that results in clearly marked “wins” (a win does not get much clearer than winning a game of chess). If learning weren’t as gamified as playing an actual game, and if AI did not provide a clear correct answer but rather things to consider, maybe it would contribute more to learning and function more like a thought a peer might bring up, which the learner would still have to work through themselves before accepting it as valid? Who knows, but I am sure there are going to be more cool studies soon!
P.S.: One thing that bugs me about this article, enough to mention it here: They make heavy use of the concept of the “zone of proximal development” (ZPD) as the zone where learners cannot learn by themselves but can learn if they get some kind of help, and they cite two sources for it: Vygotsky’s original text (where the ZPD is defined as “the distance between the actual developmental level as determined by independent problem solving, and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers”, and in my reading the social interactions are really important here; he later writes that “human learning presupposes a specific social nature and a process by which children grow into the intellectual life of those around them”), and Bjork & Bjork (2011) (which I then read and found to be a really helpful chapter, but which does not mention the ZPD or Vygotsky at all). So maybe calling the struggle zone by a different name than ZPD might have been an idea…
Poulidis, S., Bastani, H., & Bastani, O. (2025, October 1). Self-Regulated AI Use Hinders Long-Term Learning. The Wharton School Research Paper. Available at SSRN: https://ssrn.com/abstract=5604932 or http://dx.doi.org/10.2139/ssrn.5604932
Featured image from a recent dip, images below from the walk before…
…my favourite spot! …
…in the water! I need to be more consistent about bringing my camera into the water with me when I am dipping; even though the pictures might look exactly the same to everybody else, they don’t to me! Note how the colors are this really intense blue in the foreground and a bit more washed out a bit further away? The foreground is in the shade, and where it is more washed out, it’s in direct sunlight! …
…little capillary waves make me so happy! …
…it almost feels like summer! …
…as long as you only look at the water and the beach, not the bushes…
…the end!