Mirjam Sophia Glessmer

Thinking about AI resistance

I keep coming back to Karen Costa’s question: “What if the critical #AI skill for our era is not how to use it, but how to resist it?” In the Poulidis et al. (2025) chess study, 40% of those who learned with AI and could press a button to get help said that in hypothetical future studies they would not want to have access to AI: despite being aware that pressing that button too often hindered their learning, they could not resist the temptation. Is that what it would mean to resist AI? For individuals to choose the option where it is not available at all? What might it mean for teachers and institutions?

The post “AI resistance: Who says no to AI and why?” is a summary of Şimşek and Yasar’s (2025) “From Rejection to Regulation: Mapping the Landscape of AI Resistance”, a very broad exploration of the phenomenon across lots of different domains (I have only read the part about higher education in the actual report; the rest here is based on the post). In the post, they write that “[t]he concept of ‘resistance’ in the context of AI encompasses a wide spectrum of actions and discourses that may be overt or subtle, organised or diffuse, individual or collective, oppositional or reformist.”

They report five main reasons for AI resistance:

  1. socio-economic concerns, like AI taking people’s jobs
  2. ethical issues, since AI systems are opaque and biased
  3. safety risks, when AI gets to make decisions
  4. threats to democracy and sovereignty, “including the use of AI for large-scale societal manipulation”
  5. environmental impact

But what is happening in higher education specifically? The report brings up concerns around academic integrity, negative impacts on learning, and socio-economic inequality if only some students can afford to use AI. The use of AI on student data can also introduce biases. Historically, the first reaction in many higher education institutions was a complete AI ban, or a ban of AI in assessment, but then there was a shift toward exploring AI in education a) to support learning and b) because it is perceived as inevitable that AI will be adopted everywhere.

That last point, the adoption of AI tools in teaching by higher education institutions, is described by Drimmer & Nygren (2025) as a “FOMO-driven frenzy without consulting their faculty, collecting empirical data on whether generative AI is pedagogically useful, or pausing to inquire about the long-term impact of AI on the students who have been entrusted to their care. Faculty are at best being coerced—and at worst being forced—to employ generative AI in their teaching, assessment, or advising”. They suggest four small acts of friction “to pave an exit ramp off the alienating highway of automated education” and to “make it hard for universities to charge ahead, pouring resources into a technology that none of us asked for“:

  1. Centering students in our teaching. Not AI, not preventing cheating with AI, just students. They write that “[d]efensive maneuvers, like in-class essay writing exclusively, are acts of deprivation. They deprive students of the opportunity to reflect and refine away from the pressures of the classroom clock.” Instead, we should keep writing assignments and not police how they were generated, but rather make them thought-provoking and give thoughtful feedback on the thoughts reported in them.
  2. Don’t optimize everything. Make space for things that are not on the curriculum and that are not credited: “Reading groups, lightning round presentations, unambitious programs of being in simple, un-CV-able conversations” to focus back on humanity and community, and make space for skepticism
  3. Take some things offline again: Share printouts, use leaflets, don’t make everything be online all the time (but — me thinks — do that in a way that doesn’t roll back all efforts on inclusion that have been made!)
  4. Ask questions — in person! — about what problem we are trying to fix and how we are trying to fix it

In that post, they recommend the website “AGAINST AI”, which I have browsed at random and found to be a treasure trove of stuff:

  • Assignment ideas like remaking parts of films that were discussed in (most likely a film?) class, showing similar framing, pacing, blocking, lighting, image/sound relationships, camera position and movement. So rather than writing about what students observe regarding all those facets, they need to apply them. Sounds like fun! Or, in another suggestion, annotating readings by hand, which I like (except the suggestion to submit photographic evidence of that, which falls a bit too much into policing for me). Or a book club as pedagogical tool.
  • Teachable readings, a collection of links to texts that critically explore different facets of AI tools, for example how they relate to cognition, the environment, oligarch agenda, theft (in the sense of intellectual property), and more
  • Memes!
  • Syllabus language to explain why AI is not to be used in a class and why the teacher won’t use it themselves

Definitely worth checking out!

Last thing I am reading on this today: a post by Rosen (2025) on “5 Strategies for AI-Resistant Assignments”. I am not super happy about the framing, since it is centering AI rather than the student or learning, but the suggested strategies seem to make the assignments themselves better, so the side effect of AI-resistance is not even the most important thing. In general, the suggestions also boil down to slowing down and making the process the point, not the product.

Overall, I think the above suggestions of introducing friction are really helpful. Not necessarily to get rid of AI completely, but to slow down AI adoption enough to be able to proceed with care and caution, keeping open the option to use AI where it makes sense for learning (if and when we have sufficient reason to believe that it does, and if and when we have weighed that against other concerns), but to not be rolled over by a wave that is driven by other interests, or just momentum. Adoption of AI in education is not a binary choice but a wicked problem, where we have to act on the information we have available, knowing that it is incomplete, and where we have to iteratively revisit and readjust choices we make.


Şimşek and Yasar (2025). From Rejection to Regulation: Mapping the Landscape of AI Resistance. Available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5287068



Adventures in Oceanography and Teaching © 2013-2026 by Mirjam Sophia Glessmer is licensed under CC BY-NC-ND 4.0

    Search "Adventures in Teaching and Oceanography"

    Archives