Mirjam Sophia Glessmer

More reading about GenAI

My notes on GenAI stuff are getting shorter and shorter, because I am reading the articles with a specific application in mind and only writing down what seems relevant to answering that question. So don’t use these notes-to-self as summaries that tell you all you need to know about those articles; that’s not what they are meant for!

First, Gerlich (2024) is “exploring motivators for trust in the dichotomy of human—AI trust dynamics” and finds that there are basically two ways people relate to AI vs other people: either they trust AI because it is perceived as impartial and accurate (like normal computers) while people are perceived as self-interested and dishonest (with a particular distrust of the media), or, as is the case for another group of people, they are very confident in human judgement but distrust AI.

What I find really concerning is that Gerlich (2024) finds that “[p]articipants expressed a strong belief that technology operates without self-interest—unlike humans, who may be biased and therefore less trustworthy”. And “[t]his study also highlights a significant shift among the population towards technology as a more trustworthy alternative, given their disenchantment with traditional human-driven information sources, except for close personal relationships like friends and family. AI is favoured for its ability to circumvent human arbitrariness and self-centeredness, offering fair and impartial decision making”. How scary is that?

Concerns that people voice are mostly about job security and, to a lesser extent, about the processing of their data, but not about what those models are (not) trained to do, what interests are behind them, the benevolence of their creators, etc. So while I think this is way too narrow, Gerlich (2024) writes that “concerns about job security and data privacy associated with AI adoption also call for robust ethical frameworks and regulatory measures to ensure that the integration of AI technologies protects individuals’ interests and promotes a fair and equitable society”.

In today’s second article, Gerlich (2025) investigates “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking”. They find that concerns around cognitive offloading are valid, and that “while AI can be a valuable tool for enhancing certain aspects of learning, it is crucial to maintain a balanced approach that promotes cognitive engagement and critical analysis”. They refer to the classic works of Deslauriers and Freeman for suggestions on active engagement and suggest that “educators and policymakers should promote balanced AI integration in educational settings, ensuring that AI tools complement rather than replace cognitive tasks”.

One way to use AI tools for learning (rather than for getting answers) is investigated by Yang et al. (2025) in “Asking questions matters: comparing the effect of learning through dialogue with peers, ChatGPT, and intelligent tutoring system”. Their study is motivated by the lack of guidance and evidence for how to use AI for learning, which leads to students adopting AI in unsystematic ways. They compare three conditions: discussions with peers, discussions with AI, and working in an intelligent tutoring system (ITS) where students only had to answer questions. They find that “The Peer and GPT groups, where students could ask questions, demonstrated significantly higher post-dialogue performance in both absolute score and conditional correct rate compared to the ITS group, which only answered questions. The number of questions asked and the level of trust in the dialogue were positively correlated with individual differences in performance. Our findings highlight the importance of asking questions over asking many when learning through dialogue, laying the groundwork for designing more effective interactive learning systems”.

Last article for now: Peter et al. (2025) on “The benefits and dangers of anthropomorphic conversational agents”. Interacting with AI can, as we saw above, be very beneficial for learning, similar to talking with a peer. But “[t]he development of anthropomorphic agents comes with the promise to make computing accessible in ways not seen before, by enabling interaction with computers as if with a fellow human. This promise needs to be weighed against the obvious danger that any such impersonation of human likeness also opens the door for highly effective manipulation at scale.” So they ask: “should we lean into the human-like abilities, or should we aim to dehumanize LLM-based systems, given concerns over anthropomorphic seduction? When users cannot tell the difference between human interlocutors and AI systems, threats emerge of deception, manipulation, and disinformation at scale”. They ask this not only in the context of AI as a learning partner, but also looking at “companion apps”, where “systems that heavily optimize for anthropomorphic quality raise serious questions regarding responsibilities given issues associated with anthropomorphic seduction”. They conclude, not surprisingly, that it depends on context, and that a “high degree of humanness” can be very helpful for role-playing in training, for example for service jobs or in medicine: “In such contexts it would be made clear that interaction takes place with a nonhuman entity, while the interaction itself would benefit from high levels of human-like communicative ability and natural conversation flow.” Their examples of beneficial cases also include AI “as tutors and coaches in education”, but there I would like to stress that we should be careful: if we think of AI as a tutor, there can be no actual relationship between tutor and student, and no scholarly community to socialise the student into.


Gerlich, M. (2024). Exploring motivators for trust in the dichotomy of human—AI trust dynamics. Social Sciences, 13(5), 251.

Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6.

Peter, S., Riemer, K., & West, J. D. (2025). The benefits and dangers of anthropomorphic conversational agents. Proceedings of the National Academy of Sciences, 122(22), e2415898122.

Yang, J. W., Choe, K., Lim, J., & Park, J. (2025). Asking questions matters: comparing the effect of learning through dialogue with peers, ChatGPT, and intelligent tutoring system. Interactive Learning Environments, 1-17.
