The title of the editorial by Goldshtein et al. (2025), “The role of learner trust in generative artificially intelligent learning environments”, sounded super intriguing. The question of trust in GenAI is so relevant: do students trust GenAI, to do what, and why? And what does that do to their relationships with their teachers?
In the editorial, Goldshtein et al. (2025) postulate that “Engineering educators should facilitate students developing a critical lens toward generative AI, rather than unilaterally accepting AI outputs in various tasks without first validating those outputs”, which I very much agree with. They advise that “Engineering educators should also avoid promoting the surprising capabilities of AI performance, especially in comparison to human performance”, so that students don’t develop unwarranted trust in the tools. In earlier studies from the same research group, the authors found that trust in a technical tool is influenced, for example, by the quality of a virtual character’s voice (comparing human speech with different text-to-speech tools), even though the type of voice did not influence learning outcomes. They also found that technical systems are perceived as more trustworthy when they are responsive.
What I also find interesting is that the authors write that they can distinguish between trust and credibility (the latter meaning, according to the Oxford Dictionary, “the quality of being trusted and believed in”) and that the two are independent of each other, although I do not understand how they are defined or measured. Matthews et al. (2020), cited in Goldshtein et al. (2025) in relation to credibility, distinguish between two different mental models, which conceptualise technical systems as either “tools” or “teammates”. And might it be the teammate model, and an over-trust in its capabilities, that teachers would trigger in students if they promoted the surprising capabilities of AI performance?
Yet, Goldshtein et al. (2025) call for more research: “As generative AI use rises in educational settings, particularly in engineering education, we see a dire need for research to understand how we can design and implement these tools in ways that support relational trust with students, teachers, and other involved groups. Transdisciplinary work combining insights and practices from researchers of education, trust, human-computer interaction, computer science, and engineering will be critical for effectively using generative AI in engineering education with accessibility and equity in mind, while addressing various student and workforce needs”. And this rubs me the wrong way somehow. Of course it is important to understand what makes people trust these tools. But surely the goal should especially be to make sure that they are designed in a way that induces neither over- nor under-trust? Shouldn’t we then first figure out how much trust in these tools is actually warranted, and try to communicate that adequately, before designing them in ways that support trust?
Featured image: The fountain outside my office
Goldshtein, M., Schroeder, N. L., & Chiou, E. K. (2025). The role of learner trust in generative artificially intelligent learning environments. Journal of Engineering Education, 114(2), e70000.
Matthews, G., Lin, J., Panganiban, A. R., & Long, M. D. (2020). Individual differences in trust in autonomous robots: Implications for transparency. IEEE Transactions on Human-Machine Systems, 50(3), 234–244. https://doi.org/10.1109/THMS.2019.2947592