I’m currently thinking a lot, and talking with many students, about what builds trust between students and teachers: mostly, that teachers ask questions, listen, and respond. But then someone pointed out how much students appreciate the “human-like” interactions they have with ChatGPT, and Rachel sent me a study that shows exactly that.
In their study, Shoufan (2023) let students work with ChatGPT, then asked them to describe their experience in their own words, compiled the recurring themes from those responses into a questionnaire, and administered that questionnaire to the same students a couple of weeks later, after they had worked with ChatGPT some more in class. Two example student comments from the “human-like conversation” theme: “It feels like having a smart friend which we ask anything and it can answer”, and “[you] can ask follow up questions and it stays in the conversation”.
I can very much relate to those feelings. I routinely use ChatGPT to translate the English posts I write for our freediving club, Active Divers, into Swedish. Initially I wrote the posts in Swedish myself and asked someone to proofread them; later I translated them with ChatGPT and had people proofread the output a couple of times without them noticing anything odd. Now ChatGPT is the much faster solution, and one where I don’t feel like I am taking up someone else’s time.
But what does it mean when a GAI is really good at trust-building moves, and potentially better at them than some teachers? Obviously other processes are also at play in trusting teachers and models, but the fact that “human-like conversations” are appreciated when they happen with a model, while we know they don’t happen with all teachers, is something we should keep an eye on. Maybe not so much in a higher education setting, but generally: Who is answering kids’ questions that they might not want to ask anyone else?
Shoufan, A. (2023). Exploring Students’ Perceptions of ChatGPT: Thematic Analysis and Follow-Up Survey. IEEE Access.