Students tutored one-on-one perform two standard deviations better than students who learn via traditional classroom instruction. Can LLMs achieve similar results when acting as tutors? Help-seeking behaviour is typically influenced by factors such as self-efficacy and task difficulty, the availability of tools and their perceived usefulness, and possibly by the attempt to balance independence and learning against reliance on a tool. But how does that play out when seeking help from LLMs? Kumar et al. (2024) investigate how students interact with LLMs, and how those interactions translate into learning.
To do that, Kumar et al. (2024) look at three things: the guidance received (what the LLM tells the student to do, under four different strategies: list-based suggestions, example-based interaction, prompting learners to reflect on their learning approach before using the LLM, and having learners attempt to solve a problem themselves before the LLM offers more guidance), the learner approach (do students first try by themselves, or do they turn to an LLM right away?), and the LLM’s response (the quality of the dialogue).
They find that in the absence of explicit guidance from the LLM, students ask more unrelated questions and engage in “attempts to break the chatbot” (which I know I have also tried, repeatedly, when Rachel shared different chatbots with me to test!). Kumar et al. (2024) explain this as follows: “When students are presented with examples, they might test the system’s capabilities beyond the task’s scope, reflecting a curiosity-driven exploration”. Overall, students became (slightly) more confident in LLMs as learning tools. In conclusion, LLMs can do all kinds of stuff, and it all depends on context: it’s complicated.
Kumar, H., Musabirov, I., Reza, M., Shi, J., Wang, X., Williams, J. J., … & Liut, M. (2024). Guiding Students in Using LLMs in Supported Learning Environments: Effects on Interaction Dynamics, Learner Performance, Confidence, and Trust. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW2), 1-30.
And some wave watching pics from a recent dip! :-)