
The other day I wrote about a paper on “sycophantic AI” and its implications for human interactions, and I am reminded of it daily when I hear kids on the bus mention how they talk to “chattis” (which seems to be a common nickname for ChatGPT around here) about all kinds of topics (scary, especially having read that “sycophantic AI” paper). But even in professional contexts, we (me definitely very much included) talk about AI as if it were some kind of person sitting in a box. Inie et al. (2026) investigate the type of language people use to write about AI in scientific texts, news articles, and company blog posts.
They create a taxonomy with eight categories of anthropomorphization:
Inie et al. (2026) report that many studies describe anthropomorphization as a deliberate strategy to increase user trust in “probabilistic automation systems”, and that “A recent article found that anthropomorphism in the sense of how human-like an “AI-agent” appears is actually a greater predictor of acceptance and adoption of the technology than trust (Gefen, et al., 2025)” (I have not yet followed up on that reference). Inie et al. (2026) describe anthropomorphization as risky in four different ways:
So what can we do? Inie et al. (2026) provide some suggestions for de-anthropomorphized language, organized by the categories they presented above:
Is it really worth the effort to try and change how we (and everybody else) talk about AI, especially when there are established terms (like “AI” itself)? Inie et al. (2026) argue that yes, it is, even in scientific articles, since “a better term will make the actual claims of the paper more realistic and clearer”. And I am (as most of the time) thinking about learning and teaching and academic development. I think changing language is super relevant there, too, to mitigate the risks Inie et al. (2026) describe (see above). And after all, teaching is about telling people what they don’t know to ask for yet (that’s my interpretation of Biesta)!
That said, I will also start talking about “AI products” instead of “AI tools”, to make the point that these are products we are being sold to make a company a profit, not tools we are given for the good of humanity (or the world in general).
Inie, N., Zukerman, P., & Bender, E. M. (2026). De-anthropomorphizing “AI”: From wishful mnemonics to accurate nomenclature. First Monday.
Featured image: Same motif, different perspective as in this post