The other day I wrote about a paper on “sycophantic AI” and its implications for human interactions, and I am reminded of it daily when I hear kids on the bus mention how they talk to “chattis” (which seems to be a common nickname for ChatGPT around here) about all kinds of topics (which is scary, especially having read that “sycophantic AI” paper). But even in professional contexts, we (me definitely included) talk about AI as if it were some kind of person sitting in a box. Inie et al. (2026) investigate the kind of language people use to write about AI in scientific texts, news articles, and company blog posts.
They create a taxonomy with eight categories of anthropomorphization:
- Cognizer (when the system is portrayed as having cognition, for example through descriptions like “intelligent”, or as doing activities like learning or believing)
- Products of cognition (for example skills, capabilities, or being biased; these can only exist if there was cognition first [but reflecting bias is different from being biased!])
- Emotion (e.g. struggling, caring)
- Communication (e.g. answering, following instructions, explaining, suggesting)
- Agent (helping, facilitating, leveraging)
- Human role analogy (e.g. tutor, assistant)
- Names and pronouns (in Sweden they often talk about talking with “chattis”! And the list of pronouns the authors present here doesn’t include “it”, but maybe it should?)
- Biological metaphors (e.g. seeing, consuming information)
Inie et al. (2026) report that anthropomorphization is described in many studies as a deliberate strategy to increase user trust in “probabilistic automation systems”, and that “A recent article found that anthropomorphism in the sense of how human-like an ‘AI-agent’ appears is actually a greater predictor of acceptance and adoption of the technology than trust (Gefen et al., 2025)” (a reference I have not followed up on yet). Inie et al. (2026) describe anthropomorphization as risky in four different ways:
- risk of misplaced trust and over-reliance (i.e. using AI for tasks it isn’t designed for, like ethical or health advice)
- risk of spillover effects of overestimation (e.g. assuming that because AI produces good texts, it must also give good health advice)
- risks related to accountability (whose fault is it if someone follows bad advice from an AI?)
- risk of disproportionate impact on different populations (are elderly people more easily tricked by AI voices designed to make them believe their grandkids need money? Are angsty teenagers seeking advice from AI rather than from humans?)
So what can we do? Inie et al. (2026) provide some suggestions for de-anthropomorphized language, organized in the categories they presented above:
- Cognizer & products of cognition: Make it about an algorithm performing operations and people doing the thinking, e.g. “artificial intelligence -> probabilistic automation“, “image recognition -> image labeling“, “speech recognition -> automatic transcription“, “the model shows bias -> the model reflects bias“, “model mistakes -> model errors“, “chatbots are good at … -> chatbots are good for …“
- Emotion: Just don’t do it! :-)
- Communication: e.g. “prompt -> text input“, “answer -> output“
- Agent: “Turns of phrase that locate agency with a machine often serve to obfuscate the interests and goals of people. We suggest revising to locate agency with people or choosing less agentive verbs“, e.g. “ChatGPT assisted students -> the students used ChatGPT“
- Human role analogy: Algorithms are tools that people use; calling them by human role analogies makes overclaims “that describe what a developer might wish they could develop — for those who want to replace people in these roles”
- Names and pronouns: e.g., “who’s right? -> is the machine output correct?“
- Biological metaphors, e.g., “the model consumes data -> data is used in setting model weights“
Is it really worth the effort to try and change how we (and everybody else) talk about AI, especially when there are established terms (like “AI” itself)? Inie et al. (2026) argue that yes, it is, even in scientific articles, since “a better term will make the actual claims of the paper more realistic and clearer”. And I am (as so often) thinking about learning and teaching and academic development — I think changing language is super relevant here, too, to mitigate the risks Inie et al. (2026) describe (see above). And after all, teaching is about telling people what they don’t know to ask for yet (that’s my interpretation of Biesta)!
That said, I will also start talking about “AI products” instead of “AI tools”, to make the point that these are products we are being sold so that a company can make a profit, not tools we are given for the good of humanity (or the world in general).
Inie, N., Zukerman, P., & Bender, E. M. (2026). De-anthropomorphizing “AI”: From wishful mnemonics to accurate nomenclature. First Monday.
Featured image: same motif as in this post, seen from a different perspective