Mirjam Sophia Glessmer

If we compare GenAI to *humans*, that begs the question “which humans?”

Such an interesting question! Here are three pieces of writing that are currently on my mind.

The first one I read already a while back: Marcus Olang’s blog post “I’m Kenyan. I Don’t Write Like ChatGPT. ChatGPT Writes Like Me” (which you should totally go read for yourself!). The point that they make, “ChatGPT writes like me”, is so powerful. Of course, the way GenAI writes did not just magically appear out of nowhere: it is based on training data (pretty much all electronic texts that are available somewhere on the internet, copyright or not) and on the people training the models. Both the electronic texts and the people training the models also use language shaped by years of training, being drilled on pretty much one version of what is correct and what is not (although which version that is depends on where you learn English). In many cases, for example in Kenya, the version of desirable English comes with a long history of colonization and the dominance of British English even after independence, so being told that your best writing after all that history is not human hits differently. Olang writes: “It is a new frontier of the same old struggle: The struggle to be seen, to be understood, to be granted the same presumption of humanity that is afforded so easily to others. My writing is not a product of a machine. It is a product of my history. It is the echo of a colonial legacy, the result of a rigorous education, and a testament to the effort required to master the official language of my own country.”

Another perspective on “which human?” comes from the preprint of Atari et al.’s article. They discuss that the capabilities of GenAI are often measured against what “a human” could do, but they raise the valid question: which human? To investigate that question, they look at what kind of psychology GenAI represents and compare it to data from the World Values Survey (using methods that I did not understand). They find that the outputs most resemble values commonly associated with Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies, which is not surprising looking at who produced most of the texts that went into training the model. They conclude: “Our findings suggest that ‘WEIRD in, WEIRD out’ might be the answer, an important psycho-technological phenomenon whose risks, harms, and consequences remain largely unknown.”

And this morning on the bus, I came across the article by Bhagat et al. (2025), in which they investigate how different Indian cultures are represented in GenAI-generated stories. Using focus groups and surveys of people with lived experiences in those cultures, they find that 88% of the generated stories contain misrepresentations, especially those about “minorized communities within minorized communities”, i.e. lower-resourced languages in rural areas in India. For example, a region might be described as having “agricultural prosperity” when, in fact, it is often subject to droughts. Or clothing would be described that is not typical for the context, or the names for aunt and uncle switched. Or stereotypes might be overused. Or there are just logical errors, like describing how a conductor hands someone a paper ticket on a school bus, when school buses don’t use tickets. What is interesting, though, is that when they tested GenAI directly on the concepts it had gotten wrong before, in stand-alone questions, it could answer correctly. They discuss that there are two failure modes: non-representation (when no specific cultural elements are included) and misrepresentation (when information is hallucinated). They conclude: “As AI systems become integrated into people’s daily lives, future research must investigate the impacts of their design and behavior on the experiences and the harms faced by diverse users.”

I have no idea what the implication of any of this should be for how we work with GenAI, but I feel strongly that we need to share these studies and stories, and keep asking ourselves “which humans?” And maybe even more so: which humans are not seen, are not represented or are misrepresented, are wrongly accused of things, and how can we make it right for them?


Atari, M., Xue, M., Park, P., Blasi, D., & Henrich, J. (2023). Which humans? PsyArXiv preprint, version 1. https://osf.io/preprints/psyarxiv/5b26t_v1, https://doi.org/10.31234/osf.io/5b26t

Bhagat, K., Bhatt, S., Velagapudi, A., Vashistha, A., Dave, S., & Pruthi, D. (2025). TALES: A Taxonomy and Analysis of Cultural Representations in LLM-generated Stories. arXiv preprint arXiv:2511.21322. https://arxiv.org/pdf/2511.21322


The featured image was a happy accident: I wanted to take a picture of the opera in Oslo, but additionally captured the reflections of myself and the library in which I am standing. So many layers. So many reflections. So much to think about!
