Mirjam Sophia Glessmer

Currently reading Shanahan et al. (2023) on “Role play with large language models”

I really enjoyed reading about GenAI as role play, an analogy that Shanahan et al. (2023) suggest for explaining what GenAI does while avoiding the trap of thinking of GenAI agents as human-like.

They suggest imagining the GenAI agent as playing a role, where the agent tries to come as close to the user’s expectations of that role as possible. I have seen the advice to assign the GenAI a role in the prompt a lot (think “You are a math teacher, helping me to learn about topic x. Ask me questions that help me develop my understanding of y”), but even when we don’t do it explicitly, we assign the GenAI agent a role, for example that of a translator between Swedish and English when we ask “Translate this text to Swedish: …”.

But in contrast to a human role player, who might have thought out a character with their full background in detail in advance, the GenAI agent decides on the most plausible next move in the moment, out of an infinite number of different moves that would all lead to slightly different roles. And coming up with the most likely response (based on the training data!) in every situation also explains why it is possible to coax GenAI agents into doing things they aren’t technically supposed to do, like becoming threatening and toxic.
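To make the explicit role assignment concrete, here is a minimal sketch of what such a “you are a math teacher” prompt looks like in code, using the OpenAI Python client as one example. The model name and the prompt texts are just placeholders of my own, not anything from Shanahan et al. (2023):

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model would do
    messages=[
        # The system message assigns the role the agent will try to play
        {
            "role": "system",
            "content": (
                "You are a math teacher, helping me to learn about fractions. "
                "Ask me questions that help me develop my understanding."
            ),
        },
        # The user message is then answered in character
        {"role": "user", "content": "Why is 1/2 + 1/3 not 2/5?"},
    ],
)
print(response.choices[0].message.content)
```

Fittingly, the API even labels each message with a “role”: the agent improvises the most plausible continuation of the conversation these messages set up, rather than consulting a character worked out in advance.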

And GenAI agents are only as good as their training data: if they were trained on data that does not include developments after a certain point, the agent can of course not answer questions about newer events (similar to a role player playing a character whose knowledge ends at a certain point in time, unaware that time has passed since).

Agents can even seemingly act to preserve themselves, if that is the most likely response in the role they are playing. But Shanahan et al. (2023) warn against taking too much comfort in knowing that the GenAI isn’t actually a conscious being but just producing the most likely answers. They write that, despite that, “[a] dialogue agent that role-plays an instinct for survival has the potential to cause at least as much harm as a real human facing a severe threat”. And this gets worse if agents are not just chatbots (as in all the examples above), but if we give them access to social media, or calendars, or banking information…

I find this role-playing analogy helpful when people still seem to trust GenAI agents and assume benevolence. Yes, they might act in such a role, or they might not, depending on how they are prompted and what data they were trained on, and at least the latter we have neither insight into nor control over…


Shanahan, M., McDonell, K., & Reynolds, L. (2023). Role play with large language models. Nature, 623(7987), 493–498.


And some dipping pics from earlier this summer! Gloomy morning…

With the rain coming from the side, as you can see from the dry spots in the lee of the handrail thingies.

And almost oily-looking water!

And nice reflections of the clouds!

These days are beautiful, too!

Ok, one more! And now have a nice day! :)
