Mirjam Sophia Glessmer

When Communities of Practice and GenAI collide

I was very intrigued when I came across a chapter by Cambridge, Wenger-Trayner, et al. (2024) on “Theoretical and practical principles for generative AI in communities of practice and social learning”. They tell the story of how a contribution by ChatGPT — despite being prompted by a participant in a group conversation, and read out by them — got treated so differently from how a human contribution would have been treated. While the latter would have been at least politely considered, the former was instantly dismissed — a reaction that I can very much relate to.

I recently had a similar experience. My colleague and I were brainstorming, but we were engaging much more with ideas that we came up with ourselves than with the long lists that ChatGPT offered. Possibly because whatever we chose to bring to the conversation had already gone through an internal quality check within each of us, and therefore seemed worth considering by the other? Or because the ChatGPT lists were way too long to deal with? Anyway, I thought it was an interesting experience, so I was also curious what Cambridge et al. (2024) would make of their observation and whether it might help with interpreting mine.

They say that GenAI can never become “genuine participants in social learning”, because “social learning entails more than just a body of knowledge or set of best practices. It involves a mutual experience of meaningfulness”. An experience that GenAI obviously cannot share.

Meaning is negotiated through participation — engagement, connection with each other — and reification — turning meaning into things. While GenAI has access to enormous amounts of reified meaning, it cannot participate since it cannot care: “Active participation in social learning is the result of caring to make a difference. For something to be meaningful, people need to care about it”. Therefore, when it creates new products, they are also empty of meaning. (There are, however, many other uses of GenAI beyond the production of new meaning acknowledged in the chapter!) But since GenAI is trained to produce texts that appear plausible on a surface level, its output can also be confusing: it appears to carry more meaning than it actually does, and it imitates humans so well that it is sometimes hard to tell the two apart: “the mimicry of participation requires vigilance”. With other humans, we have trained all our lives to judge their contributions for truthfulness and meaning. With GenAI, we have not had the chance yet to train similar skills: “not only are we less skilled at assessing their input but, if we are not vigilant, we may also end up applying heuristics that generally work with humans but are a poor fit for AI”.

Then there are all the other problems with GenAI. They are black boxes, and the way they treat intellectual property is questionable at best. There are also many questions related to accountability: for example around their creators, their motivations, and to whom they are accountable; around procedural accountability of training, maintenance, and ethics; and around their behaviour in conversations with regard to adherence to typically accepted norms.

Based on these considerations, Cambridge et al. (2024) suggest a double loop of learning for working with GenAI.

In Loop 1, social learning participants establish shared assumptions and ground rules. They discuss shared mental models of what an LLM is and does, how it has been designed and trained, who it is meant to serve and who it might harm. In the next steps, participants agree on how to use an LLM and how to ensure that it contributes to healthy group dynamics and power balances in the group. Lastly, participants agree on transparency and on de-personifying the tool, before they move into Loop 2.

In Loop 2, social learning participants achieve critical distance during the iterative process of engaging with the AI by making sure they are using it collectively, carefully and critically. If anything suggests that there are problems with the way GenAI is being used, the group needs to return to Loop 1.

First off, this sounds very sensible. I am wondering what that would look like in practice, though, because of course GenAI is not the only thing that needs attention; so do group processes, innovation cycles, and all kinds of other stuff. So how explicitly would one work with this model over time? Maybe run through it explicitly once or twice, and then assign someone to keep an eye on it, similar to other roles like timekeeper or notetaker, and whenever they (or anyone else) notice something, bring it back to the forefront? And put it on the agenda to revisit a couple of meetings later? Maybe it needs to become an item on the regular check-in list, for when everybody also revisits ongoing and outstanding tasks, role distributions, etc.?


Cambridge, D., Wenger-Trayner, E., Hammer, P., Reid, P., & Wilson, L. (2024). Theoretical and practical principles for generative AI in communities of practice and social learning. In Framing futures in postdigital education: Critical concepts for data-driven practices (pp. 229-239). Cham: Springer Nature Switzerland.


Absolutely no connection between this blog post and the featured image or the images below, which show first the wave watching I did during the first half of Saturday’s dive session in Limhamn, and then the diving half.

Here, you see a ship’s wake and the wake of two freedivers swimming in the surface in front of Öresund bridge.

I thought that wave was so beautiful — I love the clear lines, relatively large amplitudes and long crests of the waves!

And I got the rare opportunity to take pictures of freedivers from above!

It does look a little bit funny, but look at the nice wave rings!

And here one of them went diving, so there is only the other one left in the surface. Just like that.

And both are back!

And now they are swimming and making some more waves themselves.

One last beautiful ship’s wake…

And then it was my turn! So fascinating how the Öresund can look so blue from the top and then immediately so green once you are below the surface. But it is actually quite interesting to dive there, lots to see!

Love all the different structures of kelp and algae and whatever it is that is growing here!

And always so beautiful to see light streaming through the water, the waves on the surface…

I also find reflections of algae in the surface very cool, and also the contrast between the structures.

And I love jellyfish! I really do, they are so pretty!

They are so elegant, I could watch them for hours!

And did I mention I like the stuff that is growing there?

Oh, and we met lots and lots of fish! Below you see the big one, but all those little specks that look like falling dust or something are tiny tiny fish!

More fascination with bio stuff…

Bio stuff and reflections, and in the upper third in the center you can see a tiny bit of red on the hood of my wetsuit (I think). Closer picture of me as featured image of this post, thanks, Wivi!

And more light and reflections in the surface.

And bio textures.

And a really long eel, can you spot him?

And a last picture of bio stuff and reflections in the surface. I just love being under water!
