The other day I read something (that I cannot find again) along the lines of “GenAI creates art for people who hate art, music for people who hate music, reading for people who hate reading”, and I have been thinking about that a lot. I have explored what GenAI can and cannot do (for example for discussing workshop planning, but also to help with analysing qualitative data [don’t use it — we explain why in this article Rachel Forsyth and I just published]). I have never used it to create “art” or to write for me, because both graphical and written forms of expressing myself feel very personal and very important to me, and I cannot imagine delegating either, not even to professional artists or writers. Unless you can read my mind and do EXACTLY what I envision, stay away from my writing and art! That said, I pre-ordered Mark Carrigan’s new book, “Generative AI for Academics”, back last summer. As with his previous book, “Social Media for Academics”, it seems a bit risky to read an actual, printed book on such a quickly changing technology as GenAI, but I found that it takes a big enough perspective that at least the current landscape still seems well described.
The most important takeaway for me is to think very carefully before you start using GenAI. Carrigan writes “It occurred to me in the later stages of writing this book that I might have felt differently about the ethics of GenAI if I had thought deeply about these issues prior to using it” (p 61). And a warning in a different context, on using GenAI to help find and summarize literature: “If you think there is a risk that you might slip into such behaviour, avoid asking conversational agents to help you find research literature” (p 81). I do think that a habit, once formed, can still be broken when there are good reasons for it (see for example academic travel: “the extent to which academic travel had been normalised is increasingly experienced as a problem by many, ranging from the environmental impact of the conference circuit to the challenge of work/life balance when always on the move” (p 86), a habit many of us have gotten out of again). But GenAI does seem to have an addictive effect on many, and there are no long-term studies of those effects yet. So before we let ourselves get addicted, there are a lot of things we should consider!
There are also several dilemmas related to using GenAI on which we need to position ourselves.
Even though GenAI is going to get better and better, there will always be trade-offs.
We need to use GenAI more in order to learn to use it better and avoid harm within, for example, education, but that also means more conversations, more energy usage, more water usage, … I really enjoyed this summary by Jon Ippolito. Whether or not the actual numbers are correct, I think especially his first two take-aways are worth always keeping in mind:
1. “Lack of transparency by AI companies means usage calculations at this point are only estimates.” It is not in the companies’ interests to disclose exact numbers. As also pointed out in the Spiral of Silence, there are actors who benefit from misinformation, so we need to be very cautious even with the few numbers that are provided by companies.
2. “Water and energy impacts are extremely localized; eg the stress on Ireland’s water and grid is much higher than Norway’s due to the latter’s hydropower and cool climate.” And then of course hydropower might be a good option for energy generation in terms of carbon footprint, but it comes with a lot of negative consequences for biodiversity.
Maybe we need to consider a more distributed expertise in GenAI, where not everybody needs to be using it and becoming proficient? Anja Møgelvang et al. (2024) point out that engagement with GenAI does not just impact an individual’s job chances: since the people who are most critical also engage the least, this influences the career prospects of women in general, and their critical, more cautious voices are missing in society, especially in the places that are at the forefront of using GenAI. So we definitely need to make sure that there is diversity in who uses GenAI. But similarly to how not everybody needs to be able to diagnose diseases or build their own violins themselves, could we attempt to keep that diversity while reducing the overall number of users, by not all jumping on the bandwagon?
Tully et al. (2025) found that people with less GenAI literacy, i.e. less knowledge and understanding of how GenAI works, perceive it as magic and are more likely to adopt it, despite having higher concerns regarding its ethics and capabilities, and more fears. That is counter-intuitive not just to me, but also to the majority of people investigated in the six studies presented in the article. The authors discuss that it is therefore easiest to market GenAI to people who are more naive about how it works (but warn that this should not be exploited). They also state that the feeling of magic might decrease as people overall get a better understanding of GenAI, much like the magic disappears once a trick is revealed. So maybe demystifying and regulating are steps to consider (not just for environmental reasons, but also for mental health reasons).
Yep.
Carrigan suggests six “principles for experimentation” (p 38f)
Carrigan says that when Web 2.0 first came up, discussions of social media highlighted mainly the opportunities and ignored obviously problematic aspects, in a similar way to how we seem to be discussing GenAI right now. Of course we also need experiences on the basis of which to then refine our thinking. As he writes, “rules are quick, norms are slow” (p 163).
He also suggests approaching GenAI strategically and thinking about “finding practices which help realise opportunities while reducing exposure to the risks” (p 27). Rather than focusing on “keeping up”, Carrigan suggests we “focus on being clear about what you need to know and why”, and look at existing workflows that we take for granted to see where GenAI could contribute meaningfully.
More specifically, Carrigan suggests “questions worth asking when confronting a potential practice” (p 104)
Carrigan suggests not thinking about GenAI as a tool (i.e. a means to achieve an end, and an end that isn’t important enough for us to do personally and well, be it for time pressure, lack of interest, or any other reason) but rather as an interlocutor, a partner in discussion. As a discussion partner, GenAI “forces” us to articulate what we want in more and more nuance, to which the GenAI then responds better and better. To get high-quality output, we first need to create high-quality input, often consisting of several examples of the kind of output we want, or at least a detailed description of it. (One interesting fringe thought I had here: being forced to spell out what you already know, so that GenAI doesn’t tell you certain types of things, is potentially an interesting way of becoming aware of all the implicit assumptions we make about what students already know.) The point is to “find ways to think with GenAI rather than using it as a substitute for thought” (p 72).
He gives several suggestions for how to think, collaborate, communicate, and engage together with GenAI. Below I am only highlighting the aspects that I found meaningful to myself. There are many applications that I would not even consider because, as I described in the beginning, it matters to me that my words are my own words in my own voice, and similarly for artwork.
How do we think? Carrigan describes a process that resonates with me a lot: think about something deeply for a while, then go away and let the brain do the work in the background. Writing “fringe thoughts” down captures them and lets them develop into something else (this blog being my favourite example — I am sometimes amazed at what I wrote about a decade ago and had completely forgotten, but then suddenly build on again). Here he describes that working with GenAI can help keep the record and build on it later. Probably true.
To people who argue that conversations that further thinking should happen between actual humans, not a human and an LLM, Carrigan would probably reply that talking to people is not perfect either! People are not always available to have the conversation, and even when they are, they are often preoccupied with something else, so their input to the conversation might not be of the quality we had hoped for.
As for navigating literature with the help of GenAI, Carrigan reminds us that even Google Scholar uses AI in its algorithm to determine what we see! We just tend to forget that.
GenAI can also support human conversation — taking notes, distilling key points, giving feedback on the clarity of messaging, and much more.
As for collaborative research, it felt like Carrigan wrote our article before we did (but then we didn’t know, because our article was accepted and finalized before this book came out…): “[LLMs] reliance on statistical associations means the inferences they make about data will be unavoidably precarious, as likely to hallucinate about the data as with any other domain of activity. The black box character of these systems means the reason for these results will always be opaque, creating difficulties for reproducibility as well as creating the need for human oversight […]. Furthermore, there is a conventional bias to their inferences reflecting their reliance upon training data. […] there are reasonable grounds to assume the nature of their statistical inference will tend to squeeze out interpretations which don’t fit with past trends.” (p 96f, see also this post’s featured image which is the photo of the relevant pages with that text marked, which I sent to Rachel when reading the book)
But there are of course other ways to use GenAI to support the research process, including reviewing, challenging and prompting reflection.
I really cannot relate to the idea of using GenAI to communicate in my place, because I am so focussed on having my own voice, so for me the most important aspect here is the warning of the danger of a coming communications environment “in which there will be continual waves of GenAI spam and fraud, which automated defenses might temporarily catch up with but never overcome in a sustained way” (p 125). I believe that personal relationships and trust in authors will become a lot more important, to know that no GenAI has been used, or to be really clear about what it has been used for. In closing, Carrigan asks: what if I, the reader, discovered now, at the end of the book, that it had been written entirely by GenAI? He suggests questions to ponder: “Would you feel cheated? Would you feel it had devalued what you had read? How would that change your inclination to talk about or cite the text?” (p 160)
That said, maybe there are forms of communication where it is more important than in others to make sure that everything is in the author’s own voice. For example, translating work between formats, say a book into a lecture or blog post, could easily be done by GenAI. GenAI can take over the whole workflow or just parts of it, and you choose which parts! Carrigan writes about using GenAI for a first draft of summaries of his own presentations for slides when it is work that he has talked about a lot before, but not for presentations of things he hasn’t presented on before, because “this feels like it would be substituting the system’s voice for my own”.
Carrigan refers back to the four questions for social media use that he has previously recommended:
As mentioned above, GenAI is relatively good at translating materials between different formats, and asking for a first draft of our own materials in a different format (while we prescribe the who, what, why, and how) is maybe not such a bad idea. As pointed out above, translation can take an enormous amount of time, and we are maybe just not aware of it because that is how it has always been.
This is the end of my notes on the book, and I just want to come back to all the reasons why we should very seriously consider NOT using GenAI, in addition to the obvious environmental impact and the difficulties regarding non-reproducibility, hallucinations, etc.: the working conditions of the people who do the training, biases trained into the model, the lack of transparency about the whole black box itself, needing to rely on the benevolence of the companies doing the training, and trusting that the models have not been trained with some specific agenda (and looking at where the algorithms in some social media channels are going these days, is benevolence of companies ever a valid assumption?).
I really liked this quote and want to end with it: “The future is already here, it’s just not very evenly distributed” (p 164, citing William Gibson).
Carrigan, M. (2024). Generative AI for Academics. SAGE.
Møgelvang, A., Bjelland, C., Grassini, S., & Ludvigsen, K. (2024). Gender Differences in the Use of Generative Artificial Intelligence Chatbots in Higher Education: Characteristics and Consequences. Education Sciences, 14(12), 1363. https://doi.org/10.3390/educsci14121363
Tully, S., Longoni, C., & Appel, G. (2025). EXPRESS: Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity. Journal of Marketing, 00222429251314491.