According to Hicks et al. (2024), “ChatGPT is bullshit”. And they make good arguments for it, too!

I have written about playing with GAI for certain purposes, most recently to “discuss” the development of a workshop when I had no person to discuss it with. But this article has given me new language (and not just the word “bullshit”*; keep reading) to talk about a highly problematic aspect of GAI.

Classically, people talk about GAI as having hallucinations when it just makes up stuff out of nowhere. But Hicks et al. (2024) argue that what GAIs are doing is actually “bullshitting”, in a technical sense of the term. “Soft bullshitting” is making statements without concern for whether they are true or not, which certainly describes GAIs: they are designed to convincingly imitate human speech patterns, not to provide accurate information. “Hard bullshitting” involves an intention to deceive. In that sense, GAIs are “bullshitting machines”, and using their output to deceive a reader into thinking that a text was written by a thinking human (while not caring about whether the text is factually correct) is hard bullshitting.

So why talk about bullshitting rather than hallucinations or confabulations? Hallucinations are about a non-standard perception of the world, but GAIs cannot “perceive”, hence also not “mis-perceive”. Anthropomorphising a GAI also makes it easier to shift responsibility onto the tool rather than onto the user who uses it irresponsibly. And the GAI making up stuff isn’t a bug in the process, it IS the process, even if the output ends up being correct. So what about confabulations? This again makes the GAI appear human-like, and it also makes it sound as if the GAI were generally attempting to convey accurate information and only occasionally glitched and made stuff up, when in fact it is making stuff up the whole time. The authors argue that using the language of “bullshitting” rather than hallucinating or confabulating is also important to counteract the overblown hype around GAIs.

In a nutshell: GAIs are ALWAYS bullshitting, whether they happen to get an output right or not, because they cannot care about whether their outputs are accurate, yet they are designed to still produce text that looks as convincing as possible.

Next time I have to teach about GAIs, I will ask people to read this article before we even start discussing anything!


*I love how my text editor marks “bullshit” as “probably offensive language” and says it does not have any suggestions for a less offensive synonym!

Featured image: A carousel horse in the sky, or at least that’s what my mom and I saw independently of each other. Hallucinations? Or bullshit?


Hicks, M.T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf Technol 26, 38 (2024). https://doi.org/10.1007/s10676-024-09775-5

4 thoughts on “According to Hicks et al. (2024), ‘ChatGPT is bullshit’. And they make good arguments for it, too!”

  1. Ian McCarthy

    You might be interested in this. We argue it is not bullshit but “botshit”.

    As per Table 2 in the paper, and slides 12 and 13 in the deck (links below), botshit is generated when users uncritically use chatbot content contaminated with hallucinations.

    Here is the publisher’s version of the article that coined the term ‘botshit’:
    https://doi.org/10.1016/j.bushor.2024.03.001

    Here is a pre-print free version: https://dx.doi.org/10.2139/ssrn.4678265

    And here is an accompanying slide deck: https://tinyurl.com/4jf7jcm6

  2. Pingback: Currently reading about "botshit", and how to avoid it (Hannigan, McCarthy, Spicer; 2024) - Adventures in Oceanography and Teaching
