Generative AI as The Terminator, wreaking havoc?

Metaphors certainly help us make sense of things. Without going deeply into Lakoff territory (he claims there are “good” and “bad” metaphors, for very idiosyncratic reasons), I have found some of the metaphors commentators use when talking about generative AI very helpful. Here are some examples:

  • Back in 2018, Benedict Evans likened machine learning to “infinite interns, or, perhaps, infinite ten-year-olds”. That is perhaps being unfair to interns, but I appreciate what he meant. Then, in 2023, he compared ChatGPT to “an intern that can write the first draft, or a hundred first drafts, but you’ll have to check it”. I like this image, especially because interns so often offer potentially useful help, but not much more than that.
  • Julie Weed of the University of Sussex compared generative AI to the man in the corner of the pub: always there, always with an opinion, but not necessarily a correct opinion, and not necessarily one you would trust.
  • Jake Leaver of the University of Glasgow pointed to the wonderful phrase in the title of a paper by Emily Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”. The paper describes a language model as “a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.”

There is even, perhaps unsurprisingly, an article (by Jason Lodge) on how metaphors are used to explain AI. He points out that it is inadequate to describe generative AI as a calculator (and, similarly, several commentators compared the launch of generative AI to the arrival of early home computers), but that it cannot, and should not, be compared with a human either. At this point, most commentators (after the almost obligatory reference to The Terminator) begin to abandon metaphors and go straight to specific capabilities of, or contrasts between, generative AI and humans, for example:

  • Generative AI lacks agency
  • Humans have emergent properties (but so does generative AI)

But hang on! Above, we compared generative AI to an intern! Well, perhaps the implication is that interns are kind of sub-standard humans. This metaphor is, I think, a bit hard on interns. At least interns mean well, even if their output isn’t always very useful, while generative AI is quite amoral. I can’t help feeling that of all the categories of employment under threat from AI in the coming years, interns are the group most likely to suffer.