
Erik J Larson has written a passionate attack on the current fashion for AI. For him, the myth of AI is the assumption that the AI-based utilities we have developed so far can be extended into AGI (artificial general intelligence). His arguments are quite compelling, although, as you will see below, I don’t think he follows his own recommendations.

The argument is as follows. The book’s subtitle is “Why computers can’t think the way we do”, and Larson explains the difference between current AI tools and the workings of the human brain. There are three types of reasoning:

  1. Deduction (for example, “All men are mortal – Socrates is a man – therefore Socrates is mortal”)
  2. Induction (the sun rose yesterday, and every other day in my experience, and so probably the sun will rise tomorrow)
  3. Abduction.

I didn’t know what abduction was (it is only explained in chapter 12, long after the term is used), but Wikipedia told me:

Abduction is a form of logical inference formulated and advanced by Charles Sanders Peirce in the last third of the 19th century. It starts with an observation or set of observations and then seeks the simplest and most likely conclusion from the observations.

This looks to me very close to induction, but it certainly describes very well Larson’s entertaining account of Turing at Bletchley Park during World War Two using a mixture of computing power (which Larson rather confusingly terms “ingenuity”) and intuition (that is, the human brain) to decode the German Enigma codes. For Larson, intuition is the key ingredient that computers cannot provide. The Enigma codes were cracked because the humans were able to provide intuition and common sense to interpret the scraps of evidence they discovered from chance finds.

Present-day AI

A typical AI-based algorithm in use today, for example the well-known ability to differentiate pictures of cats from pictures of dogs, is a non-transferable skill. The machine learns to do it by being trained, but however good the machine becomes at distinguishing cats from dogs, it has learned nothing about, say, separating birds from fish. Nor can it use any real-world knowledge, for it has none. So I agree that present-day AI is unlikely to lead to AGI. As Larson states, “there is no algorithm for general intelligence”.
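The point can be made concrete with a toy sketch. This is not Larson’s example or any real system’s code, just a minimal nearest-centroid classifier with invented feature names and numbers: once trained on “cat” versus “dog”, it can only ever answer “cat” or “dog”, even when shown something that is neither.

```python
# Hypothetical toy illustration: a classifier trained on two classes has no
# way to say "neither" and no real-world knowledge beyond its training data.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(examples):
    """examples maps label -> list of feature vectors; returns label -> centroid."""
    return {label: centroid(vecs) for label, vecs in examples.items()}

def classify(model, x):
    """Return the trained label whose centroid is nearest to x (Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5
    return min(model, key=lambda label: dist(model[label]))

# Invented two-feature encoding: (ear pointiness, snout length)
examples = {
    "cat": [[0.9, 0.2], [0.8, 0.3]],
    "dog": [[0.4, 0.8], [0.3, 0.9]],
}
model = train(examples)

print(classify(model, [0.85, 0.25]))  # a cat-like input -> "cat"
# A bird has neither pointy ears nor a snout, but the model is still
# forced to pick one of its two trained labels:
print(classify(model, [0.0, 0.0]))
```

However accurate such a model becomes at its one task, nothing in it transfers to birds versus fish; that would require new labelled data and a fresh round of training.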

But is that so much of a drawback? When we think of the myriad ways in which a machine can be trained, for example to identify handwritten letters and numbers, with greater accuracy than a human, is this not a great achievement? Just as word processing removed the need for the typing pool, the ability of machines to decipher characters has removed a whole swathe of low-grade labour.

About Erik Larson

What I found intriguing about the book was learning more about the author. According to the very gushing Wikipedia article, which reads like a press release, Erik Larson has a PhD in philosophy and computing. He then “co-founded Influence Networks after developing an algorithm to produce web-based rankings of colleges and universities with funding from DARPA.”

According to the Influence Networks website, this algorithm “computes ‘influence’ and measures the relevance of generated content to a specific topic and the importance of people to the topic … The algorithm is the foundation for the InfluenceRanking Engine”, and Larson is listed as a member of the Academic Advisory Board of Academic Influence, a company that “builds objective, influence-based rankings to advance your education” (and presumably sells these rankings to you). As the name suggests, Academic Influence provides lists of influential scholars, best schools by subject, best schools by state, and many other lists: “Our team of academics and data scientists builds objective, influence-based rankings to advance your education.” I haven’t looked at the criteria used to calculate these lists, but they appear to be based on citations for the individuals associated with the various schools or courses listed. And there seems to be no limit to what they can calculate: there is even a list of the most influential people for the years 4000 BC to 2022 (a very American idea). Top of the list is Aristotle, with Jesus in third place and Marx fourth; Muhammad comes only eighth.

Perpetuating the Myth of AI

I agree with Larson that if there is a myth of AI, it is over-extending what AI can do. The myth is perpetuated by the idea that you can rank every person, every school, every course, and produce such seemingly certain results. Worse, the sloppy tools used to build the profiles produce obvious errors, so obvious they can only have come from running an algorithm outside the core domain for which it was designed. Why is Larson attaching his name as an academic advisor to a site that appears to prove the very point of his book, that people are making inflated claims for AI? Here are a couple of examples.

It appears that the entry for each scholar or thinker on the Academic Influence site is generated automatically and comprises a number of subheads. For example, under the heading “Why is this person influential?” you are shown the person’s Wikipedia entry. Then you can see the number of citations of their works (at least a well-established metric for scholarly authors, even if untested for writers who lived over a hundred years ago) and a list of published papers. This format doesn’t really work for a creative writer like Shakespeare, who, as far as I know, never published any scholarly papers. Nonetheless, the Shakespeare page confidently (and ungrammatically) lists his “published papers” (which appear to be his sonnets and plays). I see that Sonnet 48 is ranked above The Tempest, and The Klingon Hamlet appears, but there is no entry for Hamlet. Did anyone look at this list?

I looked at the Karl Marx profile, but unfortunately they seem to have the wrong Marx: the papers confidently listed as “Karl Marx’s Published Works” are all about magnetic resonance imaging and appear to be by someone else. I don’t create the algorithms, but I have been working in AI for several years, and all too frequently AI developers, in their enthusiasm for providing a machine-based solution, over-extend the tool and fail to check that the results make any sense.

Of course, I might be wrong – Marx may have written these papers alongside his political works. It’s just my intuition.