
I’m a great admirer of the Very Short Introductions: clear, concise outlines of a topic by an expert in the field. Well, this book is certainly very short, but how good is it as an introduction?  Readers hoping for a good exposition of present-day AI may be disappointed by the rather cryptic style.

The book has no introduction, but in the Further Reading section the first item listed is Margaret Boden’s Mind as Machine: A History of Cognitive Science, published in 2006 (Boden was born in 1936). It is probably true that more papers have been written about AI since 2006 than in all the years before it. In other words, the subject has exploded in the last ten years or so, and it is unlikely that a 2006 volume could cover all of this; it is not impossible that Boden has tracked current developments in AI just as thoroughly since, but it seems unlikely. Nonetheless, Boden writes: “With the exception of Deep Learning and the Singularity, every topic mentioned in this VSI is discussed at greater length in Mind as Machine” (see the References section at the end of the book, where for each chapter the relevant section of Mind as Machine is cited). You could say that Deep Learning is the biggest change in AI in the last twenty years; indeed, it permeates most of the present-day discussion of AI, so it certainly warrants more coverage than it is given here, in chapter 4.

Boden has a very brisk and conversational style, which I found part of the problem; not for her the need to define terms. She quite rightly begins with a definition of AI itself:

Artificial Intelligence (AI) seeks to make computers do the sorts of things that minds can do. [p1]

While that sentence may be correct for some users of the term, it by no means covers all its current uses, and in any case it is not a definition. It simply raises the question: what are “the sorts of things that minds can do”? We can imagine plenty of things. Have dreams. Tell stories. Imagine War and Peace and the Sistine Chapel ceiling. Predict the future. Learn philosophy. Does AI do all these things? The phrase “seeks to” makes the definition useless. Imagine defining the human brain in the same way:

The human brain seeks to understand the meaning of life.

It might seek to, but it hasn’t got anywhere near it yet. And, I have to say, many human brains don’t even attempt to understand the meaning of life.

Boden’s attitude to definitions elsewhere is similarly unhelpful. Her habit is frequently to mention a thing and give a forward chapter reference in place of an explanation, leaving us, the readers, to hunt for it ourselves. This kind of forward reference is maddening. For example:

The Logic Theory Machine and General Problem Solver (GPS) were early examples of GOFAI. They are now ‘old-fashioned’, to be sure. But they were also ‘good’, for they pioneered the use of heuristics and planning – both of which are hugely important in AI today (see Chapter 2). [p10]

What are heuristics and planning? They are covered in the next chapter, but does it help to introduce them in chapter one without explanation? Page 11 introduces the terms “local” and “distributed”. I can guess what these might mean, but the general reader who has never encountered them will be none the wiser from this mention.

What are we to make of the following sentence (still in chapter one, What is artificial intelligence?)?

only four years after their first ground-breaking paper, they [McCulloch and Pitts] published another one arguing that thermodynamics is closer than logic to the functioning of the brain. Logic gave way to statistics, single units to collectivities, and deterministic purity to probabilistic noise. [p11]

At this point, my brain simply starts throbbing and I give up trying to follow what is being discussed. This is one of those books where attempting to understand every word when you first read it is a waste of time.

This book covers a lot of ground. Looking at the index (which is not complete), there are around 378 topics indexed, for 150 pages of text – more than two topics per page. Compare this with the Oxford Very Short Introduction to History, which has around 180 topics – fewer than half as many. I don’t think history is a smaller topic than artificial intelligence. But it’s not just the number of terms included that makes the book challenging; it’s the casual way it introduces new concepts. For example, on page 17 the phrase “deep learning” is introduced as something that would usually be thought of in contrast to symbolic logic. But deep learning is only properly introduced and defined on page 42.

On page 15, the big split in AI between symbolists and connectionists is introduced. But the first mention of the term “connectionist” in the book is alongside “cybernetics”:

the feeling on the cybernetic/connectionist side began as a mixture of professional jealousy and righteous indignation. [p15]

So are the two terms “cybernetic” and “connectionist” the same thing? From this point on, the word “connectionism” is used without definition, and the index entry for “connectionism” is simply a cross-reference to “neural networks”. It is only later in the book, and from reading other titles, that the well-known division of the two camps into “connectionist” and “symbolist” becomes clear, and it would have been helpful to use that labelling consistently here.

In other words, the book suffers from a lack of clear logical organisation. Terms are not defined properly, they are used many pages before they are explained, the index doesn’t indicate where definitions are to be found – it’s a nightmare. No student could learn from this kind of presentation; in fact, the only way I was able to complete this book successfully was to look up a definition somewhere else for every new term I encountered before proceeding further. Imagine a robot text analyst trying to read this book. It would throw up its silvery hands in despair at the lack of proper organisation. Now, if a machine had been in charge, things would have been very different …