Fritz Lang’s Metropolis (1927): the robot and its creator

I find histories of artificial intelligence very revealing. Computing as an academic discipline is relatively recent, but AI in particular has always been a cross-over subject: something that has been studied by academics for over 75 years, since Turing in 1945, but which at the same time attracts a general interest, much of it rather far-fetched and influenced by science fiction. As Wooldridge points out, much of the general public’s ideas about AI are based on the concept of a thinking robot such as the one created in Fritz Lang’s Metropolis (1927). But academia and the general interest are not entirely distinct. The public’s widespread and ongoing fascination with the potential of AI impacts, however subconsciously, on the academics who are supposed to be more pragmatic and realistic in their expectations. General histories of AI, of which Wooldridge’s The Road to Conscious Machines (2019) is the latest example, reveal the wild fantasies of many of AI’s most respected practitioners – fantasies which, on reflection, could very well have emerged from a sci-fi movie.

Even the authors of general introductions themselves do not seem to be immune from starry-eyed optimism about some of the more madcap areas of AI. Just as Pedro Domingos in The Master Algorithm (2015) (which I reviewed here) dreams about the creation of one algorithm that would solve all problems, the otherwise level-headed Mr Wooldridge has a fondness for “general AI”, or “Artificial General Intelligence” (AGI).

AGI roughly equates to having a computer with the full range of intellectual capabilities that a person has – this would include the ability to converse in natural language (cf. the Turing test), solve problems, reason, perceive its environment and so on, at or above the level of a typical person.

Wooldridge immediately points out that nobody at AI conferences these days talks about general AI; they focus exclusively on much smaller goals (“narrow AI”), where most of AI’s achievements of the last few years have been concentrated. Surely this suggests to the author (it certainly suggests to me) that the attempt to create a general AI is impossibly ambitious?

Wooldridge seems to prove this point in several of his examples. His book is well-written, generally intelligible to non-experts (of which I am one) and enjoyable. His judgement seems very sure; in one of the best passages of the book he describes a meeting between Doug Lenat, inventor of the Cyc (pronounced Syke, not sick) Project, and Vaughan Pratt, an academic from Stanford. Lenat’s version of AI has been dubbed knowledge-based AI: it involved capturing all the knowledge of the world in a series of statements, so that the system could then use logical reasoning to infer solutions to problems. Those statements were to include, in Lenat’s words, such things as “animals live for a single solid interval of time, nothing can be in two places at once, animals don’t like pain”. But after Lenat had laboriously compiled over half a million statements in ten years, Pratt was able to show the inadequacy of the Cyc knowledge base with a few simple questions. The system could not establish that bread is not a drink, or that the sky is blue.
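
To make the idea concrete, here is a minimal sketch of what knowledge-based AI involves – my own toy in Python, nothing like Cyc’s actual representation language, which is far richer: a store of hand-written statements, a simple inference rule, and the kind of gap that Pratt’s questions exposed.

```python
# A toy knowledge-based system (NOT Cyc itself): facts are stored as
# (subject, relation, object) triples, and a forward-chaining loop derives
# new facts by applying the transitivity of "is_a" until nothing new appears.

facts = {
    ("animal", "dislikes", "pain"),
    ("bread", "is_a", "food"),
    ("water", "is_a", "drink"),
    ("drink", "is_a", "liquid"),
}

def forward_chain(facts):
    """Close the fact set under the rule: X is_a Y and Y is_a Z => X is_a Z."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(derived):
            for (c, r2, d) in list(derived):
                if r1 == r2 == "is_a" and b == c and (a, "is_a", d) not in derived:
                    derived.add((a, "is_a", d))
                    changed = True
    return derived

kb = forward_chain(facts)
print(("water", "is_a", "liquid") in kb)  # True  -- inferred by chaining
print(("bread", "is_a", "drink") in kb)   # False -- but note: the absence of
# a statement is not the same as knowing bread is NOT a drink; the system
# simply has nothing to say either way.
```

The failure mode is the point of the toy: in a store of hand-written statements, anything the engineers never thought to write down is invisible to inference – which is exactly the kind of gap a few simple questions can expose.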

To us, it seems clear that the attempt to capture all the world’s knowledge was Faustus-like: foolish and misguided, quite apart from resting on the fallacy that all knowledge could be encoded as a series of statements that nobody would dispute. Yet instead of condemning the Cyc project outright, Wooldridge goes out of his way to praise the initiative:

What can we learn from Cyc about the road to general artificial intelligence? If we ignore the inflated expectations, then Cyc stands up as a technically sophisticated exercise in large-scale knowledge engineering. It didn’t deliver General AI, but it taught us a lot about the development and organization of large knowledge-based systems. And to be strictly accurate, the Cyc hypothesis – that General AI is essentially a problem of knowledge, which can be solved via a suitable knowledge-based system – has been neither proved nor disproved yet. The fact that Cyc didn’t deliver General AI doesn’t demonstrate that the hypothesis was false, merely that this particular approach didn’t work.

Perhaps there is something in general AI that appeals to the mind of the IT developer. After all, what is a programming language but an attempt to convert natural language into a series of unambiguous statements that a machine can follow? On one view, writing software – computer code – is an attempt to resolve the world’s maddening ambiguity into certainty. That so many initiatives to capture the world’s knowledge have foundered is seen not as the labour of Sisyphus, doomed to repeat his work in perpetuity without ever completing it, but simply as a succession of attempts that have not (yet) succeeded. It seems a very optimistic view.