The perfect robot?

One way to explain the current state of AI is to see it as two (or more) opposing views of the world. Strong AI is the goal of creating consciousness in a machine, as measured by the Turing Test: if you can’t tell whether the answers to your questions come from a human or a machine, then you have created consciousness in the machine. In contrast, narrow AI concentrates on using machines to do very specific tasks, without any mention of the dreaded C-word.

Neither completely replaces the other, but for much of the last sixty or so years the two approaches have been in opposition – so much so that one side flatly refused to have anything to do with the other, as Michael Wooldridge reveals, perhaps unwittingly, in The Road to Conscious Machines (2020).

There are two or three points in the book where this opposition is revealed. Wooldridge notes, for example, that the adherents of narrow AI barely mention strong AI at all:

although strong AI is an important and fascinating part of the AI story, it is largely irrelevant to contemporary AI research. Go to a contemporary AI conference, and you will hear almost nothing about it [p27]

I suspect that Wooldridge himself was trained in the strong AI school. That the strong and narrow AI experts didn’t communicate is clear from his account of Google’s acquisition of DeepMind in 2014:

I can recall seeing stories in the press about the acquisition, and starting in surprise when I saw that DeepMind were an AI company. It was obvious at the time of the acquisition that AI was big news again, but the idea that there was an AI company in the UK which I had never heard of that was worth (apparently) £400 million was utterly bewildering. [p96]

What I find surprising is that a senior academic in the strong AI tradition knew nothing about this narrow AI company or its technology.

Most telling is an anecdote from Wooldridge about trying to organize speakers for the European Conference on Artificial Intelligence (ECAI) in 2010:

My job as chair included assembling the programme committee, and I was keen to include a good representation from the machine learning community. But something unexpected happened: it seemed that every machine learning researcher I invited said No … it seemed I couldn’t get anyone from machine learning to sign up. Was it me, I wondered? Or ECAI? Or what? I sought advice from colleagues who had organized the event previously, and others who had experience of other AI events. They reported similar experiences. The machine learning community, it seemed, just wasn’t very interested in the events that I thought of as, well, ‘mainstream AI’.

Yet roughly the final third of Wooldridge’s book is concerned with this ‘mainstream AI’. In a book of around 160 pages of text, the final 60 pages are all about consciousness, the characteristic concern of strong AI; and I would guess that most current exponents of narrow AI (including companies such as UNSILO) have little interest in any of this content. The writings of futurologists like Ray Kurzweil, Asimov’s three laws of robotics, Robin Dunbar’s work on the size of the human brain, and the intricacies of the trolley problem are not topics of daily interest in the typical AI company of today. While the media tend to focus on the wild and the unreachable, the typical AI company focuses on identifying solutions to practical problems – and delivering them.

There is an example in the book itself. The humble automatic vacuum cleaner, which hoovers your floor, is now part of everyday life. In contrast, a general-purpose robot is still nowhere in sight. As the author describes it:

I gained a personal insight into this problem as a young academic in 1994, when I attended the American Association for AI Conference in Seattle. I vividly recall my astonishment at the apparent incompetence of robots that had been entered in a ‘clean up the office’ competition … It seemed that even the best robots in the competition were comically slow and barely able to accomplish even an approximation of the task at hand. Of course, the problem was not with the robots, but with my naïve understanding of just how difficult such problems actually are.

Perhaps the real problem was trying to build a “strong AI” solution when a “narrow AI” with more modest goals would have worked fine for this limited, if real-world, problem. Perhaps there is no need for a great schism: the automatic vacuum cleaner is the perfect robot – even though it is not a “conscious machine” (in Wooldridge’s terms) at all.