Marcus du Sautoy

There has always been an outer reach of computing, often where it drifts into science fiction. Clearly, this occupies many people’s imaginations, although perhaps less so the minds of computing professionals. Surely, on the principle that any publicity is good publicity, this would be good for the adoption of AI?

Unfortunately, these rather wild ideas, while fascinating as blue-sky speculation, are of little value in academic publishing, except perhaps in identifying some limits that might be useful in establishing what AI can and cannot do currently. Perhaps they also reveal how humans want to be excited by a particular flavour of far-reaching technology. I am constantly surprised by how easily people are impressed by what they take to be highly sophisticated AI. The voice assistants Alexa and Cortana provide brief entertainment when asked to tell a joke, or to execute a simple instruction, but beyond that their ability to engage in any kind of sustained dialogue is almost zero. This is because they are very primitive systems. For example, Alexa tells you a joke when you ask for one, and it only tells you a joke if it recognises your request. This is reasonably simple for Alexa to do, because there are only a few ways in English to ask for a joke: “Please tell me a joke”, “I’d like to hear a joke”, and so on. It would not be difficult to defeat these systems by using a less common word, such as “witticism” or “wordplay”, instead of “joke”.
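The mechanism is easy to caricature in a few lines of code. The sketch below is a hypothetical illustration of this kind of fixed-phrase matching, not Alexa’s actual implementation; the phrase list and function name are my own inventions:

```python
# A minimal sketch of fixed-phrase intent matching, assuming the system
# simply checks the utterance against a short list of known phrasings.
# This is an illustration, not how Alexa is actually built.

JOKE_REQUESTS = [
    "tell me a joke",
    "i'd like to hear a joke",
    "do you know any jokes",
]

def wants_a_joke(utterance: str) -> bool:
    """Return True only if the utterance contains a known joke phrasing."""
    text = utterance.lower()
    return any(phrase in text for phrase in JOKE_REQUESTS)

print(wants_a_joke("Please tell me a joke"))      # True
print(wants_a_joke("Share a witticism with me"))  # False: unknown wording defeats it
```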

There was a pioneering program, Eliza, built during the 1960s – there is a good summary of Eliza in the Wikipedia article about it – whose best-known script simulated a psychotherapist’s interview. Eliza appeared startlingly human because it could be configured to answer any statement with another question, in much the same way some therapeutic interviews take place, for example:

Interviewee: My boyfriend made me come here.

Eliza: Is it important that your boyfriend made you come here?

Interviewee: It’s true. I am unhappy.

Eliza: Can you explain what made you unhappy?

Of course, the system here identifies the term “unhappy” and responds with a question, using a few simple syntactic transforms. What is so surprising is that the public believed it to be intelligent.
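Eliza’s trick can be reproduced in a handful of lines. The sketch below is a reconstruction of the kind of rule involved, not Weizenbaum’s original code; the keyword and the transforms are illustrative assumptions:

```python
# An Eliza-style sketch: spot a keyword if possible, otherwise reflect the
# pronouns and echo the statement back as a question. These rules are
# illustrative, not Weizenbaum's original script.

REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(text: str) -> str:
    """Swap first-person words for second-person equivalents."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(statement: str) -> str:
    cleaned = statement.rstrip(".!?")
    if "unhappy" in cleaned.lower():
        # Keyword rule: a canned question triggered by a significant term.
        return "Can you explain what made you unhappy?"
    # Default rule: a simple syntactic transform of the input.
    return f"Is it important that {reflect(cleaned)}?"

print(respond("My boyfriend made me come here."))
# -> Is it important that your boyfriend made you come here?
print(respond("I am unhappy."))
# -> Can you explain what made you unhappy?
```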

It seems to be a lack of knowledge of even basic AI tools that results in these wildly inflated expectations of what AI can achieve, followed by equally irrational disappointment when the AI delivered does not come up with a full diagnosis.

Given this low level of AI, would it not be a good idea to explain to the public how little is involved in the interaction? Usually the people communicating the wild ideas about AI know little about its procedures, but this is not always the case. For example, the mathematician Marcus du Sautoy has written a book, The Creativity Code: How AI is Learning to Write, Paint and Think. As Professor of Public Understanding of Science at Oxford University, he would be expected to know his stuff, and perhaps it is not for me to criticise his works of popularisation. But du Sautoy pulls no punches in his blurb for this new book, very elegantly presented on his Oxford University home page:

From the Turing test to AlphaGo, are there limits to what algorithms can achieve, or might they be able to perfectly mimic human creativity? Why could a machine one day not create a truly original work of art? The answers, in this compelling and thought-provoking book, can be found by breaking down what it actually means to be creative.

It’s certainly popular science, but perhaps not the kind of science that du Sautoy was expected to promote when appointed. The current state of AI does not mimic human creativity, although it might indicate links and connections between statements that a human can then investigate further – not quite creativity, and certainly not the creation of artistic works.

Alongside the blurb for du Sautoy’s book, there are some very favourable quotes about the book, such as this one from Jeanette Winterson:

“Fact-packed and funny, questioning what we mean by creativity and unsettling the script about what it means to be human, The Creativity Code is a brilliant travel guide to the coming world of AI.”

It is perhaps not surprising: Winterson is the author of a recent novel based on an update of Mary Shelley’s Frankenstein (her version is called Frankissstein). Who knows, perhaps her reading of The Creativity Code inspired her to write her new novel.

All credit to Ms Winterson and her creative intelligence; but in terms of advancing the implementation of AI, this is not very useful and perhaps even counter-productive. It is the kind of wild blue-sky thinking that does not help the current implementation and acceptance of machine learning. If we are talking about practical AI tools in publishing, the fact that a system can create a very limited form of language (in the above example, changing the statement “my boyfriend made me come here” to “is it important that your boyfriend made you come here”) does not mean it can now write Hamlet. Many current examples of AI are limited in a similar way: their output stays very close to the input provided.

Rule-based AI, as exemplified above, is a current implementation of AI that has undoubted uses but is a long way from being a thinking machine. In a rule-based system, you, the human, give the machine some examples of the kind of phrases you are looking for:

I have got a temperature.

My temperature is very high.

I have a fever.

And so on. Note, however, that all the inputs (collectively called the training set) are defined by humans and can never go beyond the accuracy of what humans have identified. At best, such a system will approximate the level of the human input, albeit running a little faster. If a human comes up with a new way of describing their temperature (“I feel I’m almost off the Richter Scale!”), the machine will fail to recognise it. Unsupervised AI, the machine learning that identifies significant terms and concepts for itself without human labelling, has much greater potential, but it is far less widespread and less well understood.
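To make that failure mode concrete, here is a minimal sketch of such a matcher, assuming, purely for illustration, that recognition is keyword-based; the phrases and keywords are hypothetical, not drawn from any real medical system:

```python
# A minimal sketch of a rule-based matcher: it only recognises utterances
# that share vocabulary with the human-defined training set, so any novel
# phrasing falls through. The phrases and keywords are illustrative.

TRAINING_SET = [
    "i have got a temperature",
    "my temperature is very high",
    "i have a fever",
]

# Significant terms extracted (by a human) from the training set.
KEYWORDS = {"temperature", "fever"}

def reports_fever(utterance: str) -> bool:
    """Recognise the utterance only if it contains a known keyword."""
    words = set(utterance.lower().strip(".!?").split())
    return bool(words & KEYWORDS)

print(reports_fever("I have a fever."))                          # True
print(reports_fever("I feel I'm almost off the Richter Scale!")) # False: novel phrasing
```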

Whatever kind of AI is used, identifying the different kinds of AI, and seeing which of them is the most appropriate tool for the job, is perhaps a more mundane task than dreaming up new monsters, but it is far more immediately valuable – you could even say it contributes to the public understanding of AI.