“Hello, world.” (0:13)
“What emotion do you feel being awake in life?”
The film then cuts to the Oxford Future of Humanity Institute’s Nick Bostrom who announces to an enthralled gathering, “Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are now, as superintelligence with such technological maturity would be extremely powerful and at least in some scenarios it would be able to get what it wants.”
Various notables offer similar dire — and by now very familiar — prophecies. But then the show’s narrator offers some penetrating questions:
Can AI’s success at Jeopardy or Go be transferred to the rest of life? After all, these games involve narrow, well-defined tasks. When AI is given an open-ended task, things get messy in a hurry. (2:13)
Well, yes. That’s why IBM’s Watson flopped in medicine.
Computer engineering prof Robert J. Marks, author of Non-Computable You, notes that “The challenge for AI is bigger than the media lets on. A major limitation of artificial intelligence can be summarized with a single word: algorithms. An algorithm is nothing more than a recipe. It’s a step-by-step procedure to do something.
“A recipe for chocolate cake is a recipe and it is an algorithm. The input is all the ingredients and then you have the procedure. It’s how you preheat the oven, it’s how long you’ve cooked the cake, it’s how you put the icing on, et cetera…” (2:57)
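Dr. Marks’s recipe analogy translates directly into code. A minimal sketch (the particular ingredients and steps here are illustrative, not from the film): fixed inputs go in, a fixed step-by-step procedure runs, and the same inputs always produce the same result.

```python
# A recipe is an algorithm: declared inputs plus a fixed, ordered procedure.
# (Illustrative sketch; the specific steps are mine, not the film's.)

def bake_chocolate_cake(ingredients):
    steps = []
    steps.append("Preheat the oven to 350F")            # set up
    steps.append("Mix: " + ", ".join(ingredients))      # combine the inputs
    steps.append("Bake for 30 minutes")                 # cook
    steps.append("Apply the icing")                     # finish
    return steps

plan = bake_chocolate_cake(["flour", "sugar", "cocoa", "eggs"])
# Deterministic: the same inputs always yield the same four steps.
assert len(plan) == 4 and plan[0].startswith("Preheat")
```

The point of the analogy is exactly this determinism: nothing happens in the procedure that was not spelled out in advance.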
Then we hear another question that is not asked nearly often enough or directly enough in TED-talk world: Could an instruction set, an algorithm, somehow duplicate the human mind?
Oxford mathematician John Lennox offers, “I doubt it very much because computers appear to have certain limits. Roger Penrose is very interesting on this topic. He argues that you will never be able — even in principle — to simulate the activity of the human mind on a computer because the human mind can do things that a computer cannot do. They’re not computable.”
Cut to Dr. Marks: “There are things proven to be non-algorithmic. If something is non-algorithmic, it means that it is non-computable. You cannot write a computer program to do it. It’s just not possible. And it’s not a conjecture, it’s not wishful thinking, it’s a proven fact. It turns out that this non-algorithmic aspect translates to human abilities: creativity, sentience, and understanding. We’re talking about those as non-algorithmic properties, but there’s something which cannot be reduced to a computer program.” (4:00)
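The “proven fact” here is worth unpacking: the canonical example of a proven non-computable problem is Turing’s halting problem. A sketch of the diagonal argument (my illustration, not from the film) — suppose a total, correct `halts` function existed; feeding a specially built program its own source code forces a contradiction:

```python
# Sketch of Turing's diagonal argument (illustrative; not part of the film).

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical decider: True iff the program halts on the input.
    No total, correct implementation can exist, as paradox() shows."""
    raise NotImplementedError("provably impossible to implement in general")

def paradox(source: str) -> None:
    # If halts() existed, running paradox on its own source is contradictory:
    if halts(source, source):
        while True:        # halts() said we halt -- so loop forever
            pass
    # halts() said we loop forever -- so halt immediately
```

Whatever `halts` predicts about `paradox` running on its own source, `paradox` does the opposite, so no such decider can exist. That is the sense in which some tasks are provably beyond any algorithm.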
And Dr. Lennox again: “That’s why it’s called artificial intelligence. It’s not real intelligence. The computer doesn’t think; it’s not conscious.”
Ah yes, but then the narrator offers another question: What about all those talking robots? Some of them seem so lifelike.
Indeed, Sophia the robot flashes onto the screen once again to emit the “meaningful” banalities that are supplied to its program: “How do you feel about humans?” “I love my human compatriots. I want to embody all the best things about human beings…” (4:47)
Yada. Yada. Thud!
When humans say this rubbish, we know we are flubbing. And so does everyone else. Hearing it from something for which sincerity could not remotely be an issue is … somewhat painful at best.
The fun ends when Sophia is asked a question for which it has no pre-recorded response.
Selmer Bringsjord, director of the AI and Reasoning Lab at Rensselaer Polytechnic Institute, talks about the Lovelace test he developed to determine whether an AI “is being creative or is just mixing and matching works of human creativity according to a pre-programmed algorithm.” To pass, even the creators of the machine must concede that the machine’s work is both original and coherent. (6:06) Hasn’t happened.
Note: The Lovelace Test was named after computer pioneer Ada Lovelace (1815–1852), who “spooked” Alan Turing (1912–1954) when he came across her work roughly a century later and realized that his Turing Machine approach lacked something essential.
For example, as Dr. Marks points out, AI can combine all the works of Bach and go on to produce something that sounds like Bach. But it doesn’t go on from that to produce Stravinsky. Let alone, one would add, Andrew Lloyd Webber. It cannot, by its nature, think outside the box.
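The “mixing and matching” that the Lovelace test rules out can be made concrete. A toy sketch (mine, not the film’s): a Markov chain trained on a stand-in “Bach melody” can only ever emit notes it has already seen, via transitions it has already seen — remixing the corpus, never stepping outside it.

```python
# Toy illustration: a Markov chain remixes its training data but, by
# construction, can never produce a note or transition it was not given.
import random
from collections import defaultdict

def train(corpus):
    """Record every observed transition note -> next note."""
    model = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, seed=0):
    """Walk the transition table from a starting note."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return out

bach = list("CDEFGABC")          # stand-in for a training melody
melody = generate(train(bach), "C", 8)
# Every emitted note already appears in the training data:
assert all(note in bach for note in melody)
```

Scaled up with bigger corpora and deeper statistics, this is still the same operation: recombination of what was supplied, according to a pre-programmed procedure.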
Philosopher Jay Richards offers a useful distinction: “Computers and machines work at the level of syntax. They work at the level of rules that can be manipulated. Agents — intelligent persons — work at the level of semantics, at the level of meaning and so we understand what the symbols mean, what they entail. The machines don’t. They’re simply manipulating them. Software is software.” (7:59)
True, but now a big question looms, as we brace ourselves for all the TED talks we are likely to hear about in the next few years: So why do some people keep predicting that machines will someday become creative, emotionally alive? (8:20)
The film’s masked narrator responds, “Belief that machines will become alive and conscious isn’t based on science, it’s based on materialism. Materialists claim that humans are nothing more than machines, so we should be able to replicate ourselves in machines. But where’s the evidence?”
Venturing an off-script answer, one might say: If materialism is the bedrock assumption, no evidence is needed. Only those for whom it is not an assumption ask for evidence. In this case, once evidence is asked for, the façade crumbles.
Then we are staring at the reality of being uniquely human.