it may be that today’s large neural networks are slightly conscious
— Ilya Sutskever (@ilyasut) February 9, 2022
Well, “slightly conscious” is like being “slightly pregnant” or “slightly dead.”
While Sutskever didn’t name any specific developments, he was likely referring to huge natural language processing systems like OpenAI’s GPT-3, which can translate, answer questions, fill in missing words, and generate fake news.
No thought process is involved. The system approximates vast masses of actual instances of language use. The more stereotyped the language use is, the easier it is to generate a similar document.
The unusual part of the story is that colleagues started telling Sutskever to quit making such foolish statements:
Sutskever faced a backlash soon after posting his tweet, with most researchers concerned he was overstating how advanced AI had become, Futurism reported.
‘Every time such speculative comments get an airing, it takes months of effort to get the conversation back to the more realistic opportunities and threats posed by AI,’ said UNSW Sydney AI researcher Toby Walsh.
Ryan Morrison, “‘I’m sorry, Dave. I’m afraid I can’t do that’: Artificial Intelligence expert warns that there may already be a ‘slightly conscious’ AI out in the world” at Daily Mail (February 11, 2022)
Emeritus computer science prof Thomas Dietterich accused Sutskever of trolling. Another colleague took to Twitter to post his response:
— Valentino Zocca (@ItalyHighTech) February 11, 2022
And another colleague offered an even blunter assessment,
“Ilya Sutskever, chief scientist of the OpenAI research group, tweeted today that ‘it may be that today’s large neural networks are slightly conscious’.”
It may be that Ilya Sutskever is slightly full of it. Maybe more than slightly.
— Michael Bolton (@michaelbolton) February 11, 2022
The problem, as Toby Walsh was quoted saying in the Daily Mail, is that there are many reasons to be concerned about how artificial intelligence will be used. But, most professionals say, the least likely source of worry is that it will become conscious (or malignant) all on its own. Incidentally, Walter Bradley Center director Robert J. Marks points out that even HAL was probably just obeying its programming when it acted as murderously as it did:
At one point on the trip from Earth to Jupiter, HAL becomes suspicious that the crew might be sabotaging the mission. HAL then purposely tries to kill all the crew. The most logical explanation for this act is a coding error. HAL was programmed to operate on the basis that the mission took priority over human life.
Robert J. Marks, “Could HAL 9000 ever be built?” at Mind Matters News
And one thing HAL could never do was depart from the programming.
At Futurism, we hear more about Sutskever’s perspective:
He’s long been preoccupied with artificial general intelligence, or AGI, which would refer to AI that operates at a human or superhuman level. During his appearance in the AI documentary “iHuman,” for instance, he even declared that AGIs will “solve all the problems that we have today” before warning that they will also present “the potential to create infinitely stable dictatorships.”
This tweet, however, marks the first time Sutskever, who cofounded OpenAI alongside SpaceX CEO Elon Musk and the company’s CEO Sam Altman in 2015, appears to have claimed that machine consciousness has already arrived.
Even stranger is the fact that OpenAI was founded as a nonprofit meant specifically to curb the existential risks sentient machines pose — before, in an eyebrow-raising twist, diving into research trying to bring powerful AI into existence.
Noor Al-Sabai, “OpenAI chief scientist says advanced AI may already be conscious” at Futurism (February 10, 2022)
AI can’t simply develop human consciousness because much, if not most, of human consciousness is non-computable.
Perhaps Ilya Sutskever has been staring at his creations too long. Alternatively, fear of fully conscious AI may be a modern superstition, replacing earlier fears of witches.
You may also wish to read:
New AI can create — and detect — fake news. But how good is it at either task? We tested some copy. Will the predicted tsunami of fake news and advertising make much difference? Possibly, but in ways that might surprise you.
Why human creativity is not computable There is a paradox involved with computers and human creativity, something like Gödel’s Incompleteness Theorems or the Smallest Uninteresting Number. Creativity is what we don’t know. Once it is reduced to a formula a computer can use, it is not creative any more, by definition.