The idea that artificial intelligence could ever become genuinely “intelligent” is a minority view, but it’s espoused by some brilliant minds, including Blake Lemoine, a former Google engineer who claimed the AI system the company was developing was sentient. Lemoine isn’t alone.
According to Futurism, OpenAI’s chief scientist, Ilya Sutskever, claimed in a tweet this week that “large neural networks are slightly conscious.” Noor Al-Sibai writes,
He’s long been preoccupied with artificial general intelligence, or AGI, which refers to AI that operates at a human or superhuman level. During his appearance in the AI documentary “iHuman,” for instance, he even declared that AGIs will “solve all the problems that we have today” before warning that they will also present “the potential to create infinitely stable dictatorships.” -Noor Al-Sibai, OpenAI Chief Scientist Says Advanced AI May Already Be Conscious (futurism.com)
It’s baffling to hear these top AI figures hail their creations both as the world’s savior and its potential grim reaper. The average citizen might wonder whether planetary domination via AI is worth the risk. Some more steady, balanced perspectives, however, might temper the hype.
What is Consciousness?
Computer engineer Robert J. Marks is one of those sane voices, and he often reiterates that AI will never achieve human-like intelligence or consciousness. Computers run on algorithms, and their functions are qualitatively different from human emotion and thought. Perhaps such debates, however, are stuck in semantic confusion. What does it really mean to be “conscious”? That alone is an ongoing question that mystifies and intrigues philosophers and scientists alike. Can consciousness be quantified via material means? Could we ever pinpoint the “consciousness spot” in the brain? Ultimately, Sutskever’s claim that AI may already be conscious is a question of worldview: does he see human beings as computational machines, or does he see them as embodied souls?
The new book Minding the Brain from Discovery Institute Press is an anthology in which 25 renowned philosophers, scientists, and mathematicians seek to address that question. Materialism, they argue, shouldn’t be the only option for how we think about ourselves or the universe at large. Contributor Angus Menuge, a philosopher at Concordia University Wisconsin, writes,
Neuroscience in particular has implicitly dualist commitments, because the correlation of brain states with mental states would be a waste of time if we did not have independent evidence that these mental states existed. It would make no sense, for example, to investigate the neural correlates of pain if we did not have independent evidence of the existence of pain from the subjective experience of what it is like to be in pain. This evidence, though, is not scientific evidence: it depends on introspection (the self becomes aware of its own thoughts and experiences), which again assumes the existence of mental subjects. Further, Richard Swinburne has argued that scientific attempts to show that mental states are epiphenomenal are self-refuting, since they require that mental states reliably cause our reports of being in those states. The idea, therefore, that science has somehow shown the irrelevance of the mind to explaining behavior is seriously confused.-Angus Menuge, When Science Points Beyond the Physical | Mind Matters
The AI optimists can’t get away from the problem of consciousness. Nor can they ignore the unique capacity of human beings to reflect on themselves and ask questions that are peripheral to their survival needs. Capacities like these can’t be defined algorithmically or by a materialistic conception of the human person. To counter the idea that computers can be conscious, we must cultivate an understanding of what it means to be human. Then maybe all the technology humans create will find a more modest, realistic place in our lives.
For further reading: