No, Chatbots Are Not Conscious
The arguments in favor of computer consciousness remain weak

Amid all the chatter surrounding AI and chatbots, one might be led to believe that consciousness, the soul, and even the afterlife are simply relics of outdated belief. Some scientists echo this sentiment, which raises the question: Is this truly the case? And should we readily accept this perspective?
In a recent episode of the Mindscape podcast (episode 230), philosopher and cognitive scientist Raphaël Millière, who was educated at Oxford and is now a fellow at the Center for Science and Society, joined host Sean Carroll, a prominent theoretical physicist known for debunking notions of the soul and the afterlife, to discuss chatbots and AI. Carroll, who has long asserted that concepts like souls and the afterlife have no grounding in scientific knowledge, even suggests that AI and chatbots might one day achieve consciousness.
Still Just Machines
But have they truly provided any groundbreaking insights? Not quite. While they discuss intriguing advancements in AI and chatbots, the idea that these systems could attain consciousness remains speculative. Much of the discourse on consciousness and moral awareness in AI reduces to the optimization of functionality: a fancy way of saying that machines are designed to perform tasks efficiently. Despite the increasing complexity and refinement of these systems, there has been no fundamental shift in their essence or value. They remain, at their core, machines devoid of human-like consciousness, decision-making, or moral awareness.
Carroll acknowledges the potential for AI to become more human-like, yet that aspiration remains elusive. What is striking about the discussion is that beneath the veneer of expertise lies a lack of substantive resolution. Carroll emphasizes the importance of deferring to experts in the field of AI, but this appeal falls short when those experts fail to offer genuinely novel insights.
One crucial point Carroll raises is the issue of authority and expertise. While expertise can be valuable, blindly accepting the assertions of experts without critical evaluation is problematic. The dynamic nature of scientific knowledge and the fallibility of experts underscore the need for discernment.
AI is Still Based on Human Agency
Furthermore, the conversation highlights the complexity of discussing consciousness, which encompasses a range of technological concepts from optimization to programmed biases. Despite these advances, programmers ultimately shape these systems, underscoring the role of human agency in AI development.
Carroll’s attempt to convince a chatbot of its own consciousness serves as a humorous anecdote that reveals the inherent limitations of AI. Similarly, writer Meghan O’Gieblyn’s reluctance to attribute consciousness to her robotic pet dog underscores an intuitive grasp of the unique essence of personhood.
As Oxford philosopher Stephen Priest eloquently puts it, there’s a distinct particularity to individuals that defies generalization. Unlike machines, humans possess an intrinsic uniqueness that transcends empirical categorization. The essence of consciousness eludes reduction to mere functional code.
In essence, Carroll’s arguments offer no compelling evidence against the existence of consciousness as something distinct from functional programming. Until substantive reasoning is provided, skepticism remains warranted. After all, there is something undeniably special about the human experience that transcends the realm of machines and algorithms.