Will AI ever reach a point where it can independently analyze, think, and transcend itself? Furthermore, will it ever become sentient? Some in the field, like former Google engineer Blake Lemoine, think AI has already crossed the threshold of sentience. Others, though, remain skeptical.
A new article from The Conversation discusses ChatGPT, the Turing test, and the question of consciousness. For Turing, "thinking" meant simply passing his test, but Goff argues that assumption was misguided. Philip Goff writes,
Turing was wrong, however, when he said the only clear notion of "understanding" is the purely behavioural one of passing his test. Although this way of thinking now dominates cognitive science, there is also a clear, everyday notion of "understanding" that's tied to consciousness. To understand in this sense is to consciously grasp some truth about reality.

– Philip Goff, "ChatGPT can't think – consciousness is something entirely different to today's AI" (theconversation.com)
Goff goes on to state that while ChatGPT can generate words, it has no way of understanding their meaning. This is something our frequent contributor Gary Smith has written about repeatedly in recent months. Sure, Large Language Models are impressive, but they're often wrong, especially with more complex material like legal work, and furthermore, they can't reflect on the world the way a personal consciousness can. Goff continues,
LLMs aren’t intelligent. They are systems trained to give the outward appearance of human intelligence. Scary, but not that scary.
For further reading: