Mythic AI is Dangerous
The evidence is inconclusive (at best) that AI could ever think or interact like a human being.

Computer scientist and tech entrepreneur Erik Larson recently launched a Substack newsletter where he’ll be covering AI, tech, and human exceptionalism in the digital age. A new post there discusses the dangers of “mythical AI” and why the evidence that machines could ever think like us remains so inconclusive. Here’s an excerpt:
In spite of the almost religious fervor about the mental powers of AI, we have at best inconclusive evidence that AI systems will get smart like humans—the so-called general intelligence attributed to us has so far proven entirely elusive for machines. It’s an open question whether future AI can really achieve general, or human-level, intelligence. To date, we have evidence that AI systems can solve an increasing range of tasks, but they are narrowly defined, like playing a game or recognizing a face. Even the current obsession, large language models and their applications like ChatGPT, while certainly impressive, still generate word sequences culled from human language on the web. While their output often seems generally intelligent, the systems can’t be used outside the web. They can’t, for instance, drive a car, or operate in the natural world. They’re confined to the cyber world. Are we really building something generally intelligent, like a human? It seems no.
Source: “The Perils of Mythical AI,” by Erik J. Larson, Colligo (substack.com)
The mythos surrounding AI is powerful, lending it an allure for those who want to think of it not just as a tool but as a center of actual consciousness.