When I started writing about science decades ago, artificial intelligence seemed ascendant. IEEE Spectrum, the technology magazine for which I worked, produced a special issue on how AI would transform the world. I edited an article in which computer scientist Frederick Hayes-Roth predicted that AI would soon replace experts in law, medicine, finance and other professions.
– John Horgan, “Will Artificial Intelligence Ever Live Up to Its Hype?” at Scientific American (December 4, 2020)
But that year, 1984, ushered in an AI winter, in which innovation stalled and funding dried up. By 1998, problems like non-recurrent engineering had begun to be recognized: “Algorithms that can perform a specialized task, like playing chess, cannot be easily adapted for other purposes.”
Today, while AI appears to be booming, Horgan says, hype frustrates critical appraisal of advances. For example, many readers may be surprised by this item from his recent Scientific American article:
Google Health’s claim in Nature that its AI program had outperformed professionals in diagnosing breast cancer is suspect: “In October, a group led by Benjamin Haibe-Kains, a computational genomics researcher, criticized the Google Health paper, arguing that the ‘lack of details of the methods and algorithm code undermines its scientific value.’” The problem is that the details are in the code, and Google won’t share the code. An article on that question from MIT’s Technology Review tells us, “AI is wrestling with a replication crisis: Tech giants dominate research but the line between real breakthrough and product showcase can be fuzzy.”
If you hadn’t heard that, you are not alone. We don’t hear much about failures, stalls, and dubious claims around AI because, generally speaking, media follow a special standard when covering it: Progress is simply assumed. Outrageous hype is forgivable. Astounding claims are not queried. Stalls and failures are minimized rather than highlighted. And the possibility that some prophesied advances may be impossible in practice because the problems are not computable is seldom entertained — possibly not even understood.
Perhaps the most interesting thing Horgan learned from Larson before The Myth of Artificial Intelligence was published is that there is “a very large mystery at the heart of intelligence, which no one currently has a clue how to solve”:
“Put bluntly: all evidence suggests that human and machine intelligence are radically different. And yet the myth of inevitability persists.”
When I first started writing about science, I believed the myth of AI. One day, surely, researchers would achieve the goal of a flexible, supersmart, all-purpose artificial intelligence, like HAL. Given rapid advances in computer hardware and software, it was only a matter of time. And who was I to doubt authorities like Marvin Minsky?
– John Horgan, “Will Artificial Intelligence Ever Live Up to Its Hype?” at Scientific American (December 4, 2020)
My goal is making machines that can think—by understanding how people think. One reason why we find this hard to do is because our old ideas about psychology are mostly wrong. Most words we use to describe our minds (like “consciousness,” “learning,” or “memory”) are suitcase-like jumbles of different ideas. Those old ideas were formed long ago, before “computer science” appeared. It was not until the 1950s that we began to develop better ways to help think about complex processes.
– John Brockman, “Consciousness is a big suitcase” at The Edge
In 1995, philosopher David Chalmers coined the term “Hard Problem of consciousness” to categorize a problem that is not a “big suitcase” and that defies so simple a solution as “computer science.”
Horgan meanwhile became, as he puts it, “an AI doubter.” Of Larson’s Myth, he says, “Erik Larson exposes the vast gap between the actual science underlying AI and the dramatic claims being made for it.” In his Scientific American piece, he reflects, “…our minds — in spite of enormous advances in neuroscience, genetics, cognitive science and, yes, artificial intelligence — remain as mysterious as ever.”
Actual mysteries may be fruitful if we can live with them; false solutions are not.
Note 1: Most recently, Horgan has published a book, Mind–Body Problems, which is free to read at his site.
Note 2: The photo of Marvin Minsky is courtesy Sethwoodworth at English Wikipedia (transferred from en.wikipedia to Commons by Mardetanha using CommonsHelper), CC BY 3.0.
You may also wish to read design theorist William Dembski’s takes on Larson’s Myth:
New book massively debunks our “AI overlords”: Ain’t gonna happen. AI researcher and tech entrepreneur Erik J. Larson expertly dissects the AI doomsday scenarios. Many thinkers have tried to stem the tide of hype but, as an information theorist points out, no one has done it so well.
No AI overlords?: What is Larson arguing and why does it matter? As information theorist William Dembski explains, computers can’t do some things by their very nature.