As the year winds down, our Walter Bradley Center director Robert J. Marks interviews fellow computer nerds (our Brain Trust) Jonathan Bartlett and Eric Holloway about 12 overhyped AI concepts of the past year.
Hey, as we like to say, great stuff happened in AI this year. But lots of “stuff” happened too, and it’s time to have some fun! So here’s #10: Replication problems tarnish the image of rapid AI progress:
#10 starts at about 12:44. A partial transcript and Show Notes follow, along with Additional Resources and the entire transcript.
Robert J. Marks: #10: Will artificial intelligence ever live up to its hype? The subtitle of the article with that name in this month’s Scientific American is “Replication problems plague the field of AI, and the goal of general intelligence remains as elusive as ever.” Eric, what do you think?
Eric Holloway: We have this training problem where they don’t really train their models in a way that fits the real world, they don’t really have the constraints well defined, and they don’t really follow scientific methods. Fundamentally, it’s kind of unsurprising that once you hit the real world, all the hype kind of deflates. And the author of this article, he looked at, I think, 40 different AI startups that were originally really hyped.
They were going to change the way the world works and everything. And after the fact, once they actually started trying to use their products in reality, their venture capitalists decided, “Yeah, these companies aren’t really living up to the hype. The AI is going to be much less impactful than we originally thought.”
Elon Musk made headlines in 2018 when he warned that “superintelligent” AI, much smarter than we are, represents “the single biggest existential crisis that we face.” (Really? Worse than climate change? Nuclear weapons? Psychopathic politicians? I suspect that Musk, who has invested in AI, is trying to promote the technology with his over-the-top fearmongering.)
Experts are pushing back against the hype, pointing out that many alleged advances in AI are based on flimsy evidence. Last January, for example, a team from Google Health claimed in Nature that their AI program had outperformed humans in diagnosing breast cancer. In October, a group led by Benjamin Haibe-Kains, a computational genomics researcher, criticized the Google Health paper, arguing that the “lack of details of the methods and algorithm code undermines its scientific value.”

John Horgan, “Will Artificial Intelligence Ever Live Up to Its Hype?” at Scientific American (December 4, 2020)
Robert J. Marks: I think there’s always been hype associated with AI. In fact, I ran across a 1957 New York Times article reporting that the Navy had come up with artificial intelligence that in the future would be able to walk and talk and reproduce. This was back when Bernard Widrow at Stanford and Frank Rosenblatt, I believe, at Cornell were doing rudimentary artificial intelligence—and the hype was there.
Note: Here’s “Behind the Giant Brains,” which offers a perspective on what was happening in 1957, the early days of computing.
Eric Holloway: Well, even at the very beginning, the field was started by Marvin Minsky and Claude Shannon and some of the luminaries of information theory. And they thought, “Let’s just get like 10 of us really smart people and give us funding for like a month, and we’ll give you intelligence that can learn just like a person, do all the things just like a person.” And here we are, more than six decades after that.
Note: Marvin Lee Minsky (1927–2016) said in 1967, “Within 10 years computers won’t even keep us as pets.”
Robert J. Marks: AGI, artificial general intelligence, keeps changing names. It used to be called hard intelligence, hard artificial intelligence. But there’s the assumption that it can be achieved. And I think that both you and I are on the same page: there are fundamental challenges in computer science that are going to prohibit this from ever being achieved.
Note: See, for example, “Six limitations of artificial intelligence as we know it”: You’d better hope it doesn’t run your life, as Robert J. Marks explains to Larry Linenschmidt. “One of the things that we can identify that humans can do that computers can’t do are things which are non-algorithmic. If it’s non-algorithmic, it means it’s non-computable.” That’s a fundamental limit.
Eric Holloway: Yeah. The very fact that we have to differentiate the fields now actually points to the problem. Originally, when Shannon and Minsky were coming up with the field, they’re like, “Oh yeah, it’s just a computation. We’ll just have a fancy algorithm and that’ll do it.” And now we’re finding that all these algorithms we thought were going to be the AI turn out to work only in very, very small domains and very restricted data sets.
Retro: ENIAC (1946) was the first general-purpose electronic digital computer:
Watch for #9 soon: The story behind Erica the Robot… hype vs. ulp!
In our countdown for the Top Twelve AI Hypes of 2020…
11: A lot of AI is as transparent as your fridge. A great deal of high tech today is owned by corporations. Lack of transparency means that people trained in computer science are often not in a position to evaluate what the technology is and isn’t doing.
12: AI is going to solve all our problems soon! While the AI industry is making real progress, so, inevitably, is hype. For example, machines that work in the lab often flunk real settings.
- 01:13 | Introducing Jonathan Bartlett
- 01:39 | Introducing Dr. Eric Holloway
- 02:00 | #12: “The way we train AI is fundamentally flawed” (MIT Technology Review)
- 09:08 | #11: “Transparency and reproducibility in artificial intelligence” (Nature)
- 12:44 | #10: “Will Artificial Intelligence Ever Live Up to Its Hype?” (Scientific American)
- 16:58 | #9: “What to make of Erica, the AI Superstar Robot?” (Mind Matters News) and “A.I. Robot Cast in Lead Role of $70M Sci-Fi Film” (The Hollywood Reporter)
- Jonathan Bartlett at Discovery.org
- Eric Holloway at Discovery.org
- #12: “The way we train AI is fundamentally flawed” (MIT Technology Review)
- #11: “Transparency and reproducibility in artificial intelligence” (Nature)
- Get George Gilder’s new book Gaming AI for FREE!
- Gaming AI: Why AI Can’t Think but Can Transform Jobs by George Gilder at Amazon
- #10: “Will Artificial Intelligence Ever Live Up to Its Hype?” (Scientific American)
- The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik Larson at Harvard University Press
- #9: “What to make of Erica, the AI Superstar Robot?” (Mind Matters News) and “A.I. Robot Cast in Lead Role of $70M Sci-Fi Film” (The Hollywood Reporter)