Artificial Intelligence: Prophets in Conflict
A total of 48 AI experts tell us what it all means but their predictions strongly disagree
Architects of Intelligence: The truth about AI from the people building it (2018), compiled by futurist Martin Ford (23 experts), and Possible Minds: Twenty-Five Ways of Looking at AI (2019), compiled by John Brockman (25 experts), offer a total of 45 experts foretelling our future. Three experts, Rodney Brooks (Rethink Robotics), Judea Pearl (UCLA), and Stuart Russell (UC Berkeley), were interviewed for both books, which is why the number of distinct experts sums to 45, not 48.
The major disagreements among contributors to both Architects of Intelligence and Possible Minds are the classic ones: whether AI will achieve human-like intelligence and whether it will wipe us out. And yet, as a reviewer of both books notes,
Almost everyone agrees that certain questions — when general AI (that is, AI that has human-level problem-solving abilities) will happen, how it’ll be built, whether it’s dangerous, how our lives will change — are questions of critical importance, but they disagree on almost everything else, even basic definitions. Surveys show different experts estimating that we’ll arrive at general AI any time from 20 years to two centuries from now. That’s an astonishing amount of disagreement, even in a field as uncertain as this one.

Kelsey Piper, “How will AI change our lives? Experts can’t agree — and that could be a problem.” at Vox
She wonders how a field can even get to this point:
At best, it feels like we’re seeing many blind men grasping at the same elephant. At worst, we’re watching them walk right into a deadly mistake, failing to take the high uncertainty and differing expectations of their coworkers as the concerning sign that we should read it as.

Kelsey Piper, “How will AI change our lives? Experts can’t agree — and that could be a problem.” at Vox
She also observes that the essayists seem haunted by the specter of another “AI winter” when advances hit a ceiling and stall, perhaps for decades. Software engineer Brendan Dixon notes that “Roughly every decade since the late 1960s has experienced a promising wave of AI that later crashed on real-world problems, leading to collapses in research funding.” He offers some context:
First, what caused previous AI winters? There was one straightforward reason: The technology did not work. Expert systems weren’t experts. Language translators failed to translate. Even Watson, after winning Jeopardy, failed to provide useful answers in the real-world context of medicine. When technology fails, winters come.
Nearly all of AI’s recent gains have been realized due to massive increases in data and computing power that enable old algorithms to suddenly become useful. For example, researchers first conceived neural networks—the core idea powering much machine learning and AI’s notable advances—in the late 1950s. The worries of an impending winter arise because we’re approaching the limits of what massive data combined with hordes of computers can do.

Brendan Dixon, “AI Winter Is Coming” at Mind Matters News
Is there any law of nature that says we must always be able to transcend limits with new technology?
Next: What Are the “Architects of Intelligence” actually designing?
Even their polite disagreements are fairly substantial
Possible Minds? But What If the Minds Are Impossible? Suppose we actually can’t create thinking AI? How would THAT change the world?