Craig Mundie, former Microsoft Chief Research & Strategy Officer, was blunt in his assessment at the COSM conference on tech’s future in medicine: “Human biology is too complicated for humans. If it is too complicated for humans, is it too complicated for machines?” He says no.
He told his audience that Big Data will enable each person to be “completely understood” by Artificial General Intelligence machines that can produce a computer facsimile of each detail of a single individual. It would be far too complex for human physicians to make sense of, he said.
In his talk, he cited recent victories of machines like AlphaGo at games, and of DeepMind and OpenAI at crunching big data in medicine. He believes the machines are demonstrating “emergent behavior” — that at 1.5 billion parameters, they generate their own ideas, better than human ideas, without help from humans (“100% unsupervised”).
By contrast, he said, “Humans have a finite capacity, overwhelming people’s ability to learn. They are becoming narrower and narrower specialists.”
Humans, he laughed, “spent 3000 years learning to play Go, then got smoked by a machine that learned on its own.” He added, “We need to start to think that we are raising a polymath machine prodigy” that can reason across moral and economic domains better than we can.
Mundie told his audience that the term artificial general intelligence (AGI) was coined because the way most people use the term “artificial intelligence” (AI) is just a rebranding of everyday machine learning, like the next word that is suggested when we text. He seeks machines that equal the capacity of humans. “I believe that in the next ten years, we will build those machines.”
However, when futurist George Gilder specifically asked him if the existing machines he cited showed creativity or mere problem-solving, he admitted that it was “not creativity per se.” But he reiterated that the new machines would still be “beyond the capability of humans.”
Mathematician and philosopher Bill Dembski, speaking on cryptocurrencies at the same conference, was skeptical. He reminded Mind Matters News that these types of hand-picked demonstrations of AI intelligence have been around since the 1960s and that little fundamental progress has been made in artificial general intelligence. “What about the file drawer problem?” he asked. “With millions of examples of AI-generated text, they are bound to come up with some that seem to make sense.”
Other analysts have noted that games with strict rules, like chess and Go, are especially easy for computers to master. The difficulties begin in real-life situations like medicine, where possibilities are fuzzy and AI has often flopped. Pomona College statistician Gary Smith went so far as to say recently, “Computers’ stupidity makes them dangerous”:
Despite their freakish skill at board games, computer algorithms do not possess anything resembling human wisdom, common sense, or critical thinking. Deciding whether to accept a job offer, sell a stock, or buy a house is very different from recognizing that moving a bishop three spaces will checkmate an opponent. That is why it is perilous to trust computer programs we don’t understand to make decisions for us.
Mundie was speaking at COSM, A National Technology Summit: AI, Blockchain, Crypto, and Life After Google, held October 23–25, 2019, sponsored by the Walter Bradley Center for Natural and Artificial Intelligence and hosted by technology futurist George Gilder. Mundie is currently President of Mundie & Associates and a director of the Institute for Systems Biology.
Denyse O’Leary reporting live from the COSM Technology Summit.