Medical technology concept. Electronic medical record.

Top Tech Analyst: Human Biology Is Too Complicated for Humans

He hopes to customize medicine for every patient by building a virtual proxy of each person’s biology

Recently, Jay Richards interviewed Craig Mundie, former Microsoft Chief Research and Strategy Officer and currently president of Mundie and Associates, which coaches CEOs on technology strategy and organizational design. The topic was what AI can do for medicine and health care.

Mundie’s dream is to build an AI that rivals human intellect. While many worry that AI can aid totalitarian rule, he sees himself as more pragmatic: it can, he says, be used for good or ill, just like any other technology. He explained to Richards why we need AI to dominate health care:

From the interview:

Craig Mundie: I think there are a few big trends that are happening. Really big computers that are good at doing artificial intelligence type things are emerging and continuing to grow fast. We have the arrival of new molecular assays, in particular, proteomic assays in the biomedical field. And we have, we’re on the road at least, to try to build an artificial general intelligence. Each of those is interesting in their own right. But when I looked at them together, I thought it created the basis of thinking about a completely new way to move toward a future of medicine that would be completely personalized by combining the artificial general intelligence to build a model of human biology, and then use the human biologic model to figure out new ways to practice medicine, and even to regulate medicine, such that it became individualized.

Jay Richards: What do you realistically think is going to need to be modeled for this to happen?

Craig Mundie: Well, I think the problem is, you need to model all of human biology. In my opinion, human biology is too complicated for humans. It’s just too complicated. So the question is, is it possible that a machine could, in fact, understand it all, because of the machine’s ability to ingest this incredible detail, and at scale. Therefore, could it learn a model of biology that’s too complicated for humans to write down and express themselves?

Mundie sees the present accomplishments of AI as “glorified machine learning”; he seeks a machine that “rivals human intellect.”

Craig Mundie: I don’t know, I don’t even need to know, whether or if we’ll get to the point where this thing has true general intellect. But I’m pretty convinced that we’re going to be able to achieve mastery of enough domains in a single machine that it will solve problems that humans can’t solve, and the one I want to solve is health.

He finds it very difficult to predict when AI will rival human intellect but suggests 20 to 30 years. He was less specific about which problems the artificial intellect should tackle first:

Jay Richards: What do you think will be the first tough problems that you think could be solved? A lot of people want to talk about unlimited life span and things like that. Are you thinking about discrete diseases, cancer, reduced mortality curve, something like that?

Craig Mundie: Well, I think the way I express it is, every person is different. The real question is, can we improve life outcomes for every individual in an economical way? If you do that, then I think you’ve helped everything, no matter what. I don’t have to measure it by achieving any one thing.

In the nearer term, he hopes to build a “proteomic-based proxy for human biology, or the state of any person”:

Craig Mundie: … and that is now clearly possible. That has just happened in the last year or so. Then, I think we can do all kinds of optimizations using that thing as our simulation engine. That’ll allow us to do drug discovery. It’ll allow us to understand disease as more than problems in any one organ, but the interconnectedness of these things in the human system. From that, I think we’ll be able to improve health care, even as we know it today, and hopefully move us down the path to being able to customize it for every person.

In a talk at COSM 2019 last October, Mundie put his views on AI vs. human intellect more bluntly:

Humans, he laughed, “spent 3000 years learning to play Go, then got smoked by a machine that learned on its own.” “We need to start to think that we are raising a polymath machine prodigy” that can reason across moral and economic domains better than we can.

Denyse O’Leary, “Former Microsoft head of research: Machines will soon know better than your doctor” at Mind Matters News

Some expert attendees took issue with his approach. Futurist George Gilder asked him to clarify whether the machines he cited as evidence of the trend toward super AI showed actual creativity or mere problem-solving. Mundie conceded that they showed “not creativity per se.” However, he made clear that the new machines he envisions will be “beyond the capability of humans.”

Mathematician and philosopher Bill Dembski, who was speaking on cryptocurrencies at the same conference, reminded Mind Matters News that hand-picked demonstrations of AI intelligence date back to the 1960s and that little fundamental progress has been made in artificial general intelligence, as opposed to machine learning. “What about the file drawer problem?” he asked. “With millions of examples of AI-generated text, they are bound to come up with some that seem to make sense.”

Machines with massive computing power can master games like chess and Go, which feature strict rules and clearly defined goals. In medicine, where the possibilities are often murky and the goals are very much a work in progress, AI has often flopped. Pomona College statistician Gary Smith has noted that “Computers’ stupidity makes them dangerous”:

Despite their freakish skill at board games, computer algorithms do not possess anything resembling human wisdom, common sense, or critical thinking. Deciding whether to accept a job offer, sell a stock, or buy a house is very different from recognizing that moving a bishop three spaces will checkmate an opponent. That is why it is perilous to trust computer programs we don’t understand to make decisions for us.

Gary Smith, “Computers’ stupidity makes them dangerous” at Mind Matters News

Programmer Jonathan Bartlett commented recently that the real risk is that flashy demonstrations may lead us to believe that AI can do things it really can’t and to hand over to it responsibilities it cannot fulfill.

The next two decades will probably tell us a lot.


You may also enjoy Jay Richards’ interviews with other tech notables:

AT&T CTO says, yes, you can live without your smartphone. And, says Andre Fuetsch, you might like what replaces it a lot better.

If self-driving cars become the norm, what will it feel like? Already, Millennials are more likely than their parents to see transportation as simply a means to an end. Jay Richards explores what we can expect in the near future with transportation analyst Bryan Mistele.

and

Valley insider Peter Thiel’s comments last year proved prophetic. China’s recent takeover of Hong Kong and the campus Cancel Culture spotlight his warnings for our culture’s future in the age of 5G. What makes Thiel unique is that, in important ways, he contradicts the Valley stereotype, as the discussion with tech philosopher George Gilder shows.

