Yesterday, we looked at two of the six assumptions that, according to George Gilder in Gaming AI, are generally shared by those who believe that we will merge with our machines in a Singularity. These first two assumptions were: The brain is a computer and Big Data is a Big Answer.
Now here are two more:
• “The Binary Reality Assumption: Reliable links exist between maps and territories, computational symbols, and their objects.” (p. 50)
Basically, this assumption means that reality will somehow always phone the computer’s map to say that things have changed. That assumption underlies the push for self-driving cars that can handle any terrain (Level 5). The machine is to handle all possibilities, including remote, unmapped ones. Unfortunately, wrong guesses can mean fatalities, for example: “The Autopilot system likely misinterpreted the truck as an overpass and assumed it was safe to pass under. It was not.”
As Jonathan Bartlett, who works on an autonomous vehicle project, points out, in the real world, Level 4 self-driving, where the car works within a fully mapped environment, is the one Daimler, Waymo, and GM are buying into—and making gains with. The car is not expected to think like a human but it doesn’t need to if its map is a reliable one.
When it comes to a world more complex than cars, the gap between map and reality widens. That’s especially likely to be true of the human brain. But it turns out to be true even of a worm’s brain. Gilder recalls a personal conversation with a worm genome mapper:
Only one biological connectome has been mapped in detail. That is the nervous system of a nematode, the millimeter-long roundworm Caenorhabditis elegans, which comprises 300 neurons linked by 7,000 connections. Defining this connectome took ten years. Engaged in nematode neural research for four decades, from the Nobel labs of Sydney Brenner to his own explorations at the University of Wisconsin, Anthony Stretton sardonically observed: “And knowing the connectome does not answer the question of how the nematode brain actually works. In many ways, I ‘knew’ more about the nematode brain when I started than I do now.”
George Gilder, Gaming AI (p. 34)
Of course Stretton and other scientists have learned a great deal about C. elegans but, just as the map is not the territory, the research is not the subject. The division between information about a thing and the thing itself is fundamental, so 1) the thing can diverge from the mapped information and 2) more information leads to more questions, not fewer.
And now for the fourth assumption (of the six):
• “The Ergodicity Assumption: In the world, the same inputs always produce the same outputs” (p. 50)
Ergodicity? Gilder points out that the assumptions behind AI systems are like those of the game Go, which AlphaGo conquered:
AI assumes reality is determinist, capturable by big data, binary, ergodic, local, and orderly in time, with the future shaped by the cumulative moves of the past. AI assumes the objects in the universe are accurately rendered in the symbols in the machine. They assume no gap or necessary interpretant between symbols and objects. AI always works on an objective digital clock one step at a time. AI assumes congruence of maps and territories. The AI universe is discrete, digital, and monotonically encodable in the symbol system of the program.
George Gilder, Gaming AI (p. 51)
That’s ergodicity. But the real world, left to itself, is not like that, as anyone who has managed a classroom, a lunch counter, or a henhouse will realize.
We can create a schedule for input and output (our map) that enables us to work with the reality much more effectively. But the reality is in no way obliged to follow our schedule and will tend to diverge from it over time.
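The point can be sketched in code (a toy illustration of ours, not from the book): an ergodic model is a frozen rule that returns the same output for the same input forever, while a world with drifting hidden state quietly stops matching the map.

```python
import random

# The ergodicity assumption: a fixed map from inputs to outputs.
# Same input in, same output out, forever.
def ergodic_model(x):
    return 2 * x  # a rule "learned" from past data and never revisited

# A toy territory whose hidden state drifts over time, so identical
# inputs stop producing identical outputs.
class DriftingWorld:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.drift = 0.0

    def respond(self, x):
        self.drift += self.rng.gauss(0, 0.1)  # reality changes on its own
        return 2 * x + self.drift

world = DriftingWorld()
for step in range(5):
    predicted = ergodic_model(10)   # the map's answer never changes
    actual = world.respond(10)      # the territory's answer wanders
    print(f"step {step}: map says {predicted}, territory says {actual:.2f}")
```

The map starts out accurate, but because the drift accumulates like a random walk, the gap between map and territory tends to widen the longer the schedule runs unattended.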
That’s one reason that AI has had more mixed results in medicine than in Go: Diseases don’t have to follow rules.
We’ll look at the last two assumptions next time out, the Locality Assumption and the Digital Time Assumption.
Note: Gaming AI is free for download here.
Here’s our discussion of the first two assumptions: The brain is not a computer and Big Data is not a Big Answer. These claims are mere tales from the AI apocalypse, as George Gilder tells it in Gaming AI. A neuroscientist and an economics prof show that the claims arise from misunderstandings of how brains and computers work.
You may also enjoy: Can we understand the brain the way we understand New York City? The “connectome” (a complete “wiring diagram” of the brain) is giving neuroscientists pause for thought