David C. Krakauer, Professor of Complex Systems at the Santa Fe Institute, poses a question: “Science today stands at a crossroads: will its progress be driven by human minds or by the machines that we’ve created?”
A schism is emerging in the scientific enterprise. On the one side is the human mind, the source of every story, theory and explanation that our species holds dear. On the other stand the machines, whose algorithms possess astonishing predictive power but whose inner workings remain radically opaque to human observers. As we humans strive to understand the fundamental nature of the world, our machines churn out measurable, practical predictions that seem to extend beyond the limits of thought. While understanding might satisfy our curiosity, with its narratives about cause and effect, prediction satisfies our desires, mapping these mechanisms on to reality. We now face a choice about which kind of knowledge matters more – as well as the question of whether one stands in the way of scientific progress.

David C. Krakauer, “At the limits of thought” at Aeon
Dr. Krakauer’s author bio at Aeon tells readers that he “works on the evolution of intelligence and stupidity on Earth. Whereas the first is admired but rare, the second is feared but common.” He is also the founder of SFI’s InterPlanetary Project:
What terrestrial challenge could provide greater focus than nourishing and uniting the peoples of the Earth to explore the extrasolar planets? And what great solutions might we find by searching beyond the confines of our earthly troubles?
To search through outer space we shall need to rise above our inner spaces, the gravest challenges of our time — from reducing disease and economic inequality, to managing finite resources and surviving war — and to take all necessary steps towards a larger, shared goal: an understanding of life’s place in the universe. Because confronting the challenges of space requires braving and solving the complexities of life.
All that gives some sense of where he is coming from. He goes on to tell us that the gap between our powers and those of machines “threatens to blow the whole scientific project wide open”:
But in an age of ‘big data’, the link between understanding and prediction no longer holds true. Modern science has made startling progress in explaining the low-hanging fruit of atoms, light and forces. We are now trying to come to terms with the more complex world – from cells to tissues, brains to cognitive biases, markets to climates. Novel algorithms allow us to forecast some features of the behaviour of these adaptive systems that learn and evolve, while instruments gather unprecedented amounts of information about them. And while these statistical models and predictions often get things right, it’s nearly impossible for us to reconstruct how they did it. Instrumental intelligence, typically a machine intelligence, is not only resistant but sometimes actively hostile to reason. Studies of genomic data, for example, can capture hundreds of parameters – patient, cell-type, condition, gene, gene location and more – and link the origin of diseases to thousands of potentially important factors. But these ‘high-dimensional’ data-sets and the predictions they provide defy our best ability to interpret them.

David C. Krakauer, “At the limits of thought” at Aeon
The basic problem is that accepting on faith what we can’t ever hope to understand is not a traditional stance of science. Thus it’s a good question whether science could survive such a transition and still be recognizable to scientists. So the real bargain would be: Give up science, driven by human creativity, for greater technological power created by machines.
But does turning things over to incomprehensible algorithms, content with ignorance, really work anyway? Current results from a variety of areas give pause for thought. For example,
- What about a justice system where AI determines the verdict and the sentence? Some claim it would be fairer. But what if no one understands the results? How would we even know it was fairer?
We’re right to be suspicious. Where did the machine get the data? Left to itself, a machine is inherently unthinking, so no machine can be better than its underlying data, which may include implicit biases. One software manager has said, “I would have fired any of my hiring managers who demonstrated characteristic AI traits immediately.”
It comes down to this: People cannot rationally accept decisions as fair when the outcomes seem unfair and harmful and, worse, no one can even explain them. Hardly a recipe for social peace.
- Just when the COVID-19 crisis gave boosters a chance to prove machine learning’s worth in medical research, the big insights haven’t happened. In a recent post, Walter Bradley Center director Robert J. Marks asked, “Is data mining failing its first big test?” Computers scanning thousands of medical papers don’t seem to be providing answers for COVID-19. Dr. Marks emphasizes that machine learning will be very helpful indeed in medicine—when expectations match reality. But it is just a tool, not unknown magic.
- And what about medical treatment? IBM Watson, an ace at games, has just not lived up to its billing in medicine. While it could easily plow through vast masses of literature, it couldn’t identify which information in the tsunami of letters and numbers actually mattered. That’s a much more complex task than looking up information faster than humans do.
Incidentally, the recent claim that AI can diagnose medical images better than doctors has come under fire. The British Medical Journal identified a serious problem with the studies: only two relied on randomized clinical trials while 81 depended on non-randomized ones. In randomized trials, researchers do not choose which cases are assigned to which group. That reduces their ability to manipulate the outcome, consciously or otherwise. Maybe AI will prove more reliable than doctors, but we had better be sure before we come to depend on it.
- Remember when autonomous, self-driving cars were just around the corner? While machine learning took them rapidly from the lab to the roads, the underlying technology “remains brittle” and practical uses may be a decade away. And when deaths occur, the National Transportation Safety Board will not likely be satisfied to learn that the technology is incomprehensible to its developers. Quite the opposite, in the wake of a recent fatality, it is seeking uniform standards for the previously voluntary information provided by carmakers. It may turn out that artificial judgment has inherent limitations, except under closely supervised conditions such as roads adapted for self-driving cars.
In short, there is no overwhelming case for accepting algorithms over reason in many areas of life. Why does Krakauer think science is different? He ends his essay with a passage from the Argentine writer Jorge Luis Borges (1899–1986), “History of the Echoes of a Name” (1955): “Isolated in time and space, a god, a dream, and a man who is insane and aware of the fact repeat an obscure statement. Those words, and their two echoes, are the subject of these pages.”
And Krakauer responds,
Let’s say that the god is the Universe, the dream our desire to understand, and the machines are the insane man, all repeating their obscure statements. Taken together, their words and echoes are the system of our scientific enquiry. It is the challenge of the 21st century to integrate the sciences of complexity with machine learning and artificial intelligence. The most successful forms of future knowledge will be those that harmonise the human dream of understanding with the increasingly obscure echoes of the machines.

David C. Krakauer, “At the limits of thought” at Aeon
We would not know if we had really learned anything from the insane man. So no one really understands. Some have prophesied better futures for science than this.