The September 2023 cover of The Economist features a robot sitting under an apple tree, raising a finger in a Eureka! moment after an apple falls from the tree and hits it on the head. Anyone even remotely familiar with the history of science knows the image belongs to Isaac Newton, who gave an account of watching an apple fall to the ground while sitting in his garden at Woolsthorpe Manor in 1666. As he later recounted, he asked himself why the apple should fall perpendicularly to the ground, which gave rise to the idea that the very same force pulling the apple to earth also kept the moon falling toward the earth, and the earth toward the sun. The apple, in other words, gave rise to the theory of gravity. (The apple didn't actually hit him on the head; Newton relayed the account of the falling apple to his first biographer, William Stukeley.)
In other words, AI is the new Isaac Newton.
The Economist cover reads "How AI Can Revolutionise Science," and the accompanying coverage is chock-full of encomiums to the democratization of knowledge, confident assertions that AI is speeding up the pace of scientific discovery, and tantalizing suggestions that AI might radically boost the productivity of research around the globe. Yet for all this proclamation and praise, the examples proffered as proof are not exactly groundbreaking. Compared to the discovery of gravity, they're downright boring. If AI is such a boon to scientific discovery — and let's admit it, deep neural networks are at least a decade old — why are the huge, important theories all in our rearview mirror? The answer is: it's not.
This is one of those points that irks me to no end, because it's an example of replacing fundamental thinking with downstream engineering processes that already presume the scientific knowledge and then go looking for variations and new configurations of it. The point is irksome — and dangerous — because it substitutes data engineering for scientific thinking, and remains stubbornly unaware of this fundamental error. To wit, The Economist cover story leads with the standard exemplar of AI-powered super science: the discovery by MIT researchers and their collaborators of two antibiotics, halicin (announced in 2020) and abaucin (in 2023), using an AI model to screen candidate compounds. True, the newly discovered antibiotics are useful because they work against antibiotic-resistant bacteria, so their discovery is not without value. But they hardly prop up the claims made by AI enthusiasts like Demis Hassabis, co-founder of DeepMind, that "AI could usher in a new renaissance of discovery." The scientific knowledge about antibiotics — and, for that matter, cell biology — was already in place. So what, exactly, was "discovered"?
There's a simple explanation here, a simple way to respond to the Hassabises of the world (and they are legion). In data science — the substrate of modern AI — we identify "features" in some domain of knowledge, a process called feature engineering. The domain of knowledge is fixed. Feature selection and engineering are therefore downstream of the discoveries in that particular domain. This fact is so obvious that it's positively befuddling why nary an AI soul understands it, or admits it. But it's true. And it implies that AI will continue to play a role subordinate to fundamental thinking, which occurs upstream of data science and supplies the domain with its facts and theories in the first place.
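To make the downstream point concrete, here is a minimal Python sketch. The property names and values are hypothetical, not drawn from the halicin work; the point is structural: every feature the model sees is a property that chemists already named, so the model can only recombine existing knowledge, never extend the vocabulary itself.

```python
# A minimal sketch of feature engineering (hypothetical property names).
# The feature set is fixed *upstream* by existing chemistry: each entry is
# a property scientists already discovered matters for drug behavior.
KNOWN_PROPERTIES = ["molecular_weight", "logP", "ring_count"]

def featurize(compound: dict) -> list:
    """Map a compound onto the fixed, domain-given feature set.

    Any property science hasn't yet named is simply not representable
    here -- the model downstream cannot "discover" it.
    """
    return [float(compound.get(prop, 0.0)) for prop in KNOWN_PROPERTIES]

# A made-up candidate compound, described only via known properties.
candidate = {"molecular_weight": 304.4, "logP": 1.2, "ring_count": 2}
print(featurize(candidate))  # → [304.4, 1.2, 2.0]
```

Whatever model is trained on these vectors, its search space is bounded by `KNOWN_PROPERTIES` — which is exactly the sense in which the "discovery" presupposes the science rather than producing it.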
There's another problem. Because neural network-based models are black boxes — something goes in, something comes out — any "discoveries" made using existing scientific knowledge are unlikely to flow back upstream. The discovery of halicin is unlikely to lead to further, more fundamental discoveries in cell biology and cognate fields. Basically, we have two new antibiotics and a science in stasis. Everyone move on. As the science writer John Horgan has put it, this is replacing scientists with technicians — or, in our case, data scientists. Far from a renaissance of discovery, it reduces science to engineering, an unlikely gambit given the role science plays in engineering — as the very set of facts and theories from which engineers pick in the first place.
We shouldn't pick on The Economist; the conceit is a popular one. But we do need to start asking tough questions about what happened to the role of human thinking and insight in discovering how the world, and we ourselves, work. The story of Newton's apple is memorable and resonant precisely because it shows the almost incomprehensible potential of the human mind. Getting back to that insight is the real renaissance. Let's hope it happens before it's too late.
Originally posted at Erik’s Substack, Colligo.