Computers May Know “How” but They Still Don’t Know “Why”

Computers will not equal, let alone surpass, human intelligence.

Several years ago, a visiting professor taught an introductory statistics class at Pomona College, where I have taught for more than 40 years. When I asked how the class was going, he said, “At my university, students ask how; here they ask why.” Knowing how to do a statistical test is important, but it is more important to know why a specific test might be appropriate.

During a recent Saturday morning bike ride, I got to thinking about the way this distinction relates to the limitations of current AI systems. Take something as simple as pedaling a bicycle. A robotic bike rider equipped with a powerful AI system might be programmed to pedal, or it might be trained to pedal by randomly fiddling with different parts of the bike, but it would have no idea why pedaling moves the bike forward.

Similarly, I don’t wear bike clips but I know why they can be helpful. A robotic bike rider might be given clips or learn through training that clips are useful but it would have no understanding of why they are helpful — or why they can be dangerous.

The more I thought about it, the more examples I thought of. For example, the route I took has long flat stretches interrupted by several steep hills, so I shifted gears frequently, and I understood why different gears are useful, as did John Kemp Starley, the man who invented bike gears. A robotic bike rider might know how to shift gears but not why — and would not invent bike gears.

Robots and Cross-traffic

At one point in my ride, I had to cross the traffic exit from a strip mall and I saw a car waiting to make a right turn in front of me. I saw the driver checking traffic to the left and I stopped my bike because I feared that the driver would pull out without checking to the right, where I was. Sure enough, when there was no traffic to the left, the driver stepped on the gas and sped onto the city street, oblivious to my presence. Knowing that this sometimes happens, I always wait to see the driver’s eyes before riding in front of a car.

What would a robotic bike rider do? With enough training (and collisions with cars) it might learn to be cautious around cars in this situation. However, it would not know why it should be cautious or why it is safer to check the driver’s eyes.

The question of why is closely related to the notion of causality, which allows humans to understand why things happen, to anticipate things that will happen, and to make plans to cause things to happen. As Judea Pearl and Dana Mackenzie have written,

Some tens of thousands of years ago, humans began to realize that certain things cause other things and that tinkering with the former can change the latter…. From this discovery came organized societies, then towns and cities, and eventually the science- and technology-based civilization we enjoy today.

A wonderful example of the distinction between how and why happened in the 1600s when Sweden’s King Gustav II placed orders for four military ships from the Hybertsson shipyards in Stockholm. At the time there was little understanding of why certain ship designs were more stable than others. There was only how: shipbuilders learned what worked and what didn’t work by seeing which ships were seaworthy and which sank.

Henrik Hybertsson, the master shipwright at the Hybertsson shipyards, began construction of the Vasa, a traditional 108-foot ship with one gun deck armed with small cannons, without detailed plans or even a rough sketch; he and the builders were very familiar with how such ships were built.

When Gustav learned that Denmark was building a ship with two gun decks, he ordered that a second gun deck be added to the Vasa and that the length of the ship’s keel be extended to 135 feet. Hybertsson had never built such a ship, but he assumed it would be a simple extrapolation of a 108-foot ship with one gun deck.

Instead of thirty-two 24-pound guns on one deck, the Vasa carried forty-eight 24-pound guns, twenty-four on each deck, which raised the ship’s center of gravity. The King also ordered the addition of hundreds of gilded and painted oak carvings high on the ship, where enemy soldiers could see them; these, too, raised the center of gravity. You know where this story (and this ship) is heading.

When the Vasa was launched in Stockholm harbor in 1628, the wind was so light that the crew had to pull the sails out by hand, hoping to catch enough of a breeze to propel the boat forward. Twenty minutes later, 1,300 feet from shore, a sudden 8-knot gust of wind toppled the boat, and it sank.

The wood survived underwater because of the icy, oxygen-poor water of the Baltic Sea, and 333 years later, in 1961, the Vasa was raised from the harbor floor. After being treated with a preservative for 17 years, it is now displayed in its own museum in Stockholm.

We now know why some boats are stable and others are not; specifically, we know that taking a boat design that works at one size and scaling it up, say by increasing all the dimensions by 25 percent, can end disastrously.
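Modern naval architecture captures the why in a formula. The sketch below uses a textbook box-hull idealization (my simplification, not Vasa’s actual hull lines): KB is the height of the center of buoyancy, KG the height of the center of gravity, and BM depends on the waterplane’s second moment of area I and the displaced volume ∇.

```latex
% Small-angle (initial) transverse stability of a ship:
% the ship is stable only while the metacentric height GM > 0.
\[
  \mathrm{GM} = \mathrm{KB} + \mathrm{BM} - \mathrm{KG},
  \qquad
  \mathrm{BM} = \frac{I_{\text{waterplane}}}{\nabla}
\]
% For an idealized box-shaped hull of length L, beam B, and draft T:
\[
  I_{\text{waterplane}} = \frac{L B^{3}}{12},
  \qquad
  \nabla = L B T
  \quad\Longrightarrow\quad
  \mathrm{BM} = \frac{B^{2}}{12\,T}.
\]
```

Notice that the length L cancels out of BM: under this idealization, stretching the keel from 108 to 135 feet buys no additional righting ability, while a second gun deck and heavy carvings raise KG directly, shrinking GM toward the point where even a light gust can capsize the ship. The shipbuilders of 1628 knew how; this why came centuries later.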

Why the “Why” Matters

In the same way, as I have written many times, the Achilles heel of current AI systems is that they do not understand the meaning of the data they input and output and therefore have no way of assessing whether the statistical patterns they discover are causal or coincidental. They do not understand why and are consequently prone to make decisions with real consequences based on fleeting happenstance.
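A small simulation (my own illustrative sketch in Python, not a description of any particular AI system) shows how easily impressive but meaningless patterns turn up when an algorithm ransacks enough unrelated data:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 independent random walks of 100 steps each -- no causal
# relationships exist anywhere in these data.
walks = rng.normal(size=(1000, 100)).cumsum(axis=1)

# Correlate every series with every other series (rows are variables).
corr = np.corrcoef(walks)
np.fill_diagonal(corr, 0.0)  # ignore each series' perfect self-correlation

# A pattern-miner would report the strongest relationship it finds.
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"strongest 'pattern': r = {corr[i, j]:.3f} between series {i} and {j}")
```

With roughly 500,000 pairs to search, correlations above 0.9 in magnitude appear routinely even though every series is pure noise. A human who asked why two series should be related would dismiss the finding; a pattern-matcher has no grounds to.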

A few weeks ago, Blaise Agüera y Arcas and Peter Norvig, two prominent AI enthusiasts, published a paper with the audacious title, “Artificial General Intelligence Is Already Here.” My retort continues to be that the real danger today is not that computers are smarter than us but that we think computers are smarter than us and consequently trust them to do things they should not be trusted to do.

Computers will not equal, let alone surpass, human intelligence until they move beyond how to why by appraising causality.


Gary N. Smith

Senior Fellow, Walter Bradley Center for Natural and Artificial Intelligence
Gary N. Smith is the Fletcher Jones Professor of Economics at Pomona College. His widely cited research on financial markets, statistical reasoning, and artificial intelligence often involves stock market anomalies, statistical fallacies, and the misuse of data. He is the author of dozens of research articles and 16 books, most recently The Power of Modern Value Investing: Beyond Indexing, Algos, and Alpha, co-authored with Margaret Smith (Palgrave Macmillan, 2023).