Machines Can’t Teach Us How To Learn
A recent study used computer simulations to test the “small mistakes” rule in human learning
I am a perpetual student. Often, I am reading multiple books on a range of topics. My podcast queue is always full. And I am trying (sigh) to pick up a new language.
So a recent headline intrigued me: “This may be the fastest way to learn something new, according to science.”
The piece started out strong, reviewing a recent open-access study in Nature Communications by Robert Wilson, a researcher at The University of Arizona. His results seemed to align with the folk wisdom I have heard: You learn most effectively when you are making a small number of mistakes. Dr. Wilson tightened that “small” to a precise 15.87%.
So far, so good. Or, so I thought. Then I read this:
To determine this learning sweet-spot, the researchers tested how the difficulty of training impacts the rate of learning across a range of learning algorithms, artificial neural networks, and computer models thought to simulate learning in humans and animals.
– Alexandra Pattillo, “This may be the fastest way to learn something new, according to science,” at Inverse
So rather than test real learning with real people, Dr. Wilson and his associates tested computer algorithms thought to simulate human learning. He writes in the study’s abstract:
Here, we examine the role of a single variable, the difficulty of training, on the rate of learning. In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly. We derive conditions for this sweet spot for a broad class of learning algorithms in the context of binary classification tasks. For all of these stochastic gradient-descent based learning algorithms, we find that the optimal error rate for training is around 15.87% or, conversely, that the optimal training accuracy is about 85%. We demonstrate the efficacy of this ‘Eighty Five Percent Rule’ for artificial neural networks used in AI and biologically plausible neural networks thought to describe animal learning.
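The 15.87% in the abstract is not an arbitrary number: it matches the standard normal tail probability at one standard deviation, Φ(-1) ≈ 0.1587, which is where the paper’s Gaussian noise model places the sweet spot. A quick sanity check of that arithmetic (this reproduces only the number, not the paper’s derivation):

```python
import math

def normal_cdf(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

optimal_error = normal_cdf(-1.0)   # tail probability at one sigma
print(f"{optimal_error:.4f}")       # prints 0.1587, i.e. ~85% accuracy
```

In other words, the “Eighty Five Percent Rule” is 100% minus this one-sigma tail.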
What?? We’re looking at machines mimicking humans to learn how humans learn? Testing with real humans is hard, expensive, and subjectively squishy. But that’s no excuse to extrapolate from how a machine learns to how a human learns.
Machine Learning is poorly named. A machine’s “learning” is nothing like how humans learn. Most machine-learning algorithms work by tweaking numeric variables inside the program. Deep Learning, for example, adjusts the “weights” that govern how the nodes in the network interact.
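To make that concrete, here is a minimal sketch of what weight-tweaking “learning” amounts to: a single linear neuron trained by stochastic gradient descent on a toy task (learn y = 2x). Everything here is illustrative, not taken from the study:

```python
def sgd_step(w, x, y, lr=0.05):
    """One stochastic-gradient-descent update of a single weight."""
    pred = w * x              # the machine's current guess
    error = pred - y          # how wrong the guess is
    grad = error * x          # gradient of the squared error w.r.t. w
    return w - lr * grad      # nudge the weight downhill

w = 0.0                       # start knowing nothing
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
for _ in range(50):           # repeat the same few examples many times
    for x, y in data:
        w = sgd_step(w, x, y)

print(round(w, 3))            # prints 2.0 — the "learned" weight
```

That is the whole mechanism: no understanding, no extension to new topics, just numbers sliding toward lower error.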
Machine learning frequently requires millions of examples. Humans learn from a few examples.
Machines fail quickly outside of the trained domain. Humans, once they’ve learned something, can apply that knowledge, by extension, to new topics.
How a human learns is, frankly, a mystery. We can observe the process from the outside, but we cannot explain what actually occurs. It is irresponsible to infer that limits or methods effective for training machine algorithms apply directly to humans, because we have no idea of the correlation, if any, between machine and human learning.
Unfortunately, studies such as Dr. Wilson’s result from failing to see humans for all that we are, believing more in the toy than the toymaker.
One thing I’ve learned: Humans are much more fascinating than our machines.
If you enjoyed this piece, you may also enjoy these by Brendan Dixon:
Just a light frost? Or AI Winter? It’s nice to be right once in a while—check out the evidence for yourself
I am giving up cycling It’s just not worth it if a machine can beat me 😉