Does artificial intelligence “share our natural ability to make numeric snap judgments”?
Researchers observed this knack for numbers in a computer model composed of virtual brain cells, or neurons, called an artificial neural network. After being trained merely to identify objects in images — a common task for AI — the network developed virtual neurons that respond to specific quantities. These artificial neurons are reminiscent of the “number neurons” thought to give humans, birds, bees and other creatures the innate ability to estimate the number of items in a set (SN: 7/7/18, p. 7). This intuition is known as number sense.

– Maria Temming, “A new AI acquired humanlike ‘number sense’ on its own” at Science News
A team led by neurobiologist Andreas Nieder of the University of Tübingen in Germany taught an artificial neural network to identify animals and vehicles using 1.2 million labeled images. The team then used patterns of one to 30 dots to record how the virtual neurons responded.
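The probing procedure can be illustrated with a minimal sketch: render dot-pattern images, pass them through a network's hidden layer, and record each unit's average response at each numerosity. The sketch below uses a tiny random-weight stand-in rather than the paper's trained classifier, and all names are illustrative; note too that the actual study controlled low-level confounds such as dot size and total area, which this toy version does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def dot_image(n_dots, size=32):
    """Render n_dots single-pixel 'dots' at random positions on a blank image."""
    img = np.zeros((size, size))
    idx = rng.choice(size * size, n_dots, replace=False)
    img.flat[idx] = 1.0
    return img

# Stand-in for a trained network's hidden layer: one random projection + ReLU.
W = rng.normal(size=(64, 32 * 32))
def hidden_activations(img):
    return np.maximum(W @ img.ravel(), 0.0)

# Probe: average each unit's response over many images of each numerosity.
numerosities = range(1, 31)
tuning = np.array([
    np.mean([hidden_activations(dot_image(n)) for _ in range(20)], axis=0)
    for n in numerosities
])  # shape: (30 numerosities, 64 units)

# Each unit's "preferred" numerosity is the one that drives it hardest.
preferred = tuning.argmax(axis=0) + 1
print(preferred[:10])
```

In this untrained stand-in most units will simply respond more to more dots; the paper's claim is that after object-recognition training, some units instead show peaked tuning to particular numerosities, which is what the recorded tuning curves were compared against monkey data to assess.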
The neurons varied in their activity: “For instance, some neurons activated strongly when shown two dots but not 20, and vice versa.” (Maria Temming, Science News) The distribution of these numerical preferences was “nearly identical” to patterns observed in monkey neurons, with more neurons tuned to smaller numbers than to larger ones. The researchers hope that their artificial network will help explain how number sense may be “wired in” to young humans and animals before they are ever taught to count. Ivilin Stoianov, a computational neuroscientist at the Italian National Research Council in Padova, remains unconvinced:
This AI learned to “see” by studying many labeled pictures, which is not how babies and wild animals learn to make sense of the world. Future experiments could explore whether similar number neurons emerge in AI systems that more closely mimic how biological brains learn, like those that use reinforcement learning, Stoianov says.

– Maria Temming, “A new AI acquired humanlike ‘number sense’ on its own” at Science News
Asked to comment, AI analyst Jonathan Bartlett told Mind Matters News,
They seem to be reaching really hard. It seems like it was the researchers who imposed “number sense” on the unit. What the researchers did was merely compare the firing of neurons to what number it corresponds to. The visual system did not do that. In fact, their test didn’t even look at other types of numbered items (number of cards, number of coins, number of people, *only* number of dots). So, to say that it had a “number sense” may just mean that the researchers found a way to game their own neural system to find objects for which it would fire up in a similar way as monkeys (also, the similarity is not that similar – number of neurons firing is hardly a worthwhile test of what is going on in the brain).
So, the short form:

1) They trained a neural network on visual object display.
2) The researchers found a method of representing numbers so that the *number* of nodes on the NN matched the number of neurons in monkey brains (actually, it wasn’t even the same number, just the same relative number compared to others).
3) It wasn’t even exact.
4) They published a paper, calling this “number sense”, because what the world needs now is overhyped claims about AI.
It is “number sense” not in the sense of the computer having a feel for seeing numbers, but in the humans having a feel for when the computer is seeing numbers.
Also interestingly, the study they linked to about human and animal performance was about human performance on a task where the humans weren’t told what the task was.
Software engineer Brendan Dixon added,
Sigh. We already know from previous work that trained networks develop hotspots; it’s those hotspots that, say, adversarial images leverage to trick the system. Who knows how those hotspots relate to the numeric patterns shown? If you construct an image of just the hotspots, the trained network may recognize it.
That these two systems, a monkey’s brain and a computer “neural” network, which are so wildly different as to be laughable, behaved the “same,” appears to me a fluke. This feels like someone painting a target around where the arrow landed.
On AI’s “snap judgments,” neurosurgeon Michael Egnor told MMN, “AI makes no ‘judgments,’ ‘snap’ or otherwise. We are the intelligence in ‘AI,’ and we make judgments. Sometimes we use computers to represent our own thoughts, and sometimes we don’t understand that that is what we are doing.”
He added, “Computation is not thought. It can be used to represent our thought.”
The story may remind readers of the recent claim that wasps can reason. The researchers denied the claim, but popular science media insisted on it anyway.
From the popular science media reporting on this story on AI number sense:

“Number sense” arises from the recognition of visible objects (May 9, 2019)
AI develops human-like number sense – taking us a step closer to building machines with general intelligence (The Conversation, May 10, 2019)
An AI for Image Recognition Spontaneously Gained a ‘Number Sense’ (SingularityHub, May 20, 2019)

And counting.
Note: The paper is open access: K. Nasr, P. Viswanathan and A. Nieder. Number detectors spontaneously emerge in a deep neural network designed for visual object recognition. Science Advances. Published online May 8, 2019. doi: 10.1126/sciadv.aav7903.
See also: Wasps can reason? Science media say yes, researchers no. Media stories explicitly claim that wasps use logical reasoning, which researchers disavow
Did a fish just show self-awareness? What if the whole question is founded on a mistake about the nature of the mirror test?