Artificial intelligence systems perform well when problems are sharply defined in advance. That’s why they can win at chess and Go. However, most problems an animal faces are not sharply defined. A wolf pack must feed itself in a vast wilderness; there is a great deal of information but most of it is imprecise. There are immutable laws of nature that the wolves obey without understanding. But no rules of play govern their quest for survival. Thus, intelligence means something different to a wolf than to a programmer.
Researchers from the Leverhulme Centre for the Future of Intelligence in Cambridge and GoodAI, based in Prague, want to explore those differences by sponsoring the Animal–AI Olympics this summer. They hope to train AI to use common-sense reasoning in virtual mazes like those used for animals, ultimately up to the level of intelligence shown by many animals:
AI has made significant progress in recent years, reaching superhuman performance on a wide range of tasks. Humans are no longer the best Go players, quiz-show contestants, or even, in some respects, the best doctors. Yet state-of-the-art AI cannot compete with simple animals at adapting to unexpected changes in the environment. This competition pits our best AI approaches against the animal kingdom to determine if the great successes of AI are now ready to compete with the great successes of evolution at their own game. (from their website)
It’s all part of the interdisciplinary Kinds of Intelligence project, which looks at the similarities and differences between the ways humans, animals, and machines think. The goal is to develop artificial general intelligence:
Usually, AI benchmarks involve mastering a single task, like beating a grandmaster in Go or figuring out how to learn a video game from scratch. AI has been extraordinarily successful in such realms. But when you apply the same AI systems to a totally different task, they are generally hopeless. That is why, in the Animal–AI Olympics, the same agent will be subjected to 100 previously unseen tasks. What is being tested is not a particular type of intelligence but the ability of a single agent to adapt to diverse environments. This would demonstrate a limited form of generalized intelligence — a type of common sense that AI will need if it is ever to succeed in our homes or in our daily lives. The competition organizers accept that none of the AI systems will be able to adapt perfectly to every circumstance or post a perfect score. But they hope that the best systems will be able to adapt to tackle the different problems they face.

Oscar Schwartz, “Is AI as Smart as a Chimp or a Lab Rat? The Animal-AI Olympics Is Going to Find Out.” at MIT Technology Review
As the Prague team explains at its website, “Our mission is to develop general artificial intelligence – as fast as possible – to help humanity and understand the universe.” The $10,000+ prize seems modest, but a successful team would doubtless reap other rewards as well.
The project must tackle the thorny problem of how we understand animal intelligence. Measurements are contested. For example, human intelligence may be an irrelevant benchmark for some species. As J. Scott Turner, known for research on termite mounds, has pointed out, a mound’s inhabitants can be seen as a “giant crawling brain,” a type of intelligence difficult to compare with human intelligence, which is intrinsically individual.
A recent study of cats’ ability to recognize their names seems to confound the ability to recognize a signal (which they can) with the ability to recognize an abstraction (which they can’t). Leaving abstractions aside, there is also no “tree of intelligence”: On performance tests, crows can be as smart as apes and dogs are not as intelligent as seals, according to recent research. And even lizards can be smart. Octopuses are unusually smart and they are not even vertebrates. It’s not clear what drives intelligence in each case. Also, how do we define intelligence? Seeking to thrive and grow, plants communicate extensively, without a mind or a brain. Does that count?
Also, the further an animal is from its natural environment, the less meaningful the test results may be. What humans can coax from captive animals via sophisticated techniques may not be a useful guide to their normal behavior. And even then, sometimes we aren’t measuring what we think we are. When fish, not otherwise noted for thinking skills, recently passed the mirror self-recognition test, developed for apes, researchers began to suspect that the test does not really measure self-awareness.
There may also be factors in the intelligence of life forms that we have not captured in AI. For example, even bacteria face choices and make decisions. Even an amoeba or a fruit fly is smarter, about some things, than your computer. These abilities in life forms that lack a brain may involve factors we do not yet fully understand, if only because AI is modeled in part on human thought processes.
No doubt, the Animal–AI Olympics will be most informative for AI. But relating its results to present-day animal intelligence studies may prove a challenge.
Note: The business about AI being better than “in some respects, the best doctors,” as quoted above, should be treated with caution. See Why AI won’t replace your doctor.
See also: Even Bacteria Are Purpose-Driven The recent finding that bacteria can make individual decisions may help design better antibiotics