Has Aristo broken bounds for thinking computers? The Grade 8 graduate improves on Watson, but we must still think for ourselves at school. Here’s why.
Researchers at the Allen Institute for Artificial Intelligence (AI2) in Seattle have created a computer, Aristo, that can do what many of us cannot: pass an 8th-grade science exam.
The New York Times, as might be expected, gushes over Aristo’s success:
A science test isn’t something that can be mastered just by learning rules. It requires making connections using logic.

Cade Metz, “A Breakthrough for A.I. Technology: Passing an 8th-Grade Science Test” at New York Times
The claim is qualified somewhat when we hear that “enthusiasm… is still tempered” among some scientists, such as Microsoft researcher Jingjing Liu, who told him, “We can’t compare this technology to real human students and their ability to reason.”
Liu is correct. Aristo works in a manner similar to, but more advanced than, IBM’s original Watson. Dr. Peter Clark, the lead researcher on the project, describes how Aristo functions in a question-and-answer format:
Aristo contains several different modules, that we call “solvers,” that try to answer science questions in different ways. For example, one solver looks to see if an answer is written down somewhere in a large amount of text. Another tries to answer questions that require reasoning, by combining two pieces of information together. For example it can realize that “an iron nail conducts electricity” because it knows that “iron is a metal” and “metals conduct electricity.” Another is a specialist solver that answers questions about comparisons; for example “would a rougher surface have more or less friction than a smooth surface?” And so on. Finally, a special module combines all the different answers together to decide on the overall best answer.

“How to tutor AI from an ‘F’ to an ‘A’” at Paul Allen
Watson—which won at Jeopardy—also deciphered questions that it passed along to solvers (of sorts) and selected the final answer based on the probability of a solver being correct. Watson had an easier task than Aristo because researchers typed in the questions and the Jeopardy format simplified inferring answers.
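The multi-solver design Dr. Clark describes can be sketched in miniature. The code below is an illustrative toy, not Aristo’s actual implementation: the solver names, the tiny two-fact knowledge base, and the averaging combiner are all assumptions made for the sake of the example. Each “solver” scores the multiple-choice options, and a final module combines the scores to pick the overall best answer, including the fact-chaining step from the quote (“iron is a metal” plus “metals conduct electricity”).

```python
# Toy multi-solver architecture, loosely modeled on Clark's description.
# Everything here (names, facts, scores) is illustrative, not Aristo's code.

KNOWLEDGE = {
    ("iron", "is_a"): "metal",
    ("metal", "conducts"): "electricity",
}

def lookup_solver(question, options):
    """Score each option by whether it appears in any stored fact."""
    known_terms = set(KNOWLEDGE.values()) | {subj for (subj, _rel) in KNOWLEDGE}
    return {opt: (0.9 if opt in known_terms else 0.1) for opt in options}

def chaining_solver(question, options):
    """Score each option by chaining two facts together:
    subject --is_a--> category --conducts--> option."""
    scores = {}
    for opt in options:
        chained = any(
            KNOWLEDGE.get((KNOWLEDGE.get((subj, "is_a")), "conducts")) == opt
            for (subj, _rel) in KNOWLEDGE
        )
        scores[opt] = 0.95 if chained else 0.05
    return scores

def combine(question, options, solvers):
    """Combiner module: sum every solver's scores, return the best option."""
    totals = {opt: 0.0 for opt in options}
    for solver in solvers:
        for opt, score in solver(question, options).items():
            totals[opt] += score
    return max(totals, key=totals.get)

answer = combine(
    "What does an iron nail conduct?",
    ["electricity", "water"],
    [lookup_solver, chaining_solver],
)
print(answer)  # "electricity": the chaining solver links iron -> metal -> electricity
```

The point of the sketch is the division of labor: no single solver has to be right about everything, and the combiner, like Watson’s probabilistic selection, only has to decide which solver to trust on a given question.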
But, like Watson, Aristo has limits: It passed only the multiple-choice portion of the test and, even then, only those questions that did not require understanding a diagram.
A close look at biology, setting aside our initial assumptions, compels an appreciation for the complexity of life. Life is so much more than mere blobs of protoplasm.
The same is—or should be—true with advances in AI. If we step back from the science fiction fantasy promises, we will appreciate the human mind all the more. As Dr. Peter Clark says:
And although the AI field is moving forward quickly, Aristo has given me a new appreciation of just how sophisticated human reasoning is and how far away computers are from matching the full range of skills that a person has.

“How to tutor AI from an ‘F’ to an ‘A’” at Paul Allen
Further reading on the many entertaining Jeopardies and other adventures of Watson:
Why was IBM Watson a flop in medicine? Robert J. Marks and Gary S. Smith discuss how the AI couldn’t identify which pieces of information in the tsunami of data actually mattered. Last year, the IBM Health Initiative laid off a number of people, seemingly due to market disillusionment with the product.
Why did Watson think Toronto was in the U.S.A.? How that happened tells us a lot about what AI can and can’t do, to this day. Strictly speaking, the answer Watson spit out was “What is Toronto?????”, which does sound distinctly less than certain. But the programmers had chosen not to program in the option of saying, “I don’t know.”
Why an AI pioneer thinks Watson is a “fraud”. The famous Jeopardy contest in 2011 worked around the fact that Watson could not grasp the meaning of anything. Gary N. Smith explains that a computer’s inability to understand what “it” means in a sentence is because it doesn’t understand what any of the words in the sentence mean.