Paul Allen, a co-founder of Microsoft, was concerned that AI had no common sense. In early 2018, Allen said, “AI still lacks what most 10-year-olds possess: ordinary common sense.” He continued, “If we want AI to approach human abilities and have the broadest possible impact in research, medicine and business, we need to fundamentally advance AI’s common sense abilities.” Billionaire Allen coughed up $125 million to back the Allen Institute for Artificial Intelligence in Seattle.
I doubted that AI would ever simulate common sense, but I always left the door open. Unlike understanding, creativity and sentience, common sense might be computable. There was no indication that common sense was non-algorithmic. And now AI has simulated common sense.
The classic test for AI common sense is the resolution of Winograd schemas. A Winograd schema contains a vague, ambiguous pronoun; common sense resolves the ambiguity. An example is:
“John is afraid to get in a fight with Bob because he is so tall, muscular and ill-tempered.”
Does the vague pronoun “he” refer to John or Bob? Common sense says Bob is the tough guy and John is the scared dude. Another Winograd schema example is:
“John did not ask Bob to join him in prayer because he was an atheist.”
Common sense says that Bob was the atheist. Solving Winograd schemas requires common sense.
Can AI parse these and other Winograd schemas to identify the person behind the vague pronoun? Until recently, Winograd schema AI contests produced accuracies not much better than a coin flip. But programs from AI innovators, led by OpenAI’s amazing GPT-3, are now scoring upwards of 90% accuracy. In testing, care was taken to avoid Winograd schemas whose resolution could be googled. These results are remarkable.
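The evaluation protocol behind those accuracy numbers can be sketched in a few lines: substitute each candidate referent for the pronoun, ask a language model which completed sentence is more plausible, and count how often the model's pick matches the intended answer. The sketch below is a toy illustration, not any contest's actual harness; the `score` function is a hard-coded stub standing in for a real language-model scorer such as GPT-3.

```python
# Toy sketch of a Winograd schema evaluation loop.
# `score` is a hypothetical stand-in for a language model's plausibility
# judgment; a real evaluation would query a model such as GPT-3.

SCHEMAS = [
    # (template with {} where the pronoun was, candidate referents, answer)
    ("John is afraid to get in a fight with Bob because {} is so tall.",
     ("John", "Bob"), "Bob"),
    ("John did not ask Bob to join him in prayer because {} was an atheist.",
     ("John", "Bob"), "Bob"),
]

def score(sentence: str) -> float:
    """Stub plausibility scorer; hard-coded preferences replace a real LM."""
    preferred = ["Bob is so tall", "Bob was an atheist"]
    return 1.0 if any(p in sentence for p in preferred) else 0.0

def resolve(template: str, candidates) -> str:
    """Pick the candidate whose substitution the scorer finds most plausible."""
    return max(candidates, key=lambda c: score(template.format(c)))

def accuracy(schemas) -> float:
    """Fraction of schemas where the resolved referent matches the answer."""
    correct = sum(resolve(t, cands) == gold for t, cands, gold in schemas)
    return correct / len(schemas)

print(accuracy(SCHEMAS))  # prints 1.0 with this stub scorer
```

With the stub scorer the loop trivially gets both examples right; the whole difficulty in practice lies in the scorer, which is exactly the part a language model supplies.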
So AI can now simulate common sense. Does this mean that AI has common sense? No. Common sense requires understanding. Understanding is not computable.
Computers can add the numbers 43 and 13 but do not understand what these numbers mean. Likewise, GPT-3 can resolve Winograd schema ambiguities but understands neither the underlying ambiguity nor its resolution. AI can simulate common sense, but it will never have common sense.