Artificial intelligence is an oxymoron. Despite all the incredible things computers can do, they are still not intelligent in any meaningful sense of the word. Decades ago, AI researchers largely abandoned their quest to build computers that mimic our wondrously flexible human intelligence and instead created algorithms that were useful (i.e., profitable). Despite this understandable detour, some AI enthusiasts market their creations as genuinely intelligent. For example, a few months ago, Blaise Aguera y Arcas, the head of Google’s AI group in Seattle, argued that “statistics do amount to understanding.” As evidence, he cited a few exchanges with Google’s LaMDA chatbot. The exchanges were impressively coherent, but LaMDA is still what Gary Marcus and Ernest Davis characterize as “a fluent spouter of bullshit.”
I’ve written before about how computer algorithms are like Nigel Richards, the New Zealander who has won multiple French-language Scrabble tournaments even though he does not understand the words he is spelling. Computers can similarly manipulate words in many useful ways — e.g., spellchecking, searching, alphabetizing — without any understanding of the words they are manipulating. To know what words mean, they would have to understand the world we live in. They don’t. One example is their struggles with the Winograd schema challenge: determining what a pronoun such as “it” refers to in a sentence like “The trophy doesn’t fit in the suitcase because it is too big.” Another example is the inability to answer simple questions like, “Is it safe to walk downstairs backwards if I close my eyes?” A third type of example is the brittleness of language translation programs.
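What makes Winograd schemas so hard for statistical systems is that each comes as a pair of sentences differing by a single word, which flips the pronoun’s referent. A minimal sketch of that structure (the data layout here is my own illustration; the trophy/suitcase sentences are the classic pair from Levesque’s original paper):

```python
# A Winograd schema pair: two sentences differing in one word,
# where that one word flips what the pronoun "it" refers to.
# Resolving the pronoun requires knowing how objects fit inside
# containers, not just word statistics.
schemas = [
    {
        "sentence": "The trophy doesn't fit in the suitcase because it is too big.",
        "pronoun": "it",
        "candidates": ["the trophy", "the suitcase"],
        "answer": "the trophy",
    },
    {
        "sentence": "The trophy doesn't fit in the suitcase because it is too small.",
        "pronoun": "it",
        "candidates": ["the trophy", "the suitcase"],
        "answer": "the suitcase",
    },
]

for s in schemas:
    print(f"{s['sentence']}  ->  '{s['pronoun']}' = {s['answer']}")
```

Because the two sentences are statistically almost identical, a word-frequency approach gives no purchase; only an understanding of trophies and suitcases does.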
The official Winograd Schema Challenge, organized by Hector Levesque and colleagues to test whether AI systems had acquired common sense, was retired in 2016 for the embarrassing reason that even the well-funded, bleeding-edge Google Brain team performed poorly on a test set of a few hundred questions.