
Tag: Ernest Davis


If AIs Don’t Know What They’re Doing, Can We Hope to Explain It?
With AI, we have a world of powerful, useful, but entirely opaque systems. We don’t know why they make decisions, and neither do they.

In yesterday’s post, I talked about the fact that AIs don’t understand the work they’re doing. That makes the goal of getting them to think like people elusive. It also brings us to a second problem, one that has spawned an entire field known as “Explainable AI.” Neural networks not only don’t know what they’re doing when they do it; they can’t, in general, explain to their designers or users why they made a particular decision. They are a black box: obstinately opaque to any attempt at a conceptual understanding of their decisions or inferences. How does that play out? It means, for example, that with image recognition tasks like facial recognition, the network can’t explain… Read More ›

The AI Illusion – State-of-the-Art Chatbots Aren’t What They Seem
GPT-3 is very much like a performance by a good magician.

Artificial intelligence is an oxymoron. Despite all the incredible things computers can do, they are still not intelligent in any meaningful sense of the word. Decades ago, AI researchers largely abandoned their quest to build computers that mimic our wondrously flexible human intelligence and instead created algorithms that were useful (i.e., profitable). Despite this understandable detour, some AI enthusiasts market their creations as genuinely intelligent. For example, a few months ago, Blaise Aguera y Arcas, the head of Google’s AI group in Seattle, argued that “statistics do amount to understanding.” As evidence, he cited a few exchanges with Google’s LaMDA chatbot. The exchanges were impressively coherent, but LaMDA is still what Gary Marcus and Ernest Davis characterize as “a fluent spouter of bullshit,” because computer algorithms… Read More ›

The Great American Novel Will Not Be Written by a Computer
It takes more than statistical genius to understand words and create works of art.

I’ve written before about how computer algorithms are like Nigel Richards, the New Zealander who has won multiple French-language Scrabble tournaments even though he does not understand the words he is spelling. Computers can similarly manipulate words in many useful ways (spellchecking, searching, alphabetizing) without any understanding of the words they are manipulating. To know what words mean, they would have to understand the world we live in. They don’t. One example is their struggles with the Winograd schema challenge: recognizing what a pronoun like “it” refers to in a sentence such as “The trophy doesn’t fit in the suitcase because it is too big.” Another example is their inability to answer simple questions like, “Is it safe to walk downstairs backwards if I close my eyes?” A third is the brittleness of language translation programs. Yet another… Read More ›

Superintelligent AI Is Still a Myth
Neither the old classical approaches nor the new data-scientific angle can make any headway on good ol’ common sense.

The official Winograd Schema Challenge, organized by Hector Levesque and friends to see whether AI could learn common sense, was officially retired in 2016, for the embarrassing reason that even the well-funded, bleeding-edge Google Brain team performed poorly on a test set of a few hundred questions.
Read More ›