
Tag: Hector Levesque

Confused robot

If AIs Don’t Know What They’re Doing, Can We Hope to Explain It?

With AI, we have a world of powerful, useful, but entirely opaque systems. We don’t know why they make the decisions they do, and neither do they

In yesterday’s post, I talked about the fact that AIs don’t understand the work they’re doing. That makes the goal of getting them to think like people elusive. This brings us to the second problem, one that ended up spawning an entire field, known as “Explainable AI.” Neural networks not only don’t know what they’re doing when they do it; they also can’t, in general, explain to their designers or users why they made such-and-such a decision. They are a black box; in other words, they are obstinately opaque to any attempt at a conceptual understanding of their decisions or inferences. How does that play out? It means, for example, that with image recognition tasks like facial recognition, the network can’t explain Read More ›

Binary code background

Superintelligent AI Is Still a Myth

Neither the old classical approaches nor the new data-science angle can make any headway on good ol’ common sense

The official Winograd Schema Challenge, organized by Levesque and friends to see if AI could learn common sense, was retired in 2016 for the embarrassing reason that even the well-funded, bleeding-edge Google Brain team performed poorly on a test set of a few hundred questions. A typical schema asks, for example: “The trophy would not fit in the brown suitcase because it was too big. What was too big?” A sketch of one such schema appears after this teaser.

Read More ›