Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: Artificial intelligence and inference

Confused robot

If AIs Don’t Know What They’re Doing, Can We Hope to Explain It?

With AI, we have a world of powerful, useful, but entirely opaque systems. We don’t know why they make the decisions they do, and neither do they

In yesterday’s post, I talked about the fact that AIs don’t understand the work they’re doing, which makes the goal of making them think like people elusive. This brings us to the second problem, one that ended up spawning an entire field, known as “Explainable AI.” Not only do neural networks not know what they’re doing when they do it; they also can’t, in general, explain to their designers or users why they made a given decision. They’re black boxes; in other words, they are obstinately opaque to any attempt at a conceptual understanding of their decisions or inferences. How does that play out? It means, for example, that in image recognition tasks like facial recognition, the network can’t explain Read More ›
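To see concretely what “black box” means here, consider a toy version of the arithmetic such a network performs. This is a minimal sketch in Python with NumPy, not code from any real system: the random weights are placeholders standing in for the millions of learned values a trained face-recognition network would hold, but the principle is the same. The decision is numbers flowing through numbers, with no conceptual labels anywhere to be found.

```python
import numpy as np

# Toy two-layer network. The weights are random placeholders for
# illustration only; a trained model would have learned values,
# but they would be just as unlabeled and uninterpretable.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)  # hidden layer: weighted sums + ReLU
    return W2 @ h + b2                # raw scores for two classes

x = np.array([0.9, -0.3, 0.5])        # an input (say, image features)
scores = forward(x)
decision = int(np.argmax(scores))     # the network's "decision"
print(scores, decision)
# Nothing in W1, b1, W2, or b2 records *why* this class won;
# the only "explanation" on offer is the arithmetic itself.
```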

A white AI robot looking at a big screen of big data (generative AI stock image)

Why, Despite All the Hype We Hear, AI Is Not “One of Us”

It takes an imaginative computer scientist to believe that the neural network knows what it’s classifying or identifying. It’s a bunch of relatively simple math
The AI scientist’s dream of general intelligence, often referred to as Artificial General Intelligence (AGI), remains as elusive as ever. Read More ›
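A gloss on “relatively simple math”: the basic unit of every deep network is a single artificial neuron, which takes a weighted sum of its inputs and squashes the result. A minimal sketch, with made-up weights purely for illustration:

```python
import numpy as np

# One artificial neuron: a weighted sum passed through a logistic
# "squashing" function. It has no notion of faces, cats, or words;
# it only maps some numbers to a number between 0 and 1.
def neuron(x, w, b):
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

w = np.array([0.7, -1.2, 0.4])   # made-up weights, not trained values
b = 0.1
print(neuron(np.array([1.0, 0.0, 2.0]), w, b))
```

Stack thousands of these units into layers and you have a deep network: more of the same arithmetic, not a different kind of thing.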