Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Yann LeCun


If AIs Don’t Know What They’re Doing, Can We Hope to Explain It?

With AI, we have a world of powerful, useful, but entirely opaque systems. We don’t know why they make decisions, and neither do they

In yesterday’s post, I talked about the fact that AIs don’t understand the work they’re doing. That makes the goal — to make them think like people — elusive. This brings us to the second problem, which ended up spawning an entire field, known as “Explainable AI.” Neural networks not only don’t know what they’re doing when they do it, they can’t in general explain to their designers or users why they made such-and-such a decision. They’re a black box; in other words, they are obstinately opaque to any attempt at a conceptual understanding of their decisions or inferences. How does that play out? It means, for example, that with image recognition tasks like facial recognition, the network can’t explain Read More ›


The LLM Deep Fake—Follow the Money

Hype sells. Just look at what's happening with AI

Large language models (LLMs) are far more impressive now than they were just a few months or a year ago. Compare these exchanges I had with GPT-3 in June 2022 and with ChatGPT two days ago:

Gary: Can I improve my test scores if I study after taking the test?

GPT-3: That’s a good question. I’m not sure if you can improve your test scores by studying after taking the test, but it couldn’t hurt to try!

ChatGPT: Studying after taking a test is unlikely to directly improve your scores on that particular test since the test has already been completed. Once a test is finished, the opportunity to answer questions and provide responses has passed. However, studying after taking Read More ›


Large Language Models Are An Unfortunate Detour in AI

Gary Smith: Even though LLMs have no way of assessing the truth or falsity of the text they generate, the responses sound convincing

For decades, computer scientists have struggled to construct systems possessing artificial general intelligence (AGI) that rivals the human brain — including the ability to use analogies, take context into account, and understand cause-and-effect. Marvin Minsky (1927–2016) was hardly alone in his overly optimistic 1970 prediction that “In from three to eight years we will have a machine with the general intelligence of an average human being.” AGI turned out to be immensely more difficult than imagined, and researchers turned their attention to bite-size projects that were doable (and profitable). Recently, large language models (LLMs) — most notably OpenAI’s GPT-3 — have fueled a resurgence of hope that AGI is almost here. GPT-3 was trained by breaking 450 gigabytes of text data into Read More ›
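The training step the teaser alludes to — breaking raw text into small units — can be sketched roughly as follows. This is a toy greedy longest-match tokenizer with a made-up vocabulary, purely for illustration; GPT-3’s actual tokenizer is a learned byte-pair encoding whose vocabulary comes from the training corpus.

```python
# Toy sketch of text-to-token splitting (hypothetical fixed vocabulary;
# real LLM tokenizers learn their vocabulary from data).
def tokenize(text, vocab):
    """Greedily match the longest vocabulary entry at each position."""
    tokens = []
    i = 0
    while i < len(text):
        # Longest vocabulary entry that prefixes the remaining text,
        # falling back to a single character if nothing matches.
        match = next(
            (text[i:i + n] for n in range(len(text) - i, 0, -1)
             if text[i:i + n] in vocab),
            text[i]
        )
        tokens.append(match)
        i += len(match)
    return tokens

vocab = {"test", " score", "s", " ", "improve", " my"}
print(tokenize("improve my tests", vocab))
# → ['improve', ' my', ' ', 'test', 's']
```

Note how “tests” splits into two tokens, “test” and “s” — this is why a model sees token sequences rather than whole words, one reason its fluency is decoupled from any grasp of meaning.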