Mind Matters Natural and Artificial Intelligence News and Analysis

Erik J. Larson


Is ChatGPT a Dead End?

There is still no known path to Artificial General Intelligence, including ChatGPT.

I want to talk more about Large Language Models (LLMs) and ChatGPT, as it’s all anyone asks me about when I give talks, whether in Europe or here in the States. It doesn’t matter where. It’s always ChatGPT. Not self-driving cars. Not robotics. It’s the tech that Sam Altman dissed as “cloning human speech” that has apparently captured everyone’s attention. If I don’t talk about it, I’m not talking about AI. Got it! So I’ll talk about it.

Garden Pathing AI

Not to go all Altman on everyone, but I think LLMs are nothing but a “garden path” technology. Let me explain. In linguistics, a garden path sentence is one that starts out grammatically but leads the reader to a dead end. The Read More ›


What Mission Impossible Tells Us About AI Mythology

If you’re looking for an intelligent take on existential risk and superintelligent AI, the latest Mission Impossible movie is not for you.

Tom Cruise — I mean Ethan Hunt — likes to run. He likes to ride motorcycles. He’s always down to race cars. He’s all in on leaping out of windows, base jumping, and hand-to-hand combat. And he always wins the heart of the beautiful girl. You’ll see all this in Mission Impossible: Dead Reckoning, Part One, the latest in the long-running film series, and the series’ foray into “existential risk” thinking about superintelligent AI. “The Entity,” as it’s called, is the nemesis that Hunt and the other members of the mum’s-the-word spy organization IMF (Impossible Mission Force) must confront, against all odds, as it’s smarter than any human and learning constantly. The Entity cleverly captures personal information about Hunt’s Read More ›


Why ChatGPT Is Killing Off Traditional AI

We're living in another AI "winter"

The web proved that gathering data and using machine learning techniques resulted in superior performance on a number of central tasks in information extraction and natural language processing, like entity extraction, co-reference resolution, sentiment analysis, and many others. For all practical purposes, the debate raging among AI scientists was resolved definitively by about 2010: the idea of hiring smart people to hand-code “knowledge” in a computer-readable language was quite limited. It had its day, to be sure, but it wasn’t a path to AGI. It wasn’t a path to anything other than hiring philosophers. My own career transitioned from the manual code-it-all-in approach to training and developing systems based on the provision of data. Read More ›


Don’t Expect AI to Revolutionize Science

Data science is a downstream phenomenon. Thinking isn't.

The September 2023 cover of The Economist features a robot sitting under an apple tree, raising a finger in some Eureka! moment after an apple falls from the tree and hits it on the head. Anyone even remotely familiar with the history of science knows the image belongs to Isaac Newton, who recounted watching an apple fall to the ground while he sat in his garden at Woolsthorpe Manor in 1666. As he later told it, he asked himself why the apple should fall perpendicularly to the ground, which gave rise to the idea that the very same force pulling the apple to earth kept the moon falling toward the earth, and the earth toward the sun. The apple, in other words, Read More ›


How Can We Make Genuine Progress on AI?

True progress on AI means moving beyond induction and data analysis. Researchers must start taking the “commonsense knowledge problem” seriously.

To younger generations who grew up on the web, it may come as a surprise that Big Data AI — the AI trained to personalize newsfeeds, recognize friends and faces, and more recently converse with us through large language models like GPT — is but one approach to artificial intelligence. It’s also ancient, at least by the standards of the field. Neural networks (technically, “Artificial Neural Networks,” or ANNs) appeared as early as the 1940s and were usable for simple tasks by the 1950s. Then they disappeared for most of the 1960s and ’70s. An important innovation known as “backpropagation” appeared in the 1980s, but back then there weren’t huge volumes of data on which to train networks. They fell back out of favor, as rule-based approaches Read More ›