Site of a derailed train accident (Adobe Stock)

Is OpenAI — and Chatbots in General — the Next Train Wreck?

Gary Marcus notes the string of head office departures, and both he and Jeffrey Funk look at the history of hype, hope, and long AI winters

Yesterday AI commentator Gary Marcus described OpenAI — the ChatGPT chatbot company — as a slow-motion train wreck. He was referring to yet another round of high-profile departures, illustrated by a series of puff-piece magazine covers, commenting, “From left to right that’s Ilya Sutskever (now gone, less than a year later), Greg Brockman (on leave, at least until the end of the year), CTO Mira Murati (departure just announced) and Sam Altman (fired, and then rehired).” We’d heard of them (plenty, actually), but he also notes, “Co-founders John Schulman also left, so did Andrej Karpathy and Jan Leike, and perhaps a dozen others.”

Update from later in the day: “ChatGPT maker OpenAI is losing even more executives — right as Reuters reports that it’s looking to restructure its core business and wrestle control away from its non-profit board. … Meanwhile, the company is burning through cash at an alarming rate … ” (Futurism)

And productivity?

GPT-5 hasn’t dropped, Sora hasn’t shipped, the company had an operating loss of $5b last year, there is no obvious moat, Meta is giving away similar software for free, many lawsuits pending.

Yet people are valuing this company at $150 billion dollars.

Marcus, September 25, 2024

Henrietta, the pet chicken?

Marcus, who has worked in AI for decades, is an industry skeptic of these large language models. From an interview in the engineering mag IEEE Spectrum last week:

What do you see as the main problems it keeps coming up against?

Marcus: Number one is hallucinations. These systems smear together a lot of words, and they come up with things that are true sometimes and not others. Like saying that I have a pet chicken named Henrietta is just not true. And they do this a lot. We’ve seen this play out, for example, in lawyers writing briefs with made-up cases.

Second, their reasoning is very poor. My favorite examples lately are these river-crossing word problems where you have a man and a cabbage and a wolf and a goat that have to get across. The system has a lot of memorized examples, but it doesn’t really understand what’s going on. If you give it a simpler problem, like one Doug Hofstadter sent to me, like: “A man and a woman have a boat and want to get across the river. What do they do?” It comes up with this crazy solution where the man goes across the river, leaves the boat there, swims back, something or other happens.

Sometimes he brings a cabbage along, just for fun.

Marcus: So those are boneheaded errors of reasoning where there’s something obviously amiss. Every time we point these errors out somebody says, “Yeah, but we’ll get more data. We’ll get it fixed.” Well, I’ve been hearing that for almost 30 years. And although there is some progress, the core problems have not changed.

Eliza Strickland, “How and Why Gary Marcus Became AI’s Leading Critic,” IEEE Spectrum, September 17, 2024

Marcus is not the only one who has noticed

Jeffrey Funk advised our readers recently, “Don’t listen to the tech bros on this one”:

… the early days of the Internet were very different from today’s early days of AI. There was an exploding market for Internet services, from e-commerce to content, during the dotcom bubble, while today the market for AI services is hard to find. This is why Sequoia’s David Cahn, Goldman Sachs’ Jim Covello, Citadel’s Ken Griffin, and long-time bull Ed Yardeni have talked many times about the AI bubble.

For example, not only is there no huge commercial market for digital plagiarism by chatbots; the chatbots are attracting huge copyright lawsuits instead.

Funk asks, “Ask yourself, how many of today’s AI applications can even remotely compare to the convenience we got from buying things on the Internet in 2000?” True, but it’s worse than that. The things the chatbots are automating are often things that many people feel shouldn’t be automated, like student essays…

He seconds Marcus’s warning too: “When you hear that AI is in its early days, be skeptical. Proponents of AI have been saying this for years if not decades. We are now on the third wave of AI hype (the second was while I was a graduate student at Carnegie-Mellon in the early 1980s) and the third wave started more than 10 years ago.” Meanwhile, he reminds us, outrageous statements proliferate:

In the last five to ten years, we have heard that ride hailing will eliminate all private cars and thus all parking lots, freeing millions of acres in space. Cloud kitchens will replace restaurants because we are too busy (and rich) to cook for ourselves. Neo-banks will replace traditional banks because they don’t need a building and algorithms can make better decisions than can humans. Crypto will prevent the federal reserve from enabling inflation. Tele-health will enable us to get good medical care without visiting doctors.

Jeffrey Funk, “Are We Close to Peak AI Hype?” Mind Matters News, July 12, 2024

The best advice we can think of, in response to all that, is offered by business prof Gary Smith: During a gold rush, sell shovels. Don’t bet on gold — or AI — one way or the other; bet on something that will work for you.

Note: If things ever settle down at OpenAI, the company could look at problems with the chatbot product in general, such as hallucination and model collapse.

