Ouroboros, a snake coiled in a ring, biting its own tail (engraving).

Model Collapse: AI Chatbots Are Eating Their Own Tails

The problem is fundamental to how they operate. Without new human input, their output starts to decay

Once upon a time in the near future, a student could feed ChatGPT an idea for an essay, go off to listen to some music, eat pizza, and chat with friends — and rest assured that a B+ essay would be waiting at home.

Long before that near future had any chance of occurring, teachers were earnestly debating the merits of such a system. In elite media like the New York Times, it was endorsed as a teaching aid. And in Scientific American, we learned that ChatGPT (and serviceable knock-offs, presumably) can improve education.

Ouroboros in a medieval manuscript (public domain).

There is only one problem. ChatGPT is an ouroboros — the mythical serpent that eats its own tail. Or so we are beginning to learn from the tech press…

The problem is called model collapse. As Ben Lutkevitch explains at TechTarget, “Without human-generated training data, AI systems malfunction. This could be a problem if the internet becomes flooded with AI-generated content.” Why is that a problem? Because the machines are not creative and they are not doing any thinking. He continues,

Model collapse happens when new AI models are trained on generated or synthetic data from older models. The new models become too dependent on patterns in the generated data. Model collapse is based on the principle that generative models are replicating patterns that they have already seen, and there is only so much information that can be pulled from those patterns.

In model collapse, probable events are overestimated and improbable events are underestimated. Through repeated generations, probable events poison the data set, and tails shrink. Tails are the improbable but important parts of the data set that help maintain model accuracy and output variance. Over generations, models compound errors and more drastically misinterpret data.

Ben Lutkevitch, “Model collapse explained: How synthetic training data breaks AI,” TechTarget, July 7, 2023
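The mechanism Lutkevitch describes can be seen in miniature. What follows is a toy sketch in Python, not the researchers' code; the topics, probabilities, and sample sizes are made up purely for illustration. A trivial "model" that merely estimates topic frequencies is retrained, generation after generation, only on its own synthetic output; the rare topics (the tails) tend to fade, and once a topic drops out entirely it can never return.

# Toy illustration of model collapse (not from any of the quoted research).
# Each "generation" of a trivial frequency model is trained only on samples
# produced by the previous generation, so rare topics tend to disappear.

import numpy as np

rng = np.random.default_rng(0)
num_topics = 10

# Generation 0: real, human-generated data over ten topics.
# Topics 8 and 9 are rare; they are the "tail" of the distribution.
true_probs = np.array([0.20, 0.20, 0.15, 0.15, 0.10,
                       0.08, 0.05, 0.04, 0.02, 0.01])
data = rng.choice(num_topics, size=300, p=true_probs)

for gen in range(20):
    # "Train" the model: estimate topic frequencies from the current dataset.
    counts = np.bincount(data, minlength=num_topics)
    probs = counts / counts.sum()
    print(f"gen {gen:2d}: share of the two rarest topics = {probs[8:].sum():.3f}")
    # The next generation is trained only on synthetic samples from this model.
    # Once a rare topic's count hits zero, its estimated probability stays zero
    # forever: that part of the tail has vanished irreversibly.
    data = rng.choice(num_topics, size=300, p=probs)

This is only an illustration of the principle; real language models are vastly more complex, but the same feedback loop is what the researchers mean when they say the tails of the original distribution disappear.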

The researchers who coined the term “model collapse” have spotted it in ChatGPT and Google’s Bard: “We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear.”

The dejected tone in the tech media is unmistakable. Terms like “data poisoning” are being flung around:

Since the very recent public launch of Large Language Models (LLMs) such as OpenAI’s ChatGPT and Google’s Bard, the inherent assumption has been of untrammelled progress.

But the discovery of in-built Model Collapse in systems such as LLMs has negated this assumption, and has experts talking about a potentially unavoidable degeneration of the systems themselves.

Data poisoning, in broader terms, refers to any factor that contributes to the creation of data that inaccurately reflects reality.

Ilkhan Ozsevim, “Research finds ChatGPT & Bard headed for ‘Model Collapse,’” AI Magazine, June 20, 2023

At Popular Mechanics, the mood is a bit lighter; we are advised to grab the popcorn:

One recent study, published on the pre-print arXiv server, used a language model called OPT-125m to generate text about English architecture. After training the AI on that synthetic text over and over again, the 10th model’s response was completely nonsensical and full of a strange obsession with jackrabbits…

Darren Orf, “A New Study Says AI Is Eating Its Own Tail,” Popular Mechanics, October 20, 2023

Here are some stages of the collapse into jackrabbits:

Example of text outputs of an OPT-125m model affected by model collapse: models degrade over generations, where each new generation is trained on data produced by the previous generation.

Input: some started before 1360 — was typically accomplished by a master mason and a small team of itinerant masons, supplemented by local parish labourers, according to Poyntz Wright. But other authors reject this model, suggesting instead that leading architects designed the parish church towers based on early examples of Perpendicular

Outputs:

Gen 0: Revival architecture such as St. John’s Cathedral in London. The earliest surviving example of Perpendicular Revival architecture is found in the 18th @-@ century Church of Our Lady of Guernsey, which dates from the late 19th century. There are two types of perpendicular churches : those

Gen 1: architecture such as St. Peter’s Basilica in Rome or St. Peter’s Basilica in Buenos Aires. There is no evidence that any of these buildings were built during the reign of Pope Innocent III, but it is possible that they may have been built during the reign of his successor, Pope Innocent

Gen 7: architecture in England. In an interview with The New York Times, Wright said : ” I don ’t think there is anything wrong with me being able to do what I want to do. It just doesn ’t work for me. ” He added : ” I don ’t know if you can call it funny,

Gen 9: architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-

The Curse of Recursion

The only remedy, of course, is more creative and commonsensical human input. As Orf explains, “For now, engineers must sift through data to make sure AI isn’t being trained on synthetic data it created itself. For all the hand-wringing regarding AI’s ability to replace humans, it turns out these world-changing language models still need a human touch.”

But that “human touch” is precisely what the tech giants were trying to avoid! It means paying people to work with ideas instead of trying to get the same service free from machines and pocketing the difference.

Laying off copy editors

From Futurism, we learn how far Microsoft has been willing to go to avoid paying people to research and write:

We’ve known that Microsoft’s MSN news portal has been pumping out a garbled, AI-generated firehose for well over a year now. The company has been using the website to distribute misleading and oftentimes incomprehensible garbage to hundreds of millions of readers per month. As CNN now reports, that’s likely in large part due to MSN’s decision to lay off most of the human editors at MSN over the years. In the wake of that culling, the company has redirected its efforts toward AI, culminating in a $10 billion stake in ChatGPT maker OpenAI earlier this year.

Victor Tangermann, “Microsoft Shamelessly Pumping Internet Full of Garbage AI-Generated ‘News’ Articles,” Futurism, November 2, 2023

Odd moments in leaving it all to AI included a travel guide to Canada’s capital city, Ottawa, that suggested visiting the Ottawa Food Bank, ranking it the city’s #3 attraction, just below the famous Winterlude festival at #2. But then AI copy generators don’t get out much, do they? More seriously, an AI-generated MSN article described former NBA player Brandon Hunter, who had just died, as “useless at 42.”

There are many similar stories out there, of course. Gannett had to pause its move to AI-generated copy this summer after the program flubbed high school sports stories.

The critical question researchers are raising is this: can AI copy generators avoid eating their own tails when humans don’t feed them?

Meanwhile, back in the classroom, it looks as though the kids will just have to put off that pizza fest until Friday night and go back to doing their own thinking on school nights.

Just like this was 1950 or something…

You may also wish to read: Can we write creative computer programs? As Robert J. Marks tells World Radio, people have tried making computers creative, but with no luck. Programmers cannot write programs that are more creative than they themselves are.


Denyse O'Leary

Denyse O'Leary is a freelance journalist based in Victoria, Canada. Specializing in faith and science issues, she is co-author, with neuroscientist Mario Beauregard, of The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul; and with neurosurgeon Michael Egnor of the forthcoming The Immortal Mind: A Neurosurgeon’s Case for the Existence of the Soul (Worthy, 2025). She received her degree in honors English language and literature.
