
AI vs. Human Intentionality

If ChatGPT were trained over and over on its own output, it would eventually turn to gibberish

We can do a simple experiment that demonstrates the difference between AI and human intentionality.

ChatGPT and models like it are sophisticated forms of a mathematical model known as a Markov chain. A Markov chain is built on the Markov assumption: the future is entirely a product of the recent past.

In other words, if we know the recent past, then nothing else we learn about the more distant past will improve our ability to predict the future.
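
In symbols, for a chain that looks back k steps, the assumption says:

P(next item | everything seen so far) = P(next item | the last k items)

Nothing older than the last k items changes the prediction.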

In ChatGPT’s case, this means the model is built on the assumption that everything needed to predict the next word is contained within a limited window of previously seen words. ChatGPT’s window was about 3,000 words, and I believe the newest version has increased that window tenfold, to about 30,000 words.

The important point is that, since ChatGPT is a Markov chain, we can gain insight into its behavior and limitations by analyzing a much simpler Markov chain, without spending millions of dollars to train our own ChatGPT.

For Humans, More Reading Equals Better Writing

So, let’s consider one hallmark of human reading and writing: the more humans read of other humans’ work, and the more they practice their own writing, the better their writing gets.

If ChatGPT’s behavior were equivalent to human intentionality, we would see the same thing occur: ChatGPT’s output would become better and better the more it consumed of its own output.

We can test this idea with a simpler Markov chain by training it on its own output.

I wrote a small Markov generator that uses a window of a few characters and trained it on “The Three Little Pigs.” I then retrained the chain on its own output, over and over again.
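
For readers who want to try this themselves, here is a minimal sketch of that kind of generator in Python. The window size, the number of feedback iterations, and the file name are my own illustrative assumptions, not the author’s exact code.

import random
from collections import defaultdict

K = 4  # window of a few characters (assumed size)

def train(text, k=K):
    # Map each k-character window to the characters observed after it.
    model = defaultdict(list)
    for i in range(len(text) - k):
        model[text[i:i + k]].append(text[i + k])
    return model

def generate(model, k=K, length=1000):
    # Sample one character at a time, conditioning only on the
    # last k characters -- the Markov assumption in action.
    window = random.choice(list(model))
    out = window
    for _ in range(length - k):
        followers = model.get(window)
        if not followers:  # dead end: restart from a random seen window
            window = random.choice(list(model))
            out += window
            continue
        out += random.choice(followers)
        window = out[-k:]
    return out

# Retrain the chain on its own output, over and over.
text = open("three_little_pigs.txt").read().lower()  # assumed file name
for generation in range(30):
    model = train(text)
    text = generate(model, length=len(text))
print(text[:500])

Each pass trains a fresh model on the previous pass’s output, so any window the chain fails to reproduce is gone for good in every later generation.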

First, here is the initial output. It is not ChatGPT quality, but the generator produces some coherent words and phrases.

"came a house wolf was and get so them so then the churn and said pletree so the puff and he next down at fire the little pigthe little pig jumped himthe five oclocked and ill huff and atehim very said little pig i know which frightened down the went three little pig are you will be readyat thehouse with a butter again mr smiths housethe little pigsand said the wolf came hairhe hilling netthe throw when this built a nice and said that time again before me are your housethe little pig jumped doing i saw which as he huffed a home as ver after churnips and i i frightenedthe will blow the wolf fell the had to hillthen the little pig little pig littlepigthe little pot enought the puffed an d it began to hehuffed and pig are you will go the was house intothe wolf and bustle pigwhat time to pick it be readyreat round the went off before six when he saw the thing down with alonga wolf came come been to keep the man with that up tothe wolf and if you got into it presen"

Now let’s see what happens after many feedback iterations.

"you withosefright the bricks the blewthe bricks the blewthe blewthe bricks the blewthe bricks the bricks the blewthefair into hiled at the bricks the bricks the bricks the bricks the bricks the bricks the bricks the bricks the bricks the bricks the bricks the bricks the bricks the bricks the bricks the bricks the blewthe bricks the bricks the bricks the blewthe bricks thefair into hiled at thefair into hiled at the blewthe bricks the bricks the bricks the bricks the bricks the bricks the bricks the blewthe bricks the bricks thefair into hiled at the bricks the bricks thefair into hiled at the bricks the bricks the bricks the bricks the bricks the blewthe bricks the bricks thefair into hiled at thefair into hiled at the bricks thefair instant bout off furze tree little pot the bricks the blewthe bricks thefair into hiled at the bricks the bricks the bricks the blewthe ble"

As we can see, the Markov chain degenerated into a repetitive sequence.

Losing Instead of Gaining Information

Why does this happen? Because the Markov chain training process is inherently lossy. The chain never learns new information that wasn’t already in its training data; it can only lose information, a kind of second law of thermodynamics applied to information.
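
We can even put a number on the loss. The snippet below (my illustration, not part of the original experiment) counts the distinct k-character windows in a text, a crude measure of how much information it still carries; run it on successive generations of the chain’s output and the count falls steadily.

def distinct_windows(text, k=4):
    # Number of distinct k-character windows in the text. Repetitive
    # text reuses the same few windows, so the count collapses.
    return len({text[i:i + k] for i in range(len(text) - k)})

varied = "the little pig jumped up and frightened the wolf so much"
looped = "the bricks the blew" * 3  # similar length, far fewer windows
print(distinct_windows(varied), distinct_windows(looped))

Because the generator can only reproduce windows that already occur in its training text, the count can never rise from one generation to the next; it can only shrink or stay flat.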

The same observation applies to extremely sophisticated Markov chains like ChatGPT: they only ever lose information. So, if ChatGPT were trained over and over again on its own output, it would eventually degenerate into complete gibberish.

This conclusively demonstrates that what ChatGPT does is fundamentally different from human intentionality.


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
