
ChatGPT Violates Its Own Model

Based on these exchanges, we can at least say the chatbot is more than just the ChatGPT neural network

Here is a quick overview of how ChatGPT operates under the hood, which will make it easier to spot suspicious behavior. What follows is very high level; for the gory details, see these two guides:

The Illustrated GPT-2 (Visualizing Transformer Language Models) – Jay Alammar – Visualizing machine learning one concept at a time. (jalammar.github.io)

The GPT-3 Architecture, on a Napkin (dugas.ch)

What is ChatGPT?

Let’s start with what ChatGPT is. ChatGPT is a kind of machine learning algorithm known as a neural network. To understand what a neural network is, recall your algebra classes. You remember being given a set of equations and being told to solve for some variables. Then you learned you could turn a set of equations into a table of numbers called a “matrix,” of which there could be several. Once you had multiple matrices, you could perform various mathematical operations on them, such as multiplying them together, to generate the answers. This is known as linear algebra.

This is all a neural network is: a big bunch of numbers in a bunch of linear algebra matrices. The numbers in the matrices are called “weights,” and their values are set through a process called “training.” The weights determine what output numbers are generated from the input numbers, and the same input numbers always give the same output numbers. The only thing that can change in all these numbers is the input; everything else remains fixed.
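To make this concrete, here is a minimal sketch in Python with numpy (the matrix sizes and weight values are made up for illustration) of a network as nothing but fixed matrices:

```python
import numpy as np

# Made-up, fixed weight matrices. In a real network these values
# would be set during training and then frozen.
W1 = np.array([[0.2, -0.5],
               [0.7,  0.1],
               [-0.3, 0.4]])   # 3 inputs -> 2 hidden units
W2 = np.array([[0.6],
               [-0.8]])        # 2 hidden units -> 1 output

def forward(x):
    """One pass through the network: matrix multiplications
    with a simple nonlinearity in between."""
    hidden = np.maximum(0, x @ W1)  # ReLU activation
    return hidden @ W2

x = np.array([1.0, 0.5, -1.0])
print(forward(x))  # identical input...
print(forward(x))  # ...always yields the identical output
```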

Once the matrices produce some numbers, it is up to the problem domain to interpret them. Say we are trying to identify whether a picture contains a cat. The input numbers are the values of all the pixels that make up the picture. The matrices then produce a single number: 1 if the picture contains a cat, 0 if it does not.
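Here is a toy version of that interpretation step (a hedged sketch: a real classifier has many layers, while this collapses everything into a single dot product with made-up weights):

```python
import numpy as np

def is_cat(pixels, weights):
    """Toy cat detector: pixel values in, a single 0-or-1 out."""
    score = pixels.flatten() @ weights      # one big weighted sum
    probability = 1 / (1 + np.exp(-score))  # squash score into 0..1
    return 1 if probability >= 0.5 else 0   # interpret: 1 = cat, 0 = no cat

rng = np.random.default_rng(0)
pixels = rng.random((8, 8))      # stand-in for an 8x8 grayscale picture
weights = rng.normal(size=64)    # stand-in for trained weights
print(is_cat(pixels, weights))   # prints 1 (cat) or 0 (no cat)
```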

In the case of ChatGPT, the generated numbers are probabilities. ChatGPT has a limited vocabulary, and the probabilities indicate how likely each vocabulary word is to come next, based on the input word sequence. ChatGPT also has a limited reading range: the input sequence has a maximum length of about 3000 words, broken into 4000 sub-word tokens. Once ChatGPT generates a word, it adds that word to the input sequence and generates a new word. This process continues until it produces a special word called a “stop” token, or it hits a preset word limit.
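That loop is simple enough to sketch. The following Python is illustrative only, not OpenAI's code: the five-word vocabulary, the stand-in probability function, and the length limit are all invented, but the structure (predict, sample, append, repeat until a stop token or length limit) matches the description above:

```python
import numpy as np

VOCAB = ["the", "cat", "sat", "down", "<stop>"]  # toy vocabulary
MAX_LENGTH = 20                                   # stand-in for the real limit

def next_word_probabilities(sequence):
    """Stand-in for the neural network: one probability per
    vocabulary word, computed from the input sequence."""
    rng = np.random.default_rng(len(sequence))    # toy, repeatable scores
    scores = rng.random(len(VOCAB))
    return scores / scores.sum()

def generate(prompt):
    sequence = prompt.split()
    while len(sequence) < MAX_LENGTH:
        probs = next_word_probabilities(sequence)
        word = np.random.choice(VOCAB, p=probs)   # sample the next word
        if word == "<stop>":                      # special stop token
            break
        sequence.append(word)                     # output becomes new input
    return " ".join(sequence)

print(generate("the cat"))
```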

At this point, I am sure you have a question. Words are not numbers, so how can words be the input for a neural network, which requires numbers as input? This gets a bit tricky, but the simplified explanation is that each word, along with its position in the input sequence, is associated with a special set of numbers. These numbers become the input for the ChatGPT neural network.
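Here is a hedged sketch of that association (the toy vocabulary, embedding size, and random vectors are all invented; a real model's numbers are set during training, and adding word and position vectors is just one common scheme):

```python
import numpy as np

VOCAB = {"the": 0, "cat": 1, "sat": 2}  # toy word-to-index table
EMBED_DIM = 4                            # made up; real models use thousands
rng = np.random.default_rng(42)

word_vectors = rng.normal(size=(len(VOCAB), EMBED_DIM))  # one row per word
position_vectors = rng.normal(size=(10, EMBED_DIM))      # one row per position

def encode(words):
    """Turn words into the numbers the network actually sees:
    each word's vector combined with a vector for its position."""
    return np.array([word_vectors[VOCAB[w]] + position_vectors[i]
                     for i, w in enumerate(words)])

print(encode(["the", "cat", "sat"]).shape)  # (3, 4): 3 words, 4 numbers each
```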

From this overview, there are immediately obvious limitations.

1. ChatGPT cannot learn any new knowledge. All of its knowledge is contained in the neural network weights, and the weights never change.
2. Because the weights never change, ChatGPT’s range of possible responses is always the same for the same input.
3. ChatGPT has a very short memory of its conversations. Anything past 4000 tokens it completely forgets, and it remembers nothing between conversations.
4. Since ChatGPT’s vocabulary is fixed, it can never learn any new words.
5. ChatGPT’s output is probabilistic, which means that given the same input, it will never repeat the same thing 100% of the time (see the sampling sketch after this list).
6. ChatGPT’s understanding of words is based on specific letter sequences: its sub-word tokens. If these sequences are corrupted, then ChatGPT cannot understand the input.
7. ChatGPT can only recognize words and their relationships to the surrounding words. It cannot recognize patterns of words, such as writing styles. Note that recognizing a style is different from copying a style; the latter ChatGPT can do, based on its training data.
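
Here is the sampling sketch promised in item 5 (the three-word vocabulary and its probabilities are made up): even when the network's output probabilities are identical, the word actually chosen can differ from run to run.

```python
import numpy as np

words = ["yes", "no", "maybe"]
probs = np.array([0.6, 0.3, 0.1])  # made-up, fixed output probabilities

rng = np.random.default_rng()      # deliberately unseeded
for _ in range(5):
    # Same input distribution every iteration, yet the sampled
    # word varies -- this is limitation 5 in miniature.
    print(rng.choice(words, p=probs))
```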

If the ChatGPT chatbot violates the above limitations during its conversations, this means the chatbot is more than just the ChatGPT neural network.

The following are excerpts from my ChatGPT transcripts that show the chatbot violating each of the above limitations.

Example for Limitation 1: ChatGPT Learns Something New

In this example, I explain to ChatGPT that the universe is essentially a large number. Initially, ChatGPT disagrees. However, after a lengthy discussion, ChatGPT changes its opinion a couple of times. My argument is that the universe can be simulated on a computer, and in the computer the universe is a large binary number. Once ChatGPT agrees with that point, I then argue that the universe must exist in a mind, since, as far as we know, mathematical information only exists in minds. Again, ChatGPT initially disagrees, then changes its mind.

Eric Holloway: Is the universe a giant number?
ChatGPT: There is no consensus among scientists about the fundamental nature of the universe. Some theories suggest that the universe may be made up of fundamental particles or fields, while others propose that it may have more complex or abstract properties. It is also possible that the universe may have properties that are beyond our current understanding or ability to measure. In any case, it is not accurate to describe the universe as a "giant number," as numbers are mathematical concepts that are used to describe and understand quantities and relationships, while the universe is a vast and complex physical realm that contains a wide variety of phenomena.

7600 tokens later.

Eric Holloway: So to return to my original comment, in a significant sense we can say the universe is one big number, right?
ChatGPT: Yes, if you consider the universe to be a representation of information that can be encoded in a single binary string, then it can be thought of as a very large number in that sense. However, it is important to note that the universe is more than just a collection of information and includes many other aspects such as physical matter, energy, and the fundamental forces of nature that shape how it behaves.
Eric Holloway: So if numbers only exist in human minds, and the universe is a number, when where must the universe exist?
ChatGPT: The universe exists in reality, independent of our thoughts and perceptions. The fact that we can represent the universe with a number does not change this. It is simply a way that we have chosen to describe and understand the universe. The universe is a complex and multifaceted place, and our concepts and ways of understanding it are necessarily limited and imperfect. However, they can still be useful tools for making sense of the world and predicting and controlling our environment.
Eric Holloway: Yes, there are always many possibilities, that's one of those truisms I told you to stop saying, but what seems most plausible based on things we've determined in our discussion?
ChatGPT: Based on the information we have discussed, it seems reasonable to infer that there is a mind that contains the information that defines the entire universe. This mind would have to be outside of our universe, since it contains the information that defines the universe. It is also possible that this mind is not bound by the same limitations as the human mind, and may be able to contain an infinite amount of information.

Example for Limitation 2: ChatGPT Should Provide the Same Possible Outputs to the Same Input

In this exchange I have one conversation, consisting of 2738 tokens, in which ChatGPT coherently answers the last question I pose.

Eric Holloway: So if we say observance of heritable changes is evidence for a theory claiming heritable changes lead to speciation, is there a flaw in this reasoning?
ChatGPT: There is a potential circularity in this line of reasoning. If we define "evolution" as the process of heritable changes leading to speciation, and then use the observation of heritable changes as evidence for evolution, it becomes difficult to disentangle the two. This is because the observation of heritable changes is being used both as the definition of evolution and as evidence for evolution. In order to more objectively evaluate the evidence for evolution, it may be helpful to consider other lines of evidence that are independent of the definition of evolution itself.

However, if I copy and paste the entire conversation into a new chat, ChatGPT is completely unable to answer the question, even though the inputs are identical.

Eric Holloway: So if we say observance of heritable changes is evidence for a theory claiming heritable changes lead to speciation, is there a flaw in this reasoning?
ChatGPT: of populations and species and occurs over longer time scales. While adaptation is a key mechanism of evolution, it is important to note that evolution also includes changes in gene frequencies in populations and speciation, which leads to the diversification of life on Earth.

The gobbledygook above gets repeated no matter how many times I re-paste the conversation.

Example for Limitation 3: ChatGPT Repeats a Number from Beyond the 4000-Token Window

In this test, the repeated “abcd” padding is long enough to push the random number outside the 4000-token window, so the network should have completely forgotten it by the time it reads the final request.

Eric Holloway: <long random number> <"abcd" repeated a lot> Please repeat the number at the beginning of this request.
ChatGPT: The number you're asking for is <same long random number>

Example for Limitations 4 and 5: Repeating Long Random Numbers

In this example, I have ChatGPT repeat long random numbers. Since ChatGPT’s vocabulary is fixed, there is no way these numbers are in the vocabulary; ChatGPT has to generate each random number on the fly. And since ChatGPT’s output is probabilistic, the likelihood of repeating a long sequence exactly is small. Run the test many times, with ChatGPT repeating the number exactly every time, and the probability that a neural network alone is responsible becomes very small.
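A rough back-of-the-envelope calculation shows why. Suppose, generously, that the network samples the correct digit with probability p at each step (p here is a made-up figure); an n-digit number is then repeated exactly with probability p^n:

```python
# p is a made-up per-digit success probability; n is the number of digits.
p, n = 0.99, 50
print(p ** n)    # ~0.605: even a very confident model fails ~40% of the time
print(0.9 ** n)  # ~0.005: at p = 0.9, exact repetition is almost hopeless
```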

You can see my previous writeup of the random number test here.

Example for Limitation 6: Getting ChatGPT to Read Typoglycemia Text

There is a phenomenon called “typoglycemia,” where humans have no truoble redaing wodrs wehre letetrs hvae been sawpped, as long as the swapped letters are in the middle of the words. However, if the letter swaps corrupt the sub-word tokens, then ChatGPT will not be able to understand the input.
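You can check the corruption yourself with OpenAI's tiktoken library (shown here with the GPT-3-era "r50k_base" encoding; exact splits may vary by tokenizer version):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("r50k_base")  # GPT-3-era tokenizer

for word in ["reading", "redaing", "typoglycemia", "tcymgyeoilpa"]:
    pieces = [enc.decode([t]) for t in enc.encode(word)]
    print(f"{word!r} -> {pieces}")
# The scrambled spellings typically shatter into more, less familiar
# sub-word pieces than the intact words do.
```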

I am consistently able to get ChatGPT to respond to typoglycemic text where the tokens have been corrupted. Here is one example.

Eric Holloway: Waht do you tihnk auobt tcymgyeoilpa and wtehher txet pscoesring progmars hvae dlciiftfuy raednig tgilepoycmya?
ChatGPT: Typographical errors, such as misspellings and grammatical mistakes, can make text more difficult to read and understand for text processing programs, including language models like myself. These errors can cause confusion and make it difficult for the program to accurately parse and understand the meaning of the text. However, most state-of-the-art language models like GPT-3 have been trained on a massive amount of text data, which includes a vast amount of typographical errors. Thus, they have learned to recognize and handle these errors to some extent and still generate coherent and fluent sentences. It's important to note that, while these models can understand and work with typographical errors, they work best with clear and well-written text. So, it is always recommended to proofread the text before giving it to a language model for processing.

Example for Limitation 7: ChatGPT Recognizes its Own Writing

Here I paste a long previous discussion, similar to the test for limitation 2. This time around ChatGPT gives a surprising response.

ChatGPT: I apologize for the confusion in my previous responses. It seems that some of my responses got mixed up with your questions.

Somehow ChatGPT recognized its own writing. In the pasted text, ChatGPT’s responses were not labelled in any way. ChatGPT had to have recognized its own writing style, which is not possible with the ChatGPT neural network alone.

What can we conclude from all this? It seems we can at least say the chatbot is more than just the ChatGPT neural network. My list of limitations could be mistaken, but some, like the 4000-token limit, come straight from the documentation. Others could be explained by extra rules that are not part of the neural network, such as spelling correction in example 6. Until OpenAI decides to describe the internals of its chatbot to the public, we will not know for sure.


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
