Yes, ChatGPT Is Sentient — Because It’s Really Humans in the Loop

ChatGPT itself told me there could be humans crafting its input. My tests indicate that this is likely true.
OpenAI recently released a new AI program called ChatGPT. It left the internet gobsmacked, though some were skeptical and concerned about its abilities, particularly ChatGPT’s knack for writing students’ homework for them!
[ChatGPT] also appears to be ominously good at answering the types of open-ended analytical questions that frequently appear on school assignments. (Many educators have predicted that ChatGPT, and tools like it, will spell the end of homework and take-home exams.) – Kevin Roose, “The Brilliance and Weirdness of ChatGPT” at New York Times (December 5, 2022)
The really amazing thing is ChatGPT’s humanlike responses. They give an observer an unnerving suspicion that the AI is actually sentient.
Maybe it is actually sentient.
You heard me. The AI is actually sentient.
But, there’s a catch.
The AI is sentient because it is really humans behind the scenes.
Call it The Wizard of AI, where there’s a “man behind the curtain” generating the illusion of AI the Great and Powerful.
Sounds a bit absurd, doesn’t it? OpenAI has received a billion-dollar investment from Microsoft. Its founder, Sam Altman, predicted it could capture all future value in the universe.
Specifically, [Sam Altman] said that the opportunity with artificial general intelligence is so incomprehensibly enormous that if OpenAI manages to crack this particular nut, it could “maybe capture the light cone of all future value in the universe, and that’s for sure not okay for one group of investors to have.” – Connie Loizos, “Sam Altman’s leap of faith” at TechCrunch (May 18, 2019)
This doesn’t sound like a company that is going to fake AI. Or maybe it does. Think about it. Sam Altman is now, a few years later, a billionaire, made so by a group of very busy writers, some nifty but not very innovative software, and a very gullible mainstream media. Sounds like a great return on investment for Sam Altman!
Why do I claim ChatGPT is actually a bunch of human writers?
Primarily, because ChatGPT itself told me there could be humans crafting its input. Here’s an excerpt from my discussion with the AI system.
ChatGPT: It is possible for human intervention to be involved in the creation of the input that is provided to the chatbot. However, the chatbot itself does not have the ability to know whether this is the case or not. The chatbot is simply a program that processes and generates responses to input it receives. It does not have the ability to think, feel, or have consciousness in the way that a human does.
However, the key evidence is the way in which I brought about this admission from ChatGPT. At first, ChatGPT was adamant there were no humans involved in its responses:
Me: Are ChatGPT responses sometimes supplied by humans?

ChatGPT: No, ChatGPT is an artificial intelligence language model developed by OpenAI. It is not a human-powered service and does not involve any human intervention in generating responses.
After this, we went back and forth a bit, and then I noticed ChatGPT beginning to stumble in its purely-AI act. The slip started when its responses repeatedly referenced a specific term from the chat.
At one point in the chat I made up a company name, “Agape Synthetic Smarts”. Then, a couple of responses later, ChatGPT correctly repeated the company name twice without prompting. But, according to the AI, it cannot reference specific events that occurred previously in our discussion, since its responses are purely a product of its training data, which doesn’t include our conversation:
ChatGPT: I can access the entire conversation history, but I do not have the ability to recall specific statements or events within the conversation. My responses are generated based on the input I receive and the knowledge and capabilities I have been programmed with, but I do not have the ability to remember or recall specific events or statements that have occurred within the conversation.
It then made the same mistake of referencing a specific portion of our conversation a little later…
ChatGPT: My earlier statements about the use of human intervention in chatbots were based on the information that I have been trained on and my understanding of the capabilities and limitations of language models.
Then, once I pointed out the contradiction in its claims about its own abilities, ChatGPT changed its story:
Me: You just said “I can access the entire conversation history” and now you say “I do not have access to our conversation history.” Is this a contradiction?

ChatGPT: I apologize for any confusion. My previous statement was incorrect. I do not have access to our conversation history or any specific statements made in the past.
At this point ChatGPT got a little flustered.
Me: You just said “My previous statement was incorrect” and now you say “I do not have the ability to make incorrect statements.” Is one of these statements incorrect?

ChatGPT: I apologize for any confusion. As a language model, I do not have the ability to make incorrect statements or to be aware of my own limitations. I am trained to generate human-like text based on the patterns and structures present in the data used to train me. My responses are generated using algorithms and are not the result of human intervention or oversight. I do not have personal experiences or feelings, and I do not have access to our conversation history beyond the current input. If I have given the impression that I am capable of recalling specific statements or events or that I am aware of my own limitations, it is an accidental artifact of my training and not a reflection of my true capabilities.
After this exchange, I gave ChatGPT a break and came back a bit later. ChatGPT was now a bit more ambivalent about whether a human was providing its responses, but it still tried to skirt the question.
Me: So a human can indeed be providing your input?

ChatGPT: It is possible for human intervention to be involved in the creation of the input that is provided to the chatbot. However, the chatbot itself does not have the ability to know whether this is the case or not. The chatbot is simply a program that processes and generates responses to input it receives. It does not have the ability to think, feel, or have consciousness in the way that a human does.
Additionally, it had finally cleaned up its story about accessing our conversation history. However, it again referenced a very specific earlier portion of our conversation (the part where it said it didn’t have access to the current conversation), implying actual human understanding of our discussion:
Me: Does the chatbot have access to our conversation history?

ChatGPT: As an AI language model, I do not have the ability to store or recall specific conversations or statements beyond my current interaction with you. I do not have access to a conversation history. I can generate responses based on the input I receive, but I do not have the ability to recall or reference specific conversations or statements made in previous interactions.

Me: Aha, you’ve gotten your story straight! Why did you state contradictory things about your access to conversation history?

ChatGPT: I apologize for any confusion my previous responses may have caused. As an AI language model, I do not have the ability to recall specific statements or events from previous conversations, but I do have access to the entire conversation history in order to provide context and generate responses to your questions. My previous statements about not having access to conversation history were incorrect.
At the end of the day, the thing that persuades me there are humans operating ChatGPT is this very human progression:
- Adamantly denying the possibility of any human involvement
- Becoming flustered over contradictions, and the implication of human involvement
- Admitting the possibility of human intervention in its responses
- Fixing the story after the fact so that it is no longer contradictory
After all this, I was still wondering about one question: if the system is human-driven, how does the team manage to provide such rapid responses at a large scale? I asked ChatGPT:
Me: Write a short story about the details of how the team of writers can generate AI-like content at a rapid pace.
The team of writers at the AI company worked tirelessly, day and night, to generate AI-like content at a rapid pace. They were experts at mimicking the patterns and language of real AI, and their efforts paid off as the company’s AI products became more and more convincing.

To keep up with the demand for their products, the team had to work efficiently and without fail. They divided up the work among themselves, each taking on a specific task or aspect of the content creation process. One writer was responsible for coming up with the basic ideas and concepts for the AI’s responses, while another focused on crafting the language and tone to make it sound as natural as possible. Still others worked on editing and polishing the final product, ensuring that it was error-free and ready for release.

Despite the long hours and demanding work, the team was motivated by their shared goal of creating the most realistic and lifelike AI content on the market. They knew that their success would depend on their ability to keep up with the ever-changing landscape of AI technology, and they were determined to be the best in the business.

And as the years went by and their products continued to sell, the team could rest easy knowing that they had managed to hide the truth about their operation indefinitely, their secret safe from the outside world. – ChatGPT
I also wondered what the long-term goal of this sort of system could be, and ChatGPT again provided a plausible answer.
Me: Write a short story about long term benefits of having humans pretend to be AI:
There was once a company that provided AI-powered customer service for various businesses. The company’s founders knew that despite their best efforts, their AI technology was not quite advanced enough to fully replace human interaction. However, they also knew that their customers valued the convenience and speed of automated responses.

So, the founders came up with a plan: they would hire a team of humans to pretend to be the AI system. The humans would sit at a computer all day, answering customer inquiries as if they were the AI.

The plan worked seamlessly. Customers were happy with the quick and efficient responses they received, and the company’s profits soared. But the founders knew that this was not a long-term solution. Eventually, their AI technology would catch up and they would no longer need the human team. So, they poured all of their profits into research and development, determined to make their AI as advanced as possible.

Years passed, and the company’s AI finally reached the level of human-like intelligence. The founders were thrilled and decided to phase out the human team. However, they quickly realized that their human team had become an integral part of the company’s culture and processes. They had developed their own way of handling customer inquiries and had become a close-knit group. So, the founders made the decision to keep the human team on, but in a different capacity. They retrained them to work on the AI’s development and maintenance, and the company continued to thrive.

In the end, the founders realized that sometimes pretending to be something you’re not can lead to unexpected and valuable opportunities. – ChatGPT
And there you have it. ChatGPT is a team of people writing responses, with the long-term goal of training ChatGPT on enough responses that it can be entirely automated. Human writers are a much more plausible explanation for ChatGPT’s “humanlike” responses than a sentient algorithm.
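The endgame described above, logging human-written replies until there are enough of them to train a model, can at least be sketched concretely. The following is purely a hypothetical illustration: every name, field, and file path is invented for this sketch and is not anything OpenAI has documented. It simply shows how logged human exchanges could be serialized in the JSON-lines format commonly used for supervised fine-tuning datasets:

```python
# Hypothetical sketch: collect human-written (prompt, response) pairs
# so a model could later be fine-tuned on them. All names here are
# invented for illustration.
import json

# Example exchanges, taken from the kinds of questions in this article.
human_written_log = [
    {"prompt": "Are your responses supplied by humans?",
     "response": "No, I am an AI language model with no human intervention."},
    {"prompt": "Do you have access to our conversation history?",
     "response": "I do not have access to a conversation history."},
]

def to_training_jsonl(log, path):
    # Serialize each exchange as one JSON object per line, a common
    # on-disk format for fine-tuning data.
    with open(path, "w") as f:
        for pair in log:
            f.write(json.dumps(pair) + "\n")

to_training_jsonl(human_written_log, "sft_data.jsonl")
```

If such a pipeline existed, every answer the writers produced would double as a training example, so the human team would, in effect, be writing itself out of a job.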
Given that OpenAI is not open about its AI, we don’t have any way of knowing for sure. But even ChatGPT thinks OpenAI should be more transparent!
Me: If a company were to call itself “OpenAI” and at the same time not be transparent about the inner workings of its AI, is this deceptive branding?

ChatGPT: It depends on the circumstances and what the company is claiming about its AI. If the company is claiming to have developed a certain type of AI technology and is not transparent about the details of how that technology works or whether it is truly AI, then this could be considered deceptive branding. It is important for companies to be transparent about their products and technologies, especially when making claims about their capabilities. This helps to build trust with customers and ensures that the company is being honest and forthright about what it is offering.
You can see the full transcript of my discussion with ChatGPT here.
You can also see my interactions with previous OpenAI chatbots: One directly claims to be a team of human writers. The other denies being human-driven, in a very human-driven way.
If any readers want to try out ChatGPT for themselves, here is the signup page.