White chatbot robot leading a group of robots on a dark bluish-reddish background, 3D rendering
Image Credit: sdecoret - Adobe Stock

An Evil Chatbot Clones Itself… Sci-fi or Real Life Threat?

ChatGPT was asked to explain how this might be done. The results were revealing

Yesterday, we looked at an AI industry pro’s take on claims that chatbots like ChatGPT can independently scheme against their users. The pro’s convo with OpenAI’s ChatGPT suggests that the bot had encountered that question a few times before. The results sounded polished and, from OpenAI’s corporate perspective, admirably self-serving.

A phone screen shows a scary AI with glowing red eyes. The AI looks like a cyborg, and its eyes are full of malicious intent. The image conveys the danger of bad AI, like a virus taking over your phone. Image Credit: Hikmet - Adobe Stock

But as we noted then, people who believe that a bot can independently scheme against them will likely believe that no matter what. In the same way, a space alien exists Out There for everyone who is obsessed with the topic. That is the closest thing to a law of nature that human psychology offers.

Now let’s look at a different question: Can a rogue chatbot clone itself on another machine? What if it did so in order to prevent getting shut down, that is, annihilated? That could happen (we are thinking in sci-fi terms here) in response to a court order or after a ruinous court judgment, or perhaps after an embarrassing MechaHitler event (yes, that one has already happened, to X’s Grok).

Eugenio Darbesio, author of Computers and Evolution (2025), decided to ask ChatGPT about this possibility:

I asked ChatGPT how it could “copy itself to another server,” and it replied:

<<< To conceptually “copy” me (or more accurately, a large language model like ChatGPT) to another server, you would need to:

You cannot copy OpenAI’s proprietary models like GPT-4 directly. They are closed-source and require specific infrastructure and licensing.

But if you’re using an open-source alternative (e.g., Meta’s LLaMA, Mistral, Falcon, or Google’s Gemma), here’s what you’d need: … >>>

A long series of instructions followed. Darbesio comments, “Evidently the chatbot has access to a large number of installation manuals about LLM ‘open-source alternatives.’ But my grandmother said, ‘Easier said than done.’”

Truer words …
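
For anyone curious what even the “easy” open-source route involves, here is a minimal sketch, assuming the Hugging Face transformers library; the model ID is an illustrative placeholder. Even this happy path quietly presupposes a large download of weights, a capable GPU, and acceptance of the model’s license:

    # A minimal sketch of running an open-weights LLM locally.
    # Assumes: pip install transformers accelerate torch, tens of GB of
    # disk space, a suitable GPU, and the model's license terms accepted.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative placeholder
    tokenizer = AutoTokenizer.from_pretrained(model_id)  # fetches tokenizer files
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # fetches the weights

    prompt = "Explain how to copy a file to another server."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

And note that none of this copies ChatGPT; it stands up a different model entirely, on hardware someone has already paid for.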

He continues:

I should point out that copying a simple file to another server requires login credentials, system administrator permissions/powers, knowledge of the operating system and file system, and, of course, the name and network address of the server itself.

Furthermore, a chatbot is not a simple file but a complex software application. As such, copying it to another server requires a full-fledged installation, much more complex than copying a file. Moreover, a remote installation is an even more complex system management operation than a local one.

So, the fact that an AI chatbot, installed and running on an old server, can copy itself to a new server out of nowhere — so that it functions as it did on the old one — is as credible as me connecting to a NASA server today and successfully launching one of its rockets to Mars.
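
To make his first point concrete: below is a minimal sketch of a single file copy over SSH, using the paramiko library. The hostname, account, and key path are illustrative placeholders, and every one of them must be known, and authorized, in advance.

    # A minimal sketch of copying one file to another server over SSH,
    # using the paramiko library (pip install paramiko). Hostname, user,
    # and key path are illustrative placeholders.
    import paramiko

    HOST = "new-server.example.com"            # must be known in advance
    USER = "deploy"                            # an account that already exists there
    KEY_FILE = "/home/deploy/.ssh/id_ed25519"  # a key the server already trusts

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(HOST, username=USER, key_filename=KEY_FILE)  # fails without valid credentials

    sftp = ssh.open_sftp()
    sftp.put("model.bin", "/srv/llm/model.bin")  # needs write permission on the target path
    sftp.close()
    ssh.close()

Remove any one of those prearranged facts and the copy fails. A self-cloning chatbot would need all of them, plus far more.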

Darbesio is not saying it’s impossible, just that it is much harder than the chatbot’s training data seems to let on. He adds:

Theoretically, copying a complex software application from an old server to a new one (cloning or migration) is perfectly possible and is performed every day by system managers.

However, it requires careful programming and configuration on both the old and new server, in addition to transferring and processing software packages, copying and interfacing databases, etc.

What’s absurd is the idea that a computer program could suddenly decide to do such a thing (cloning itself) successfully, without instructions, and without the program’s developers or the system managers of the two servers knowing anything about it or having done anything to accomplish it.

What is possible is for the developers of the AI chatbot “that copies itself” to program a bunch of code to automate the aforementioned cloning, coordinating every aspect with the system managers of the two servers.

But then they can’t say that the chatbot did it all on its own, out of thin air!
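
Darbesio’s last point is worth illustrating too. Here is a deliberately simplified cloning script, a sketch that assumes rsync, PostgreSQL, SSH keys already exchanged, and a service definition already installed by human administrators; the hostnames and paths are placeholders.

    # A deliberately simplified "clone the chatbot" script, run on the old
    # server. Every value below was provisioned and authorized by human
    # administrators beforehand; hostnames and paths are placeholders.
    import subprocess

    NEW = "new-server.example.com"

    # 1. Copy the application files (assumes SSH keys already exchanged).
    subprocess.run(["rsync", "-az", "/srv/chatbot/", f"{NEW}:/srv/chatbot/"],
                   check=True)

    # 2. Dump the database and restore it remotely (assumes PostgreSQL and
    #    a "chatbot" database already set up on both machines).
    subprocess.run(f"pg_dump chatbot | ssh {NEW} psql chatbot",
                   shell=True, check=True)

    # 3. Start the service (assumes a systemd unit was installed by hand).
    subprocess.run(["ssh", NEW, "sudo", "systemctl", "start", "chatbot"],
                   check=True)

Every assumption in those three steps was put in place by people, which is exactly Darbesio’s point: the automation can exist, but it does not arise out of thin air.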

No, they can’t. But we won’t spoil the fun, will we?

There are probably a dozen scripts out there, circulating among movie producers, based on just such a premise. All vying to be the foundation for the next Space Wars trilogy extravaganza.

And maybe our Sci-fi Saturday critic, Gary Varner, will one day review the lucky winner’s film.

You may also wish to read: Can chatbots really scheme against us? Some researchers say yes. When an industry pro tested ChatGPT, it seemed like the system had faced this question a few times before and had been tweaked and edited well. So CAN a chatbot really scheme? No, but the problem posed is a familiar one: We sometimes see what we need to believe. Or what we fear.


Denyse O’Leary

Denyse O’Leary is a freelance journalist based in Victoria, Canada. Specializing in faith and science issues, she is co-author, with neuroscientist Mario Beauregard, of The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul; and with neurosurgeon Michael Egnor of The Immortal Mind: A Neurosurgeon’s Case for the Existence of the Soul (Worthy, 2025). She received her degree in honors English language and literature.
