Fearing the Terminator: Does Current Tech Warrant the Doomsaying?
People will worry less if they understand why text generation programs not only do not think but in fact cannot think.

As I noted yesterday, the new book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (2025), by Eliezer Yudkowsky and Nate Soares, has garnered a lot of attention, mostly positive, from many well-known and well-respected commentators.

To restate their core arguments:
1. Modern AI — such as ChatGPT — is grown, not crafted.
2. When you grow something, you lose control over what it becomes.
3. Lastly, these “alien minds” will develop their own “desires” and “goals” which, like any good organism, they’ll seek to fulfill, most likely killing us all as collateral damage.
Their conclusion (3) rests, in part, on the first two premises: because an AI is grown, its makers lose control over it. It's worthwhile, then, to look closely at the technology. Are modern AIs grown, as they assert, or is there something else going on?
What is AI?
First, there is no such thing as a single, simple “AI.” Rather there is a suite of technologies, loosely related to one another, that fall under that heading. And, even when the technology is nearly identical, the specific instances and their uses are not.
For example, the technology underlying the identification of faces in photos is similar to that which enables voice recognition. Both, in some sense, detect a well-defined pattern — a face or a command — in a sea of input. The same technology is useful for recognizing (in a somewhat different sense) various medical conditions, given a set of symptoms, images, and the like. Product recommendation software is similar: given past purchase history, what might be likely future purchases?
Each of these applications, however, is specifically built, trained, and tuned for that one use. There are no “universal” AIs. The machines that pick out faces will fail at medical diagnostics. And an application which is useful in medical diagnosis won’t recognize your voice command.
If you flip these uses upside down, in a sense, you get generative AI. ChatGPT is possibly the most widely known example. Face, voice, and medical recognition applications take in a collection of data — the image, the sounds, the symptoms — and return one specific result — a face, a command, or a diagnosis. Generative AI, by contrast, takes in — loosely speaking — a command (or prompt) and returns a collection of data (the text output). As with the other applications, there is no one generative AI; OpenAI alone publishes multiple models, some of which it has trained to do well in math or science contests.
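To make that inversion concrete, here is a minimal sketch contrasting the two shapes of input and output. The function names and return values are hypothetical placeholders, not any vendor's actual API:

```python
from typing import List

# Recognition-style AI: a large collection of data in, one specific result out.
# (Hypothetical placeholder functions, for illustration only.)
def recognize_face(pixels: List[float]) -> str:
    """An image (thousands of numbers) goes in; a single label comes out."""
    return "face_id_42"  # a trained classifier would compute this

# Generative AI flips that: one short prompt in, a collection of data (text) out.
def generate_text(prompt: str) -> str:
    """A short command goes in; paragraphs of generated text come out."""
    return "Once upon a time..."  # a trained language model would compute this

print(recognize_face([0.0] * 10_000))
print(generate_text("Tell me a story."))
```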
Longstanding problems
The longstanding problems in AI are two: How do we collect the knowledge needed to fuel the AI? And how do we (efficiently) get an AI to correctly use that knowledge? Expert Systems — to use an example popular in the 1980s — sought to extract knowledge from experts in a field, encode that knowledge into rules, and then apply those rules to solve problems. Expert Systems are an example of crafted AI: Engineers built the AI. Crafting AI has, largely, not worked well. It turns out we don’t know what we know and encoding what we know efficiently is really hard. These failures led to an AI winter lasting until the early 2000s.
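For a sense of what "crafted" AI looked like, here is a toy rule-based sketch in the spirit of 1980s Expert Systems. The rules and facts are invented for illustration; real systems contained thousands of hand-encoded rules:

```python
# A toy "expert system": knowledge is hand-crafted as explicit if-then rules,
# and a simple inference loop applies them until no new conclusions emerge.
RULES = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "high_risk_patient"}, "recommend_doctor_visit"),
]

def infer(observed_facts):
    """Forward chaining: fire any rule whose conditions are all satisfied."""
    facts = set(observed_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "high_risk_patient"}))
# The crafted rules yield 'possible_flu' and then 'recommend_doctor_visit'.
```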
Two changes, brought on by the internet, led to a thaw and thus to where we are now. First, we learned how to bring thousands, even tens of thousands, of machines to bear on a single problem, such as web search. Second, the internet became a rich source of data, both images and text, on nearly every topic. These changes enabled the resurgence of a technique, first developed decades earlier and since much improved and refined: neural networks.
Engineers do not grow a neural network

Neural networks start by knowing nothing. The values they contain — the parameters, of which models like GPT-4 or GPT-5 may have over a trillion — begin as uncorrelated, random numbers. Engineers then train the model through a process that slowly tweaks those values until they reflect correlations found in the training data.
Engineers do not grow an AI; the training algorithms extract correlations from the training data. Modern neural networks are pattern catchers. They catch patterns (e.g., the subject-verb-object structure of an English sentence), patterns of patterns (e.g., the structure of an English paragraph), and patterns of patterns of patterns (e.g., an author's voice).
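A minimal sketch of that idea, shrunk down to a single made-up parameter and toy data (a real network has billions of parameters and far more elaborate update rules), shows the basic shape of training: start random, then nudge the value toward whatever correlation the data contains:

```python
import random

# Toy "model": one parameter w, predicting y from x as y ≈ w * x.
w = random.uniform(-1.0, 1.0)  # starts as an uncorrelated, random value

# Toy training data with a hidden correlation: y is always 2 * x.
data = [(x, 2.0 * x) for x in range(1, 6)]
learning_rate = 0.01

for epoch in range(200):
    for x, y in data:
        error = w * x - y               # how far off the current prediction is
        w -= learning_rate * error * x  # tweak w to better reflect the data

print(round(w, 3))  # ends near 2.0: the parameter now encodes the correlation
```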
Since text generative models like ChatGPT are the most familiar, let’s explore their process a bit more. What we’ll discover, though, applies to any modern neural network-based AI.
Text, tokens, and meaning
If you believe that engineers train text generative AI on raw text, you're only half-right. Text, scoured from the internet and other sources, does lie at the bottom. But that is not what the networks "see" or extract correlations from.
Human text, like this article, consists of words, sentences, and other structures. While you could train a network on whole words, it's doubtful that the network would work well. Our words vary too much in form while preserving a good deal of meaning.
Consider simple plurals: "word" vs. "words." To a computer trained on whole words, those are simply two different words, with no connection between them. A lot more data is needed to extract the relationships that draw such terms together.
At the other extreme, you could train a network using characters: W, o, r, d. But that clearly would not yield meaning without more data than humanity has ever produced.
Text generative AI, as a result, gets trained on tokens: short strings, typically three or four characters long. Tokens are a useful compromise between the two extremes. Programmers do not choose tokens randomly; they extract them from text representative of the full training data. Without going into detail, the winning tokens are those that best fit the data.

What is important to remember is this: Tokens are not (necessarily) words. They are building blocks useful for breaking down text.
This matters because words, not tokens, carry meaning. The word "therapist" means something that its tokenization into ["thera", "pist"] lacks. (That tokenization is only an example; the one ChatGPT actually uses is likely different.) The point remains: Tokens are (by and large) meaningless; words are meaningful.
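Here is a toy sketch of tokenization, using an invented vocabulary and a simple greedy longest-match rule. Real tokenizers learn their vocabularies from representative text (for example, via byte-pair encoding) and will split words differently:

```python
# Invented vocabulary, for illustration only; real vocabularies contain
# tens of thousands of learned tokens.
VOCAB = {"thera", "pist", "word", "s"}

def tokenize(text):
    """Greedily match the longest vocabulary entry at each position."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):  # try the longest piece first
            piece = text[i:i + length]
            if piece in VOCAB:
                tokens.append(piece)
                i += length
                break
        else:
            tokens.append(text[i])  # unknown character: keep it as-is
            i += 1
    return tokens

print(tokenize("therapist"))  # ['thera', 'pist'] -- pieces, not a meaningful word
print(tokenize("words"))      # ['word', 's']
```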
Trading correlations for meaning
If a text generative AI only "sees" tokens, then how does it capture "meaning"? It doesn't. Text generative AIs never capture meaning as we understand it. They do capture relationships that approximate meaning closely enough to be useful.
How does this work?
At the start of training, a text generative AI associates each token (e.g., "thera") with a very long string of numbers. Think of those numbers as identifying a location, like coordinates on a map, but with many more dimensions (not something we can really visualize). Training entails refining those coordinates so that tokens which correlate with one another end up "nearby." Multi-layer models extract not just semantic correlations (e.g., that "dogs" and "cats" are household pets) but structural correlations as well (e.g., the subject-verb-object structure of an English sentence noted above).
The working assumption — which mostly works when you have sufficient data — is that correlations carry a semblance of meaning. They can never carry true meaning, if for no other reason than that tokens, themselves, are often meaningless.
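Here is a sketch of the "nearby" idea, with invented three-dimensional coordinates standing in for the hundreds or thousands of learned dimensions a real model uses:

```python
import math

# Invented 3-D "coordinates" for a few tokens. In a real model these values
# are learned during training, not assigned by hand, and there are far
# more dimensions.
EMBEDDINGS = {
    "dog": [0.9, 0.8, 0.1],
    "cat": [0.85, 0.75, 0.15],
    "carburetor": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Closer to 1.0 means the two points lie in more similar directions."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(EMBEDDINGS["dog"], EMBEDDINGS["cat"]))         # ~1.0: "nearby"
print(cosine_similarity(EMBEDDINGS["dog"], EMBEDDINGS["carburetor"]))  # ~0.3: far apart
```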
Gathered, not grown
This process is what Yudkowsky means when he states that modern AIs are "grown." But is he correct? He is correct in stating that engineers do not know the "meaning" of the numbers populating a trained AI (the resulting parameter values the model leans on to generate output). We cannot identify which parameters and paths through a complex text generative AI led to a particular output. He is, however, confused when he speaks of "meaning" and of AIs as being "grown."
Engineers do not grow text generative AIs. They guide the populating of a set of internal values through a well-defined process that gathers (extracts) correlations from the textual training data. It is impossible, by definition, for a model to gather meaning, since the models rely on units (tokens) that themselves eschew meaning. All a model can collect is correlations between tokens. It will put dogs and cats near one another without "knowing" anything about dogs, cats, or animals in general.
Without meaning, there is no mind; there is no "reasoning." There are only "this is near that" numerical correlations. These machines are not artificially intelligent; they have embedded intelligence scraped from humanity's output.
But what if a model were to use words, which carry meaning, instead of tokens? Could it then develop into a mind? We'll look next at why Yudkowsky and others, including many Accelerationists, believe that AIs could develop full-fledged minds, like our own or better than our own.
Here’s the first part of my look at the arguments in this thought-provoking new book:
Fearing the Terminator, Missing the Obvious. In Part 1 of my review of the new AI Doom book, If Anyone Builds It, Everyone Dies, we look at how the authors first developed the underlying idea. By 2020, authors Yudkowsky and Soares were already Doomers, but the rapid success of ChatGPT and similar models heightened their worries.