Fearing the Terminator, Missing the Obvious
In Part 1 of my review of the new AI Doom book, If Anyone Builds It, Everyone Dies, we look at how the authors first developed the underlying idea.
If you were to survey the movers and shakers of Silicon Valley on the future of Artificial Intelligence (AI), you could lump the answers into two broad, somewhat overlapping categories: Accelerationists and Doomers.
Accelerationists — such as Elon Musk and Sam Altman — believe that achieving Artificial General Intelligence (AGI), or better, Artificial Superintelligence (ASI), could usher in a utopia we should race toward as quickly as possible and with minimal hindrances (e.g., regulations).

Doomers — such as Max Tegmark, the authors of the AI 2027 report, and Geoffrey Hinton (the “Godfather” of AI) — on the other hand, fear that AGI, and especially ASI, could easily result in widespread harm to humanity. This “dark cloud” view doesn’t mean that Doomers believe we should abandon the pursuit of AGI or ASI. No, they just want us to head into that future with our eyes open, doing all we can to prevent tragedy. Well, most Doomers want that.
And this brings us to the new book by Eliezer Yudkowsky and Nate Soares, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (2025). They believe that if we don’t stop screwing around with machine intelligence, doom is sealed for every last one of us.
A lot of odd ideas float around the Valley that the world is best off ignoring. (For example, one notion is rejuvenating oneself by getting a blood transfusion from someone younger.) So, why pay any attention to either the Accelerationists or the Doomers? Specifically, who are these new guys and why does their book matter?
Let’s first back up a bit…
Computer science has walked hand-in-hand with the pursuit of artificial intelligence (a term coined by 1950s computer scientist John McCarthy) since the development of ENIAC (the first general-purpose, programmable, electronic digital computer). Those scientists saw an eerie parallel between then-prevalent views of the human brain — as a collection of switches that fire on and off — and their new machines.
From a materialist perspective, the mind is a by-product of the brain. Thus, the belief that we could create an artificial mind, a new form of machine-based intelligence, felt natural. And easy. So easy that the original researchers dedicated only a summer to the project. The project’s proposal read:
We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
They failed. But serious AI research began at that point.
AI progress ebbed and flowed over the years, entering multiple “AI winters,” because the problem proved hard — much, much harder than computer scientists once believed.
AI broke free of its most recent winter (at least in the public’s mind) when IBM’s Watson program won at Jeopardy in 2011. AlphaGo followed with its own 2015 victory in Go — a game whose vast number of possible moves makes it far more challenging for computers than, say, chess. Ken Jennings, the record-setting Jeopardy champion who lost to Watson, famously summed up the growing belief in AI: “I, for one, welcome our new computer overlords.”
Real progress in the field
Some of the techniques that enabled these victories offered practical benefits as well. They help us identify faces in photos, watch for credit card fraud, guide product suggestions, and drive (well, sort of) our cars. Even Apple, often a laggard in these areas, sees value in embedding AI into its devices.
So, who is Eliezer Yudkowsky and where does he fit in?
Silicon Valley runs on personality as much as technology. And Yudkowsky is obviously a personality. Though he never completed high school or college, Yudkowsky received funding in 2005 from Peter Thiel for his Machine Intelligence Research Institute (originally the Singularity Institute for Artificial Intelligence). His co-author, Nate Soares, is now the Institute’s President. By way of a series of influential online essays, Yudkowsky is also prominent in the Rationalist community, which is thought to influence many of the Valley’s founders. (Incidentally, the Rationalist community is also interwoven with Effective Altruism, which was promoted by Sam Bankman-Fried, who is currently serving a 25-year prison sentence for fraud. But that’s another story.)
The problem of aligning AI with human values
Yudkowsky’s initial goal for the institute was to accelerate “progress towards smarter-than-human AI.” Soon, however, he became convinced of the necessity of ensuring that an AGI or ASI would align with “human values.” Thus, research at the institute shifted to what’s now known as the AI Alignment Problem (AAP).
Put simply, how do we ensure that the AIs we build will be for us and not against us? After all, if the AI is a lot smarter than we are and we cede to it all kinds of control (or it just takes control), it’d be nice if it were, in a sense, on our side.
His concerns fueled his essays (at the core of the Rationalist community), research, and workshops aimed at solving the problem. But he grew increasingly pessimistic that we could solve it in time, especially as (it appeared to him) progress in AI was accelerating to the point that AGI, and possibly ASI, was coming into view.
By 2020, Yudkowsky and the Institute had completed the shift from Accelerationist to Doomer. Then the sudden and rapid success of generative AI, such as ChatGPT, exacerbated their worries.
But why? What had changed?
And here we get to the core argument of Yudkowsky and Soares. Skipping over a lot of detail, we can summarize it in three observations:
- Modern AI is grown, not crafted. When we craft something, we know how it works; when we grow something, especially the mind-numbingly complex architectures of generative AI, we have no clue what’s really going on.
- When you grow something, you lose control over what it becomes. For example, they point to how evolution could have gone in another direction depending on different, random events. So, too, could grown AI. We can try to steer an AI in preferred directions, but, ultimately, we have no control over what it becomes. Guaranteed alignment with our values is not possible.
- Lastly, these “alien minds” will develop their own “desires” and “goals,” which, like any good organism, they’ll seek to fulfill. It’s not so much that the Terminator will hate us as that it just won’t care; we’ll be collateral damage.
Their argument has won praise from many well-respected voices, including Fiona Hill (former senior director, White House National Security Council), Jon Wolfsthal (former special assistant to the President for national security affairs), and Ben Bernanke (Nobel laureate and former chairman of the Federal Reserve), among others. Not everyone praises the book, but it has pushed the conversation about our AI-driven future to the fore.
But does their argument merit the attention? Is it a good argument? Even more important: what if it’s the wrong conversation? Have they engaged, like a street magician, in misdirection, making us look at one thing when, really, we should be looking elsewhere?
In this series, we’ll walk through these questions, examining their assumptions, the evidence they give, and the conclusions they draw. When we’re done, we’ll have developed a perspective that — while not satisfying either the Accelerationists or the Doomers — will guide us in making sound judgments about this technology, both its use and its potential.
Next: Are modern AI models “grown” or is something else going on?
