Mind Matters Natural and Artificial Intelligence News and Analysis
Abstract landscape with the Tower of Babel in dramatic light. Digital art illustration.

The New Tower of Babel

The old ways of arguing and understanding each other are on the decline, if not on life support.

We all know Babel (no, not the language learning company). It’s in Genesis. The Biblical story about God making so many languages and dialects and (let’s add) opinions that no one could understand each other or effectively communicate. One legacy of the triumph of digital technology and AI in every corner of our existence is that we’ve recreated this Babel. Let me try to unpack this, and bear with me if it seems I’m saying something derogatory about one belief or another — my aim is to avoid that game and try to explain the mechanism, the social and cultural story, by which our new Babel is ascendant, and the old ways of arguing and understanding each other are on the decline, if not on life support.

Start with an oldie but goodie: the old war between scientific materialists and folks with traditional religious notions, like immaterial minds (think: souls) given or designed by a god, or more to the point, a Judeo-Christian God. That was an orienting debate for decades, nay, centuries. But we’ve Babel-ed it. We’ve Babel-ed it good. As we’ll see, it’s not just that debate either. More and more, it seems it’s reasoned debate itself.

Some housekeeping, if you’ll oblige. I have many friends and former colleagues who are traditionally religious. One of my former editors is an Orthodox Jew. Other writers and friends are Evangelical Christians. I’ve written for outwardly secular organizations with thinly disguised religious aims, and I’ve written at least a couple of articles — long ago — for quasi-religious journals. I was in Paris last year having lunch with a former editor, a secular-ish Jew as far as I can tell but broadly sympathetic to traditional Judeo-Christian values and objectives. He mooted the Tower of Babel idea as a symbol of this age. We went on to talk about LLMs, which he thought would morph into AGI at some point (his take confused me, coming from him, which I suppose made his Tower of Babel point; I thought he would be sceptical of AGI, or at least of the notion that LLMs would acquire general intelligence). Cut to the chase: our new Babel is a symptom of an underlying condition. The condition is simple: technology won. Our new Culture War isn’t a war of ideas but a series of increasingly arbitrary and belligerent political skirmishes. It’s “binary” still — one side against the other — but less concerned with serious thought. It’s what my people believe, my tribe, and what you people — your tribe — do not. Everything is political today, what David Brooks called in his wonderful longform piece in The Atlantic, “How America Got Mean” (September 2023), a “sadistic striving for domination.” That’ll Babel you!

If you are asking politics to be the reigning source of meaning in your life, you are asking more of politics than it can bear. Seeking to escape sadness, loneliness, and anomie through politics serves only to drop you into a world marked by fear and rage, by a sadistic striving for domination. Sure, you’ve left the moral vacuum — but you’ve landed in the pulverizing destructiveness of moral war.

As far as I can tell, the elephant in the room here is the triumph of digital technology and especially AI. There’s nothing evil about technology or AI — it’s the effect our latest totalistic technological existence has had on our philosophical, deliberative, and thoughtful and (yes) religious selves. If it’s evil, we’re evil. It’s just that its success in our Big Tech guise of big centralized data analysis has hastened our own cultural confusion and demise.

AI Was Once an Intellectual Touchstone

Artificial Intelligence used to be a de facto target for the religious right because it clearly represented one side of a well-defined and cherished age-old debate, between the Daniel Dennetts of the world and the C. S. Lewises (to bridge a few generations). Between Godless scientific materialists and their silly idea about AI, and soulful religious folks, who know that human minds are God-given and therefore inherently special and superior. As far as I can tell, the old religious arguments about AI were both substantive (or phenomenal) and inferential. They were about both the mind or consciousness as a “thing” separate from our brains, and about the cognitive powers and limits of mechanical systems. In both cases, AI was supposed to be woefully inadequate. Not a mind. Not given by God. Silver medal at best.

Today, I think the consciousness debate is still possible, but increasingly the arguments about inferential powers seem to be slipping away. Maybe an AGI will be like philosopher of mind David Chalmers’s philosophical zombie: super smart like a human brain, but with no lights on inside. Or maybe not. My point is that we’ve already largely abandoned the inferential claim. No one wants to be on the wrong side of history. LLMs are surely mindless, but they’re also clearly chipping away at inferential claims.

To wit: warring against “AI” today in the old manner seems fruitless and more than a little stupid — and probably also hypocritical. It’s a bit like warring against cars or X-ray machines or artificial limbs. AI is part of our day-to-day existence. This is part of the secret to how the old dichotomies and reductionisms morphed into a political maelstrom and an acquiescence to yesterday’s “enemy,” an enemy that now sits on your phone and is happily ensconced in the family car, as well as your laptop. The appliances in a modern kitchen now likely feature “AI.” Warring against it is going to become a problem.


He [it] is everywhere!

Here we see the first pivot toward the New Political. Technology is the ground we’re all standing on. The new war must be against someone or some group (not an idea!) who is screwing something else up, presumably by yakking about issues and venting rage and anger for causes we’re suspicious of or don’t support. The new Babel requires a new enemy. “Mechanism” is too abstract today, and, cynically, debating it gets too few likes online. That tribe over there isn’t abstract — they’re chanting “Death to Israel” on the Columbia campus. Welcome to the new world. Welcome to the new Babel. It’s now almost impossible to speak the same language, because the fragmentation into tribal battles was also occasioned by the loss of common agreement about how to determine right and wrong. By the loss of a common language, a lingua franca.

The problem with turning away from the old orienting debates about the nature of the mind or soul, the good life, and the limits of technology and cognitive science and AI is simply that political wars no longer happen explicitly at the level of ideas, but at the level of identities and groups, which among other things are great at multiplying beyond reason and control. Here’s an example. When I was in Palo Alto before Covid, there was “a wall of scepticism” — as famed entrepreneur and venture capitalist Marc Andreessen memorably put it — about vaccinations among those left of center (in Palo Alto, this means pretty much everyone). Though Trump ended up broadly supportive of Covid vaccines, the center right and the “MAGA” crowd ended up pushing an anti-vax agenda. The fine folks in Palo Alto then simply caught collective amnesia, and began a vociferous and highly condescending campaign in favor of vaccinations. Free speech got bullied, tempers flared, and the notion that, as Bill Maher put it, there’s such a thing as “the science” got so roundly politicized that Western science itself largely submerged into the murky putrescence of politics gone amok. To step into this blood-sport, napalm-your-village mess and attempt to resuscitate talk about — what? — scientific materialism would also get shouted down or ignored. What’s the point?

Here’s what happened. Politics just swallowed religion. It just swallowed science. And it just swallowed you.

Unfortunately, Technology Is to Blame

I sometimes get accused of giving digital technology too much credit (or blame) for Babel-ing our culture. I don’t think that’s true. It’s almost impossible to imagine such an about-face in our society, in the span of a decade or two, if we weren’t now essentially living online and cohabiting with digital technology and the web, and with this once-suspect, atheistic-materialist idea of “AI.” I don’t have access to a counterfactual of course (wormhole, anyone?), but reflect with me on how much changed so quickly, and on how nearly every upheaval, discussion, trend, and violation of the law is now some major online discussion with tribes throwing feces and spears at each other, amplifying disagreements into yelling matches without a shared notion of truth. Reflect on all that. How can it not be traceable back to the digital technology revolution that has defined the millennium so far?

Telegram Just In, Sir. It Says “Duck.”

Last night I watched a documentary on the Watergate scandal. During the hearings, the public took to sending telegrams to committee members and other politicians voicing their disgust with Nixon’s prevarications and increasingly obvious obstruction. They demonstrated. They made phone calls. They talked to an eager press. But through all of it, there was a shared notion of truth — even Nixon’s replacement for special counsel (the first one had actually demanded the tapes), who was supposed to be supportive of the President, soon abandoned him in the face of truth and facts — and a shared sense of outrage. In our Babel-world today, we have endless battles vying for truth, to the point where it seems to stop mattering what truth really is. I have a hard time believing the pro-Palestinian Gen Z protestors chanting antisemitic slogans and pitching tents on the once-respected grounds of Columbia University have much of a grasp of the history of that conflict, or have thought very deeply about the sort of values Hamas openly espouses. Hamas isn’t really bullish on free speech, feminism, Jews, Americans, or non-Muslims.

Back to tech: if we had to use Western Union to send hateful telegrams or pay exorbitant long-distance fees to criticize our leaders, the tech wouldn’t seem so larger-than-life — so part of us — and perhaps we’d be more disposed to Socratic dialogues in classrooms and centuries-old discussions about souls and heaven and all the rest. Digital technology, by my lights, is a smoking gun here. Technology is to blame — Churchill famously quipped that we shape our buildings, and then they shape us — but that’s like saying cars are to blame for car crashes, or that suburbs are to blame for middle-class boredom. We’re in the car. We’re in the suburb. We’re using “AI” and all the rest. Back to AGI.

AGI Still Gives Religious Folks a Queasy Feeling. Culture Says? Get Over It.

I was on the phone recently with a friend whom I know to be a good guy and an Evangelical Christian. He’s a smart, educated guy and is up to speed on computational and AI issues. We were discussing my second book project, and he was making the point that people wouldn’t use LLMs if the hallucination problem were so severe that their responses weren’t generally reliable. Eventually, we made our way to the “scaling hypothesis,” a fancy term for the idea — promoted by OpenAI and the rest of Big Tech — that scaling up LLMs is a path to AGI. It’s not just Big Tech: AI enthusiasts everywhere frequently if not universally make this claim. For various reasons, I don’t find the claim believable (yes, I will write about my reasons in another post!). I argued that the “cognitive” architecture of neural nets using transformers did seem to show emergent intelligent properties — somehow “knowing” grammar is a good example — but that the ability of such systems to “think” at the level of propositional thoughts, goals, and plans is clearly limited. Something about the approach seems fundamentally wrong, or at least incomplete.

I added, in a kind of peroration, that if I were fortunate enough to happen upon the formula for true AGI — whatever it is — I’d feel pretty special and grateful that Fortuna shone her light on me as an AI scientist. What’s the point of working on AI if you don’t want it to get smarter, solving problems that our systems today still can’t manage? We don’t see aeronautical engineers working on fuel-efficient designs but deliberately stopping short. “Leave something for tomorrow, Bob. Don’t solve it!” Makes no sense, right? I commented cheekily that I didn’t suppose “I’d go to hell” for scientific innovation. The problem is actually innovating. Inventing. (I should add here that my idea of achieving AGI is sans machine consciousness or even motivations, emotions, or desires. AI to me is a game of making more powerful inference.)

Here’s the point. Religious conservatives as a group are vastly more likely to be skeptical of AI futurism, and at least anecdotally I can tell you that, historically, they’ve been rooting for the other team all along — humans, us. Not AGI. That’s broadly my position, too, but it’s not in the Paul Bunyan and his Babe the Blue Ox way. We’re living with our technology, and increasingly it’s doing stuff that we can’t. In a simple sense, it’s a good thing — Bunyan would be awesome with a new Stihl MS 500i chainsaw (I suppose today it’d be an electric model). It’s complicated, of course, because as I’ve said many times before, Dataism as a religion, data-driven centralized AI, the unholy alliance between Big Tech and government, social media dysfunction, and all the rest are not “Paul Bunyan” points. They’re points about stuff “going to the devil,” as Dostoevsky once wrote (his point was about the replacement of the human will with science, a slightly different but related worry).

But here, again, is the point: all the “going to the devil” concerns or accusations about digital technology and the web are traceable mostly to advances in AI. The point I’m making is about the powers of AI — not about whether it increases suicide rates — but AI drives the web (see my addendum up top). That’s the argument I’m talking about, at least foundationally. That ground has shifted, our ideas have shifted, and our discussions are fragmented where there were once clear lines in the sand. AI was a sideshow before, a fun way to get at different theories of mind and views of science. Now it’s the show.

Let’s take one more pass at the original thesis: the old ways were, I think, the good ways, but no matter, because they’re largely gone. They’re gone even among folks who made a living talking about them. They’re disappearing even — it seems to me, I have no polling data — in the Judeo-Christian space. David Brooks, in the piece I mention above, claims that “Evangelicalism used to be a faith; today it’s primarily a political identity.” Sounds bad. Sounds like more Babel is coming. How do you articulate a reasoned position with “political identity” as a starting point?

As might be expected, my friends and colleagues who are right of center have largely turned to Trump as presidential timber, but there’s nothing traditionally religious about Trump, and by my admittedly outside-looking-in lights (I skipped the last two elections) he makes former President Clinton and the Monica Lewinsky scandal seem rather tame: “Oh, it was just that one intern? Gotcha. Just not a celebrity porn star or….” But delving into politics would certainly blunt my point, that everything is now politics, including philosophy and science, and that this new orientation was created by and is form-fit to the new AI-powered digital online world we all live in. Tribal outrage is the new Bertrand Russell versus C. S. Lewis. Russell would bore everyone to tears — he might actually go camp with the protestors at Columbia, so perhaps not — and Lewis wouldn’t get enough likes if he wasn’t anti-vax and a MAGA supporter, and if he insisted on talking about the problem of evil rather than the evil Democrats.

The old dogs wouldn’t understand our world, and would get lost in our Babelly shuffle online. Alan Turing might bone up on a few decades of AI innovations, and understand transformers and LLMs — he was a mathematical genius, after all. But Turing thought a conversational AI would learn like a person did, getting instructed somehow, not by optimizing an objective function over endless (stolen?) data. He thought such an AI would eventually become mind-like, which is why traditional religious thinkers found the idea incompatible with their worldview, and threatening. Turing, of course, was an atheist, and when he debated the theistic scientist Michael Polanyi, their polite arguments, while sometimes technical, would not fall prey to political-action-committee language or a Babel-like mix-and-match of ideas not thoughtfully considered. No one would gin up outrage and try to cancel Turing or Polanyi. We used to learn this way, and the culture understood it needed to respect both sides, and listen. It doesn’t work this way anymore, and technology this century played a major role.

Brooks is right. We’re in the moral vacuum of endless political warfare.

Maybe, too, the techno-futurists were right. Maybe we really are losing to the machines. Maybe that’s what our new Tower of Babel world is trying to tell us. What better than endless episodic, emotive “data” — text, images, and sound — for our new techno-wonder world? What good is a thoughtful treatise?

Cross-posted at Colligo, Erik’s Substack.


Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.
