Mind Matters Natural and Artificial Intelligence News and Analysis
Stressed computer engineer reading code, cgi data and making mistake while analyzing special effects charts while working late at night. Confused, worried and anxious web developer discovering a hack
Image Credit: peopleimages.com - Adobe Stock

Heavens! Are You an AI Denialist?

Accusations of “AI denialism” smear thoughtful critics as mentally disordered

“Denialism” isn’t the most brutal insult in circulation, but it’s one of the most useful—useful to the person wielding it, anyway. Label somebody a “denialist,” and suddenly you don’t have to grapple with the other person’s views. You just point, declare victory, and move on.

That groan-worthy tactic works because most people don’t know what denialism means. Nobody wants to be a denialist; it sounds like a moral failing.   

AI Critics: Both Grief-Stricken and Wrong

A piece at BigThink.com grabbed my attention with its title: “The rise of AI denialism.” Penned by Louis Rosenberg, CEO and Chief Scientist at Unanimous AI, the article frames the battle as “AI denialists” vs. people who are smart about AI. Rosenberg plays a psycho-semantic game, however, that guarantees the two warring camps will always talk past each other.

Assembling a definition of denialism from Merriam-Webster, the Cambridge Dictionary, and Wikipedia:

Denialism is the systematic rejection of established facts, evidence, or consensus, often in favor of unfounded or fabricated claims. It goes beyond simple skepticism by actively refusing reality, even when overwhelming proof exists.

Rosenberg alleges people are engaging in AI denialism by systematically denying the existence, truth, facts, validity, or consensus about AI, despite overwhelming proof. The piece names not one such human, but links to an external article quoting a co-founder of OpenAI, maker of ChatGPT, who admitted problems with a new version of the bot.

Rosenberg purports to cite pundits saying chatbot outputs are “AI slop,” that “the era of rapid AI advancement is ending,” that “AI scaling has hit the wall,” and the AI boom is just another “tech bubble.” No human is cited for these remarks.

The article’s clever trick is to treat “AI denialism” as a psychological coping mechanism. Rosenberg draws upon the “five stages of grief” framework, commonly described as denial, anger, bargaining, depression, and acceptance. He argues society is “collectively entering the first stage of grief — denial” because people fear losing their “cognitive supremacy” to machines.

Of course, denial as a human reaction exists. People downplay scary realities all the time. But notice the subtle shift. Denialism (rejecting established facts despite strong evidence) became denial (a broad psychological response to discomfort). Those aren’t the same thing. Differing views alone don’t prove one side is mentally disordered.

What Exactly Is Being “Denied”?

If someone said, “AI doesn’t exist,” that absolute statement would be denialism. Other denialism examples would be: “LLMs are fake; nobody uses them,” or “AI systems never improve.”

But that’s not how most AI critics argue. Most skeptical criticism points to messy realities:

Disagree with those claims; demolish them if you can. But calling them “denialism” just dodges the real work. Charging “denialism” prejudices readers against the critics instead of addressing the critics’ actual positions. In logic, that’s called “poisoning the well.”

Rhetoric War Befogs Serious AI Worries

Claiming that an opposing view is denialism starts a rhetoric war. That’s the worst part of the tactic. Instead of exchanging ideas, the discussion becomes “us vs. them.” Then, because research grants and investment money are at stake, the two sides become political. They quit working together to address issues such as AI ethics and safeguards against AI criminality.

With apologies to Gresham’s Law: If “denialism” becomes the focus of debate, then petty arguments will drive good arguments out of consideration. That must not happen in this perilous century of AI ascendancy.


Richard Stevens

Fellow, Walter Bradley Center on Natural and Artificial Intelligence
Richard W. Stevens is a retiring lawyer, author, and a Fellow of Discovery Institute’s Walter Bradley Center on Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. Holding degrees in computer science (UCSD) and law (USD), Richard practiced civil and administrative law litigation in California and Washington D.C., taught legal research and writing at George Washington University and George Mason University law schools, and specialized in writing dispositive motion and appellate briefs. Author or co-author of four books, he has written numerous articles and spoken on subjects including intelligent design, artificial and human intelligence, economics, the Bill of Rights and Christian apologetics. Available now at Amazon is his fifth book, Investigation Defense: What to Do When They Question You (2024).
