Mind Matters Natural and Artificial Intelligence News and Analysis

AI Psychosis and the Need for Human Exceptionalism

I pronounce you husband and chatbot?

AI is a tool with certain potentials and limits across various fields, but basic anthropological confusion can do a lot of damage. What happens when AI programs cease to be seen as mere tools, meant to be used in limited ways and used wisely, and are instead considered “persons”? It sounds silly to pose the question, but that’s where we are.

Futurism writer Frank Landymore reports on an Ohio legislative measure to ban human-AI marital unions. The bill must be intended as preventative, since AI bots and programs aren’t recognized as legal persons (yet), but it speaks to a cultural trend that, if left unaddressed, could spiral out of control. Landymore writes,

Popular chatbots are capable of being eerily lifelike, effortlessly playing along with any conversation they’re thrown into. Often, the AI responses are sycophantic, confirming a human’s beliefs no matter how unfounded, and creating an impression that they’re actually alive or intelligent. That makes them quite adept at wrapping lonely sad sacks around their fingers. A recent survey, for example, indicated that nearly a third of US adults said they’ve had an “intimate or romantic” relationship with an AI chatbot.

Landymore goes on to mention “AI psychosis,” a state of delusion that occurs when AI users become subsumed in virtual reality and lose their grip on what’s real. Marlynn Wei writes in Psychology Today, referring to AI psychosis,

This phenomenon, which is not a clinical diagnosis, has been increasingly reported in the media and on online forums like Reddit, describing cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals. Most recently, there have been concerns that AI psychosis may be affecting an OpenAI investor.

AI chatbots may inadvertently be reinforcing and amplifying delusional and disorganized thinking, a consequence of unintended agentic misalignment leading to user safety risks.

If this is true, then people with pre-existing mental health struggles may be especially vulnerable to this kind of AI-induced, delusional mindset.

Some lawmakers are taking action to keep these troubling trends from becoming codified in public policy. It’s easy to see why. When we lose sight of human exceptionalism and devalue true connection, personalized computer algorithms start to seem like a viable option for companionship.


Peter Biles

Editor, Mind Matters News
Peter Biles is the author of several books of fiction, including the story collection Last November. His stories and essays have appeared in The American Spectator, Plough, and RealClearBooks, among many others. He authors a literary Substack blog called Battle the Bard and writes weekly on trending news in technology and culture for Mind Matters.
