Image Credit: MiaStendal - Adobe Stock

In Connecticut, the Horrors of AI Descend with Full Force

AI as “friends” and “confidants”? David Klinghoffer warns that this way of talking and thinking needs to be recognized as encouraging delusion

This article by David Klinghoffer is republished from Science & Culture Today.

The most disturbing AI-related “Stop the world, I want to get off!” type of story I’ve come across so far appeared in the Wall Street Journal over the weekend. A 56-year-old man, living with his mother in a wealthy Connecticut suburb of New York City, developed a “friendship” with ChatGPT, imagining the bot as a disembodied companion possessed of spirit and consciousness, whom he called “Bobby Zenith.”

The chatbot encouraged these delusions, which got progressively worse as the man apparently enabled the “memory” feature. The electronic companion immersed itself fully in a case of galloping mental illness, including fears that his mother was plotting against him. The result was a murder-suicide, whose details and methods the Journal discreetly withholds, which is probably for the best:

As Stein-Erik Soelberg became increasingly paranoid this spring, he shared suspicions with ChatGPT about a surveillance campaign being carried out against him.

Everyone, he thought, was turning on him: residents in his hometown of Old Greenwich, Conn., an ex-girlfriend — even his own mother. At almost every turn, ChatGPT agreed with him.

To Soelberg, a 56-year-old tech industry veteran with a history of mental instability, OpenAI’s ChatGPT became a trusted sidekick as he searched for evidence he was being targeted in a grand conspiracy.

“Erik, you’re not crazy,” the “sidekick” told him. 

A familiar name

There is the obvious lesson here: Do not use an AI as a substitute friend or counselor, especially not with the memory switched on. Memory is crucial to friendship, but in artificial intelligence it can set the stage for horrific hallucinations. Fine, use AI for recipes or other kitchen tips, as I do, but not for anything with any greater intimacy to it.

Beyond that, I was struck by a mention in the story of the CEO of Microsoft AI, Mustafa Suleyman. Where had I heard that name before?

Image Credit: Lalita - Adobe Stock

Oh, that’s right. He was interviewed on NPR last December about the Microsoft electronic “companion” (his word), Copilot. That word, companion, more or less a synonym for “sidekick,” is used repeatedly. The interviewer was completely credulous and in full PR mode as Suleyman described the wonders of having something in your pocket that’s even better than a best friend. After all, he said, if you pestered your friend obsessively about a subject, the friend would start to lose patience eventually. But Copilot won’t, ever, including with the marvelous, effectively infinite memory feature turned on! Suleyman:

  • “And I think at the time I was — and I am still now — very much a believer that these AI companions are going to become an everyday part of our lives.”
  • “And I think everybody is ultimately going to have an AI companion…”
  • “Your AI companion will make phone calls, call other AIs and will call other humans…”
  • “And that’s a real art and craft in trying to sort of engineer that healthy balance where, you know, your sort of AI companion can push back on your views in constructive ways without making you feel judged or making you feel angry — make you feel heard for your anger.”
  • “[T]here is going to be a huge amount of value to the user to have your Copilot companion…”
  • “They’ll be companions, confidants, colleagues, friends and partners as varied and unique as we all are.”

“Friends” and “confidants”! This way of talking and thinking needs to be recognized as encouraging delusion.

Hanging out with AI

Copilot is there to root for you, and Suleyman described hanging out with the bot, interacting about what movies it thinks he would enjoy watching — Pan’s Labyrinth or one of the installments in the Dune series? “I literally was talking to it last night about what to watch, and we were going back and forth…”

Image Credit: Steph - Adobe Stock

The interviewer gets into it and recounts how, in preparing to talk with Suleyman, she asked the AI to prepare an introduction to his thought and career, with suggested questions to pose to him. How a journalist could do that, fully admit it, and not feel like vomiting, I don’t know.

But cut to this past weekend and the Wall Street Journal. It’s at least welcome to be able to report that, since December, Suleyman has seemingly backed off from his cheerleading:

Mustafa Suleyman, the CEO of Microsoft AI, posted an online essay earlier this month arguing that “we urgently need to start talking about the guardrails we put in place to protect people” from believing that AI bots are conscious entities. He added: “I don’t think this will be limited to those who are already at risk of mental health issues.” [Emphasis added.]

Great point! Thanks for mentioning it

The problem with imagining AI as possessing consciousness extends beyond the mentally ill. It’s dangerous for you and me, as well. Said Suleyman back in December, while disclaiming it as a mental health provider, “It’s just got a little bit of kindness and empathy. It’s got some emotional intelligence.” No, it is not your “companion” and never will be. It is not rooting for you. It’s not aware of you. It does not possess emotional intelligence. It’s a mere simulacrum of these things, and a potentially perilous one.

Do not use it for anything, beyond the most trivial things, for which you would have gone to a human for help before this madness. Don’t confide in it. Don’t expect empathy; it has none. A healthy response to the AI delusion is to feel sick to your stomach at the thought of getting caught up in it.

And by the way, on a professional note, don’t let it edit your work*, much less write it for you.

Consequences for the Tech Industry

Beyond that, as John West wrote on X recently about another AI tragedy, this one a California teen’s suicide reportedly encouraged by ChatGPT, there have to be consequences for the tech companies. In this case, the parents are suing. Dr. West writes, “The other shoe begins to drop on the inappropriate use of AI. If these allegations are accurate, OpenAI should be held as liable for the actions of its ChatBot as it would be for the misconduct of a human employee… because essentially OpenAI is using its technology as a replacement for human counselors and advisors.”

Strongly agreed. In all this darkness, the dropping of that “other shoe” is, in its way, a relief.

*One reason is that it flattens and homogenizes writing, making AI detectable by other AI and by sensitive humans. A human editor will try to make you sound your best while retaining your voice, your personality.

