Mind Matters Natural and Artificial Intelligence News and Analysis
Party Background with lights, confetti, balloons and serpentine

Could chatbot goofs be developed into a party game?


Chatbots’ wild ride through attempts at understanding human language is providing lots of fun copy for tech writers. At Futurism, tech writer Noor Al-Sibai offers a gem: she asked the Gemini bot to tell her who she is married to:

For example, I’m not currently married. But when I asked Gemini, it had a confident answer: my husband was someone named “Ahmad Durak Sibai.”

I’d never heard of such a person, but a little Googling found a lesser-known Syrian painter, born in 1935, who created beautiful cubist-style expressionist paintings and who appears to have passed away in the 1980s. In Gemini’s warped view of reality, our love appears to have transcended the grave.

It wasn’t a one-off hallucination. As the WSJ’s AI editor Ben Fritz discovered, various advanced AI models — he didn’t say which — told him he was married to a tennis influencer, a random Iowan woman, and another writer he’d never met.

“If You Ask AI Who You’re Married To, You May Spit Out Your Coffee,” February 12, 2025

No one seems to have programmed the bot to say, “Look, I have no idea; I don’t get out much. Can I interest you in hockey playoff predictions?”

And then there’s the matter of making up answers, which Al-Sibai also looked into:

Roi Cohen and Konstantin Dobler, a pair of doctoral candidates at Germany’s Hasso Plattner Institut, posit in their recent research that the issue is simple: AI models, like most humans, are reluctant to say “I don’t know” when asked a question whose answer lies outside of their training data. As a result, they make stuff up and confidently pass it off as fact.

The Hasso Plattner researchers say they’ve devised a way to intervene early in the AI training process to teach models about the concept of uncertainty. Using their methodology, models not only can respond with an “IDK,” but also seem to give more accurate answers when they do have the info.

Like with humans, however, the models that Cohen and Dobler taught uncertainty sometimes responded with an IDK even when they did know — the AI version of an insecure schoolchild who claims not to know an answer when called upon in class, even when they do.

“Even the Most Advanced AI Has a Problem: If It Doesn’t Know the Answer, It Makes One Up,” February 12, 2025
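The research described above teaches models during training to recognize uncertainty; a crude way to picture the idea is a system that abstains when its confidence in an answer falls below a threshold. The sketch below is only an illustration of that abstention concept, not the Hasso Plattner researchers’ actual method, and the questions, answers, and confidence scores are invented for the example.

```python
# Toy illustration of confidence-based abstention (NOT the actual
# Cohen/Dobler training method). A "model" is just a dictionary mapping
# questions to (answer, confidence) pairs; confidences are made up.

IDK = "I don't know"

def answer(question, knowledge, threshold=0.7):
    """Return the stored answer, or IDK when confidence is too low
    or the question lies outside the 'training data' entirely."""
    candidate = knowledge.get(question)
    if candidate is None:
        return IDK  # nothing known at all: abstain instead of inventing
    text, confidence = candidate
    return text if confidence >= threshold else IDK

# Hypothetical knowledge base for demonstration.
knowledge = {
    "capital of France": ("Paris", 0.99),
    # a hallucination-grade guess: low confidence, so the model abstains
    "who is Noor Al-Sibai married to": ("Ahmad Durak Sibai", 0.12),
}

print(answer("capital of France", knowledge))                # Paris
print(answer("who is Noor Al-Sibai married to", knowledge))  # I don't know
print(answer("hockey playoff predictions", knowledge))       # I don't know
```

As the quoted passage notes, the trade-off is real even in this toy: set the threshold too high and the system starts saying “I don’t know” to questions it could have answered correctly.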

You know, if Al-Sibai doesn’t get there first, a smart techie could invent and patent a party game, with accessories, based on bizarre chatbot answers. It might be a fun icebreaker for parties where we don’t really know each other very well but must all pretend we are having a great time. Scratch that; we would have a great time with this successor to venerable party games.
