Mind Matters Natural and Artificial Intelligence News and Analysis
Image credit: Thapana_Studio, Adobe Stock

What happens when people uncritically use AI-generated content?


At his Substack, AI analyst Gary Marcus notes the growing popularity of the term “botshit” to describe the result. Yes, it’s a vulgarity, but the frustration is understandable:

Defined as “the human use of untruthful LLM-generated content,” it was the subject of a recent research paper. From the Abstract:

Advances in large language model (LLM) technology enable chatbots to generate and analyze content for our work. Generative chatbots do this work by predicting responses rather than knowing the meaning of their responses. In other words, chatbots can produce coherent-sounding but inaccurate or fabricated content, referred to as hallucinations. When humans uncritically use this untruthful content, it becomes what we call botshit. This article focuses on how to use chatbots for content generation work while mitigating the epistemic (i.e., the process of producing knowledge) risks associated with botshit.

Timothy R. Hannigan, Ian P. McCarthy, André Spicer, “Beware of botshit: How to manage the epistemic risks of generative chatbots,” Business Horizons, Volume 67, Issue 5, 2024, Pages 471–486, ISSN 0007-6813, https://doi.org/10.1016/j.bushor.2024.03.001.

Marcus offers some examples, including:

Item 2: Six weeks ago I was railing about lawyers submitting briefs with hallucinated cases.

Then things got worse: …

Item 3. Of course it’s not just fake law. Yesterday Axios (pretty pro-AI on the whole) reported that much-venerated o3 hallucinated up a too-plausible looking blend of truth and bullshit in a financial report…

(The bots aren’t thinking when they hallucinate. They generate plausible-sounding nonsense automatically when they don’t have the facts.)

The term botshit may be derived from enshittification, which Gary Smith discussed here at Mind Matters News last year in “The Flea Market of the Internet: Breaking the Addiction”:

When, after a bad experience, I called Amazon the “Walmart of the Internet,” a friend pointed out that Amazon is, in fact, much worse than Walmart. Internet-based businesses tend to follow a life cycle in which quality deteriorates over time. Writer Cory Doctorow calls the process “enshittification.”

The AI revolution is not turning out as boosters hoped. But it is probably turning out the only way it could.


