1960s Chatbot Pioneer Was Right to Worry About What He’d Created
AI “companions” may be dumbing us down intellectually and emotionally

At ZME Science, Mihai Andrei reminds us of ELIZA, the first chatbot, built in the 1960s. Created by MIT professor Joseph Weizenbaum (1923–2008) between 1964 and 1966, it was fairly simple by today’s standards:
“I feel rotten.”
“Why do you feel rotten?”
And so forth. But the remarkable fact is that at least some users thought they were talking to an intelligent agent. Weizenbaum, understandably, became concerned about that.
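To see just how simple it was, here is a minimal sketch in Python of the pattern-match-and-reflect trick that ELIZA-style programs rely on. This is illustrative only: the original program was written in MAD-SLIP and used ranked keyword decomposition rules, and the patterns and canned responses below are assumptions for demonstration, not Weizenbaum’s actual DOCTOR script.

```python
import re

# Swap first- and second-person words so the echoed fragment reads naturally.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(fragment: str) -> str:
    """Flip pronouns in the captured fragment ("rotten" stays "rotten")."""
    words = fragment.lower().split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

# A couple of decomposition/reassembly rules, in the spirit of the DOCTOR
# script (these specific patterns are made up for illustration).
PATTERNS = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
]

def respond(sentence: str) -> str:
    sentence = sentence.strip().rstrip(".!")
    for pattern, template in PATTERNS:
        match = pattern.match(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # a content-free fallback when nothing matches

print(respond("I feel rotten."))    # -> Why do you feel rotten?
print(respond("The sky is blue."))  # -> Please go on.
```

No understanding is happening anywhere in that loop; the program simply recognizes a surface pattern and echoes part of the user’s own words back as a question. Yet that was enough to fool some users.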
Code recovery
The occasion of Andrei’s story is the recovery and restoration of ELIZA’s original source code, described in a paper by Rupert Lane et al. It is a complex story of searching through the archives and junk piles of software’s early years.
He concludes:
ELIZA is a piece of computation history. She existed before the term “chatbot” existed. But this is a bit more than just a piece of history. She embodied ideas (symbolic reasoning, interactive computing, psychological modeling) that laid the groundwork for modern AI. And she exposed the human impulse to project feelings onto machines. Weizenbaum himself became a vocal critic of such projections, warning of “the computer as a psychological tool.”
Now more than ever, with AI truly taking off, it’s time to think about what ELIZA is. She seems very simple now but was enthralling 60 years ago. She seemed human but was, of course, just an algorithm. What does that say about our current AI systems?
“A Team of Researchers Brought the World’s First Chatbot Back to Life After 60 Years,” May 22, 2025
Pomona College economics professor Gary Smith is in a good position to answer that question. As he said here earlier today, chatbots may now be contributing to a dumbing down of the population in both intellectual and emotional ways:
Students have learned that they can use the text-generating prowess of LLMs to write papers and answer homework and test questions. Never mind that the papers may be marred by untruths supported by fictitious references and that the homework and test answers are sometimes incorrect. It is still much easier to rely on an LLM than to attend class, read books, and do assignments. Teachers, too, are now using LLMs to construct their syllabi, lectures, and assignments and do their grading for them. We are rushing toward a world in which schools are little more than teacher chatbots interacting with student chatbots.
…
The use of Facebook, Instagram, and other social media is highly addictive and strongly linked to a variety of mental health issues, including self-esteem, body image, isolation, depression, unwanted advances, bullying, and addiction. A Facebook whistleblower testified that the company’s executives touted Facebook as building community even while their own internal research showed strong links between social media usage and negative mental health outcomes, particularly for teenage girls.
Social media is polluted by bots pretending to be humans in order to promote products or spread disinformation. The next step is that people choose to interact with AI friends — personalized LLM bots that are unapologetically bots. Unlike real people, who are so often flawed, grumpy, and disappointing, bots are always there — relentlessly cheerful and ready to chat and comfort and be romantic if requested.
“LLMs Are Bad at Good Things, Good at Bad Things”
Weizenbaum was right to worry. Not all drugs are chemicals.
You may also wish to read: LLMs are bad at good things, good at bad things. LLMs may well become smarter than humans in the near future but not because these chatbots are becoming more intelligent. As people become attached to and dependent on their AI friends, they become less interested in their fellow humans. (Gary Smith)
Here’s the paper: Rupert Lane et al., ELIZA Reanimated: The world’s first chatbot restored on the world’s first time sharing system, arXiv (2025). DOI: 10.48550/arXiv.2501.06707