LLMs Are Bad at Good Things, Good at Bad Things
LLMs may well become smarter than humans in the near future, but not because these chatbots are becoming more intelligent.

In 2011, Jeff Hammerbacher, an early Facebook employee, lamented that, “The best minds of my generation are thinking about how to make people click ads. That sucks.” It is now even worse, much worse.
It is increasingly clear that ChatGPT and other large language models (LLMs) will not boost worker productivity enough to justify the massive valuations of the companies behind them. LLMs are too unreliable to be trusted in situations where wrong answers carry substantial costs; for example, medical, legal, financial, athletic, business, and political advice.
The fundamental problem is that LLMs are dumb. Not knowing how words relate to the real world, they cannot judge the reliability of conflicting assertions and are ill-equipped to seek relevant details or gauge uncertainty.
On the other hand, LLMs may be able to generate unseemly profits in the same way that tobacco and pharmaceutical companies profited from peddling addictive cigarettes and opioids.
Undermining education
Students have learned that they can use the text-generating prowess of LLMs to write papers and answer homework and test questions. Never mind that the papers may be marred by untruths supported by fictitious references and that the homework and test answers are sometimes incorrect. It is still much easier to rely on an LLM than to attend class, read books, and do assignments. Teachers, too, are now using LLMs to construct their syllabi, lectures, and assignments and do their grading for them. We are rushing toward a world in which schools are little more than teacher chatbots interacting with student chatbots.
At what cost? Schools are supposed to develop critical thinking and communication skills. Students are assigned essays on literature, history, economics, and more, not because their future jobs will require them to regurgitate factoids about Jane Austen, the French Revolution, or monetary policy, but because writing those essays exercises their brains and builds those very skills.

The takeover of education by LLMs will thoroughly undermine those goals. Cutting and pasting will replace thinking and writing. The creators of LLMs surely know this, yet they pretend it is not so.
OpenAI’s Sam Altman has argued that “our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need” and that “the difference between classroom education and one-on-one tutoring is like two standard deviations – unbelievable difference.” One-on-one tutoring is great, but Altman knows very well that this is not what he is selling.
For example, OpenAI recently offered students two months of free access to ChatGPT Plus, between March 31, 2025, and May 31, 2025. If the intended use were tutoring, the offer would have begun earlier in the year, when classes started. Instead, the offer began when papers and final exams were due, with the obvious intention of hooking students on using ChatGPT to write their papers and take their tests. Other LLM companies made similar offers, with xAI explicitly promoting two months of free access to SuperGrok and wishing students well with “Good luck on your final exams.”
LLM companies will reap profits while students addicted to LLMs will not develop critical thinking and communication skills. LLMs may well become smarter than humans in the near future but not because LLMs are becoming more intelligent.
The replacement of human friends with fake friends
The use of Facebook, Instagram, and other social media is highly addictive and strongly linked to a variety of mental health issues, including poor self-esteem, body-image problems, isolation, depression, unwanted advances, and bullying. A Facebook whistleblower testified that the company’s executives touted Facebook as building community even while their own internal research showed strong links between social media usage and negative mental health outcomes, particularly for teenage girls.
Social media is polluted by bots pretending to be humans in order to promote products or spread disinformation. The next step is that people choose to interact with AI friends — personalized LLM bots that are unapologetically bots. Unlike real people, who are so often flawed, grumpy, and disappointing, bots are always there — relentlessly cheerful and ready to chat and comfort and be romantic if requested.
Altman has praised AI friends and lauded the fact that many people “don’t really make life decisions without asking ChatGPT what they should do.” Ouch! LLMs know nothing of the real world; their advice is based on statistical text patterns and should not be trusted unless the answer is obvious (and, sometimes, even when it is obvious). Their advice may be not only ill-informed but catastrophic.
For example, a Greek woman filed for divorce because a ChatGPT reading of a photo of coffee grounds in her husband’s cup indicated that he was unfaithful. It has also been reported that LLMs have encouraged sexual promiscuity, self-harm, violence, and suicide. No doubt, thousands (millions?) of others have made dumb decisions because a dumb LLM told them to.
Personalized AI friends are not only dangerous but far more addictive than social media. Even worse, as people become attached to and dependent on their AI friends, they become less interested in their fellow humans. Some addicts spend most of their waking hours chatting, confiding, and flirting with LLMs.
A few months ago, Elon Musk argued that a disinterest in the well-being of others was a virtue, not a vice: “The fundamental weakness of Western civilization is empathy.” To the contrary, a fundamental weakness of Western civilization is greed.
Profits over probity
The barefaced denials by LLM promoters remind me of tobacco company executives denying that smoking is unhealthy or addictive, despite plenty of evidence to the contrary. Ditto for companies peddling opioids.
LLM hypesters surely know that they are enriching themselves by capitalizing on unhealthy addictions. Shame on them.