
AI-Lawyers Will Have Fools for Clients
When tech prophets promise everything—except accuracy.
Read More ›

Lazy lawyers have no doubt filed hallucinated content orders of magnitude more often.
Read More ›
Even though the users knew they were interacting with a computer, many were convinced that the program had human-like intelligence and emotions, and they happily shared their deepest feelings and most closely held secrets.
Read More ›
Post-training might stabilize the responses to specific inquiries but then the bot can just go off the rails again, as my tests showed.
Read More ›
The data deluge exponentially increases the number of coincidental, useless statistical patterns — so the probability of useful patterns approaches zero.
Read More ›
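The multiple-comparisons arithmetic behind that teaser can be illustrated with a short simulation (the sample size of 30, the |r| > 0.36 cutoff, and the variable counts are my illustrative assumptions, not figures from the article): correlating ever more pure-noise series against one target produces ever more coincidental "significant" patterns, while none of them is useful.

```python
import random
import statistics

def pearson(x, y):
    """Plain Pearson correlation, written out to keep the sketch dependency-free."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(0)
n_obs = 30
target = [random.gauss(0, 1) for _ in range(n_obs)]

# Count "discoveries": pure-noise series whose correlation with the target
# clears |r| > 0.36, roughly the 5% two-sided cutoff for n = 30.
results = {}
for n_vars in (10, 100, 1000):
    noise = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]
    results[n_vars] = sum(abs(pearson(v, target)) > 0.36 for v in noise)
    print(n_vars, "variables ->", results[n_vars], "spurious 'significant' correlations")
```

Roughly 5% of the noise variables clear the cutoff by chance, so the count of coincidental patterns scales with the size of the data, exactly the deluge the teaser describes.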
OpenAI needs to show that ChatGPT is more than just the first publicly available LLM. It has not done that and maybe never will.
Read More ›
When considering a mortgage, the correct comparison is not total payments but the return on the borrowed money versus the loan’s annual percentage rate.
Read More ›
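The comparison that teaser describes can be sketched numerically (the $100,000 loan, 4% APR, 7% return, interest-only structure, and 10-year horizon are illustrative assumptions, not figures from the article): keeping the loan costs more in total payments, but what matters is whether the return on the borrowed money beats the loan's APR.

```python
# Illustrative sketch: keep a $100,000 loan at a 4% APR and invest the cash
# at an assumed 7% annual return, versus paying cash up front.
loan = 100_000
apr = 0.04          # loan's annual percentage rate (assumed)
ret = 0.07          # annual return on the money kept invested (assumed)
years = 10

# Interest-only framing keeps the arithmetic transparent: each year the
# borrower pays 4% interest on the principal but earns 7% on it in the market.
invested = loan
interest_paid = 0.0
for _ in range(years):
    invested *= 1 + ret
    interest_paid += loan * apr

net_gain = invested - loan - interest_paid
print(round(net_gain, 2))  # → 56715.14
```

Total payments are $40,000 higher with the loan, yet the borrower finishes about $56,700 ahead, because the 7% return exceeds the 4% APR. That is the teaser's point: compare rates, not total payments.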
A real-world example of such struggles is the poor performance, in general, of AI-powered mutual funds.
Read More ›
Bigger data centers are not going to change the harsh reality that LLMs are prone to generating unreliable blather.
Read More ›
Claims that an LLM is intelligent don’t square with the fact that it may answer a question one way and, a few seconds later, simply contradict itself.
Read More ›
The conventional belief that a sound investment portfolio should be 60% stocks and 40% bonds does not stand up to careful scrutiny.
Read More ›
Human trainers can nudge LLMs to give good answers for specific questions but this training doesn’t prepare them for subtle variations.
Read More ›
Social media addictions clearly cause harm among minors but the tech giants that profit from them won’t regulate themselves in any meaningful way.
Read More ›
The fundamental problem remains: models like GPT-5 are hobbled by their inherent inability to relate the text they input and output to the real world.
Read More ›
The test also shows GPT 5.0’s inclination to praise a user’s acuity, whether the user’s comment is correct or incorrect, intelligent or dumb.
Read More ›
In an era where politicians, celebrities, and businesses can get away with blatant untruths with little or no consequence, will the same be true of LLMs?
Read More ›
It’s no mystery why LLMs aren’t intelligent in any meaningful way. The real mystery is why so many otherwise intelligent people still take the claims seriously.
Read More ›
LLMs excel at simple coding tasks but are still too unreliable to use without extensive human supervision on complex tasks where mistakes are expensive.
Read More ›
Many Sci Foo attendees were excited by LLMs in education but I fear that — when the way they are used is considered — they will increase social inequality.
Read More ›
The job market for recent college grads may be warning us that participation trophies for college and ChatGPT use have long-run costs.
Read More ›
It is increasingly recognized that, without extensive post-training, LLMs’ authoritative answers are often remarkably bad.
Read More ›