Big Tech’s Recent Big Admission: Chatbots Are Not Very Reliable
Here in Part 2, I discuss Stanford prof Jeremy Utley’s hype video in the context of that and other sobering facts. This article is republished in two parts from That Doug Smith with the author’s permission. Part 1 is here.
Chatbot maker OpenAI had to roll back a recent update to ChatGPT’s GPT-4o model because customers were creeped out by its sycophancy. The company went too far in its persuasive design. But it’s okay, they assured us. They rolled it back, and they are “revising how we collect and incorporate feedback to heavily weight long-term user satisfaction and we’re introducing more personalization features.”
They want us to trust them because they are committed to us being long-term users. Isn’t their transparency wonderful?
As we saw yesterday, the obviously spiritual language, excitement, and enthusiasm with which Stanford design prof Jeremy Utley promises “your own AI transformation” raise alarm bells. I think I’d hear them whether or not I was a Christian. Why would we trust Big Tech products to shape us spiritually?
Bought and paid for
Stanford is heavily endowed by Silicon Valley, with huge investments from Google, Marc Andreessen, and other AI visionaries. The university’s unique link with Big Tech makes it obvious why it would enthusiastically promote the unbridled embrace of chatbots (large language models) and other GenAI.
And apart from Utley’s Stanford connections, his venture capital firm, international speaking schedule, and large public platform suggest that he’s well compensated in money and fame for his role as an evangelist for the industry.
With that context in mind, let’s consider Utley’s presentation about using AI to enhance our creativity. I’m going to interact with a few of the key quotes you will see here.
Winston Churchill
I’ve always been jealous of Winston Churchill.
Utley spends almost 10% of the 13-minute presentation on his intro, starting with some impressions of Gary Oldman’s outstanding performance in the movie Darkest Hour. His story is compelling, designed to hook the viewer with a promise: you can be just as powerful as Churchill if you take Utley’s advice.
But this makes me wonder: Did Utley dictate the script of this presentation from his bathtub, by passing a few prompts to GenAI?
More importantly, what level of trust would we have to offer Big Tech to let their bots choose our words for us? Should GenAI be a trusted assistant with world-changing speechwriting powers?
Given that the first 10% of the presentation is an emotionally rich story that draws on images of a legendary leader, we know we’re not watching objective, dispassionate training. This is straight-up advocacy. Even calling it propaganda isn’t too strong.
Canonical book
To me, the fact that I wrote the canonical book on idea generation just prior to AI is like writing the best book about retail just before the internet.
This next part of the video — more than 10% — introduces Utley himself. He makes quite a claim here, that his book is “the canonical book on idea generation.” No lack of self-esteem there. But then he parlays that into the claim that GenAI makes even his important book obsolete.
Again, this is credibility-building, but in exaggerated, hype-filled marketing language. These are not objective, scientific, reasoned arguments. Red flags go up for me here.
Do not ask GenAI; let it ask you
[To the chatbot]: I want to ask how I should answer this question. What’s the best way of framing that question to an AI?
This seems to be the heart of Utley’s advice. And to me, it’s chilling in its implications.
He wants us to ask AI how to use AI, almost as a meta-guide: a guide to how to be guided by a chatbot. Going beyond the already risky move of asking a question, to asking what questions we should ask, puts us in a position of submission to GenAI:
- We are the student; GenAI is the teacher.
- We are the padawan; GenAI is the Jedi.
- We are the disciple; GenAI is the prophet.
Utley advocates exactly the race to intimacy with AI that Tristan Harris and Aza Raskin warned about in their important “AI Dilemma” presentation a couple of years ago.
We only ask important questions of people we trust, and as trust grows, intimacy follows. When we need an answer, when we want to know, we confer authority on the person we trust enough to query. When we ask a chatbot, it feels personal to us, and intimacy blossoms.
The more we trust the chatbot, the more the McLuhanian transformation takes place: our minds are extended into the chatbot, while our own mental capabilities are amputated, and we are numbed to the process.
But GenAI is not worthy of our trust, nor are the Big Tech behemoths pushing it everywhere.

Confidently wrong
Consider how often GenAI is confidently wrong (putting aside the fact that GenAI has no grounding in reality, so it has no idea whether anything it says is right).
We’re constantly told that GenAI is getting exponentially better, and that it’s just a matter of time before hallucinations are solved. But just this month (May 2025), Gary Marcus documented several egregious yet simple errors that show the need for constant vigilance when using GenAI. OpenAI’s own recent documents show that its latest models are hallucinating more than the previous ones, not less, with hallucination rates ranging from 16% to 79%. So the claim that scale would make everything better isn’t working out.
Why are they doing it?
Why would Big Tech create a system that pretends to be a personal agent when it’s not, that pretends to care when it doesn’t, that is always confident even when wrong, but that quickly responds to everything and is available everywhere? Because they want us to build an intimate relationship that turns into full-on dependency. It’s for their good, not ours.
But why would we go along? Because of all the hyped gains we’re supposed to realize. Outsource our minds to Big Tech, and they’ll give us back incredible efficiency. But back to the video:
Adam, the “back-country” national park ranger, and 20 years to 45 minutes
The National Park Service is estimating that the tool that Adam built in 45 minutes is going to save the service 7,000 days of human labor this year. That’s the kind of impact that normal professionals can have, even without any technical ability, if only they’re given very basic foundational training.
This is an incredible story. What manager can resist the promise of that kind of productivity gain? Even if he’s half wrong, or 10x wrong? Sign me up!
But there are so many problems with this. First, when Utley documented this story on his blog in “The Story of An Unlikely AI Hero,” he didn’t make the productivity claim under the authority of the National Park Service. He said,
I did a quick, real-time back-of-the-envelope calculation. “If this tool saves just one or two days per request across the parks in the system, that’s over 7,000 days of labor saved annually.”
Somewhere between telling the story on his blog and retelling it in the video, Utley’s own guess became an official claim: the National Park Service is supposedly saving millions of dollars a year because one “back-country park ranger” spent 45 minutes with a chatbot. The “back-of-the-envelope” calculation became a US federal government endorsement.
When people wonder why I say “propaganda,” this is what I’m talking about.
Here’s some analysis, not hype
The claim seems to be that across all the national parks, rangers make about 3,500 requests for materials per year, each requiring an average of two days to prepare the paperwork. So now that Adam “built” a “tool” with ChatGPT, those documents take literally no time to create? And it works uniformly across every request, at every park, for every ranger? That seems like a huge back-of-the-envelope exaggeration.
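To see how fragile the headline number is, here’s a minimal back-of-the-envelope sketch of my own. The request count and days-per-request are the figures implied by the claim above; the adoption and review-time discounts are hypothetical placeholders I’ve chosen for illustration, not anything from Utley or the Park Service.

```python
# Back-of-the-envelope check of the "7,000 days saved" claim.
# Inputs are assumptions for illustration, not NPS figures.

requests_per_year = 3_500   # implied: material requests across all parks
days_saved_each = 2.0       # implied: paperwork days saved per request

headline_savings = requests_per_year * days_saved_each
print(f"Headline claim: {headline_savings:,.0f} days/year")   # 7,000

# The headline assumes every request, at every park, is fully automated
# with zero review time. Discounting for partial adoption and for human
# review of error-prone output shrinks it fast (rates are hypothetical):
adoption_rate = 0.5         # fraction of requests that actually use the tool
review_days_each = 0.5      # human time to check each AI-drafted document

realistic = requests_per_year * adoption_rate * (days_saved_each - review_days_each)
print(f"Discounted estimate: {realistic:,.0f} days/year")     # 2,625
```

Even with generous hypothetical discounts, the estimate drops by more than half, and that’s before asking whether the documents are worth producing at all.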

What is the value of these AI-generated documents if they can be so fully automated? Who is reading them? What are they reading? Are they okay with an average 33%+ error rate (from the OpenAI doc shared earlier)? Is it just that a document with certain words has to show up in some government inbox, then some other agency cuts checks? Why not spend another 45 minutes writing a tool for that agency to read the AI-generated documents? Then, why not cut out both tools and just let rangers order what they want from Amazon with an unlimited federal credit card?
To me, selling AI with such exaggerated claims is a “gold rush” mentality. Stake your claim, get rich quick, and ignore the collateral damage. We like to think we’re different from people in the 19th century who risked life and limb and abused people and animals to cross the Yukon in search of gold. But gold rush messaging still tempts us.
Utley later claims, “AI makes people 25% faster [with] 40% better quality.” No sources are offered, no industries, no context or relevance. These numbers are more propaganda, like his back-of-the-envelope guess turned federal endorsement. We’re simply to trust his authority as a Stanford professor.
These claims are a sandy foundation on which to build trust. I wouldn’t do what he wants me to do based on these shaky promises.
Utley’s grand finale — and my mistrust
The only correct answer to the question “How do you use AI?” is: I don’t. I don’t use AI; I work with it. When you start working with AI, it’ll change everything.
This is the transformation he talked about.
I don’t trust Big Tech to change everything into their image. They have too many perverse incentives to make this work for them, but not for us. They broke the world’s trust with social media. They’ve turned hundreds of millions of people into distracted, compulsive users of their products using deception and powerfully dehumanizing strategies. And they’re taking those same strategies to the next level in their race to world domination through GenAI.
Utley, as an evangelist, uses the same hype as the industry at large, making huge claims of world-changing power if only we surrender ourselves to GenAI as our trusted, authoritative guide.
So what do I recommend instead? That’s out of this article’s scope. My quick encouragement: meditate on the epistemological considerations above, and avoid being captivated by Big Tech promises as you consider if/where GenAI fits into your workflow.
Here’s Part 1: Should we use generative AI chatbots to spark our creativity? What if we become so disconnected from human ways of knowing that real-life relationships, desires, and reality itself begin to shrivel away? Jeremy Utley uses unmistakably spiritual, almost salvation-like language: epiphanies, transformation. GenAI is a savior, and Utley is a prophet…