
Should We Use Generative AI Chatbots to Spark Our Creativity?

What if we become so disconnected from human ways of knowing that real-life relationships, desires, and reality itself begin to shrivel away?

This article is republished in two parts from That Doug Smith with the author’s permission.

The CEO of a medium-sized tech company asked for my thoughts on Jeremy Utley’s YouTube video: “How Stanford Teaches AI-Powered Creativity in Just 13 Minutes.”

Utley is a professor at Stanford’s design school and I’m grateful for the chance to interact with his presentation and work. It deserves a careful response, and this is mine.

Context: How Humans Know What We Know

Before I dig into GenAI, I want to establish the importance of how we as humans come to know what we know.

Epistemology, the study of how we know what we know, has long been a fascination of mine. Here’s what I’ve learned after ten years of study: True knowing comes through a process of discipleship. And my definition of discipleship is broader than religion: a disciple performs embodied practices in pursuit of a trusted authority.

When humans write, they do their best to communicate what they know so that readers, in a trusting relationship with the author, are changed as they begin to see the world the way the author does. Readers must trust the author, as student to teacher, in order to come to know what the author knows.

Dr. Esther Meek, philosopher of epistemology, has written several important books that help us understand how we come to know what we know. In her 2011 book Loving to Know, she shares her distinctive model: “covenantal epistemology.” She shows that we come to know through relationship with trusted, authoritative guides.

Meek shows how knowing is a messy process, not a formula. It depends upon trusting an authoritative guide until we experience “some kind of integrative and transformative shift.” Like learning to read or ride a bike, when we trust our authoritative guide we eventually reach a point where our actions embody what we have come to know.

According to Meek, Western civilization has adopted a false epistemology, claiming that knowledge is nothing more than mental acceptance of objective, disembodied information, no active practices or trusted guides required. In other words, we’re merely meat machines as shown in movies like The Matrix. Plug in the brain, load the program, and out comes Kung Fu.

GenAI Breaks Human Epistemology

GenAI pushes the false, disembodied epistemology to the ultimate end. It is — by definition — a disembodied information system. It scrambles human words into statistical relationships that are completely unplugged from reality, and especially decoupled from a trustworthy human guide. In response to incantation-like prompts, GenAI re-presents disembodied words from its training data as plausible-sounding phrases which are often confidently wrong.

The chatbot user interface (UI) is made to feel person-like. Designers intentionally craft the UI to foster a trust relationship with unsuspecting human users. In subtle and overt ways, the chatbot UI carries the authority of a person, even a teacher. Users begin to trust that its words are authoritative to guide them.

Humans bring their queries to the chatbot, submitting in a posture of apprentice to master, of disciple to rabbi, and are subsequently dehumanized by a scrambled, statistical confabulation of words uncoupled from any human relationship.

There’s at least a twofold risk here: not only is GenAI’s content often confidently wrong, but the UI is deceptive as well, attempting to convince us that a relationship exists and that it is a trusted guide when it is not.

Big Tech’s endgame is for their chatbot to be the most trustworthy guide in our lives. They are like the makers of silver and gold idols in Psalm 115, who are warned, “Those who make [idols] will become like them,” and, ominously for the rest of us, the verse ends with “so will all who trust in [idols]” (Psalm 115:8). The idol of our age is GenAI, and we’re already trusting it on a worldwide scale.

And by placing our trust in the machine, we become machine-like ourselves. With that context, let’s see where Jeremy Utley wants to guide us.

Who is Jeremy Utley?

Utley is a Silicon Valley insider. He wears many hats: Stanford professor, best-selling author, venture capitalist, and worldwide keynote speaker on the topic of AI. He’s one of the top people selling the utopian vision of AI that Big Tech wants us to embrace.

His website’s About page title says: “Jeremy facilitates epiphanies.” He’s not wrong. Merriam-Webster’s second definition of epiphany is “an appearance or manifestation especially of a divine being.” Utley offers “ah-ha moments” of an almost spiritual dimension.

And creating such epiphanies aligns with Big Tech execs like Sam Altman who believe they are “creating God.” I’ll have more to say about the spiritual dimension presently.

Utley’s three-minute speaker reel is filled with super-slick, emotionally charged language, along with visuals of his family, speaking events, happy clients, and über-relatability, presenting one thing: Trust me. And trust me when I say to trust AI.

Consider some of the quotes from his speaker reel.

Parenting advice?

I’m not embarrassed to say that GenAI helped me be a better parent.

That first time GenAI blew my mind and made me a better parent, in one of those ‘how’s daddy going to respond’ crisis moments we all face as parents, GenAI made me realize I could bring my kids onto my own team by reinforcing family values — something I’d never imagined until that moment.

GenAI will never have kids, nor will it ever know what it is to be a parent. GenAI won’t ever even know what a child is, because it knows nothing at all. But by scrambling words statistically connected to the topic of parenting from across the internet, Utley’s mind was blown when he learned that his kids should be on his team.

If thinking of his family as a team is mind-blowing, what would he think after reading a parenting book written by a good parent?

GenAI superpowers

I’ve seen technology companies double or triple the revenues of their key product lines by infusing GenAI superpowers.

These are amazing promises, founded on the notion of “collaborating” with GenAI. If we invite GenAI in as an authoritative guide, we can have incredible results. How can an executive, manager, or rank-and-file employee not be caught up in these promises, made by someone so authoritative and winsome?

But Utley doesn’t consider the full cost. In exchange for GenAI superpowers, we lose the ability to think and reason for ourselves. We become so disconnected from human ways of knowing that real-life relationships, desires, and reality itself begin to shrivel away.

Spiritual promises

You’ll undoubtedly have your own epiphanies too. I wonder what ideas you’ve never imagined are waiting on the other side of your own AI transformation.

Epiphanies. Transformation. This is unmistakably spiritual, almost salvation-like language. GenAI is a savior, and Utley is a prophet. If we become “transformed by the renewing of our minds” (Romans 12:2), maybe even “repent and be baptized” (Acts 2:38) to get to Utley’s “other side,” will we have a transformation beyond “what we could ask or imagine” (Ephesians 3:20)?

What exactly is this “AI transformation” that Utley promises? Is it anything like the transformation described in a Rolling Stone article (May 4, 2025) titled, “People are losing loved ones to AI-fueled spiritual fantasies”?

One woman’s “partner of seven years fell under the spell of ChatGPT in just four or five weeks.” She continues:

“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says. “Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.” In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. “He was saying that he would need to leave me if I didn’t use [ChatGPT], because it [was] causing him to grow at such a rapid pace he wouldn’t be compatible with me any longer,” she says.

Are these edge cases that only imbalanced people fall into? I don’t think so; I could share many more stories. I think a better question is: Why are GenAI chatbots designed to foster this level of intimacy, or to enter the spiritual domain at all?

Next: Big Tech’s recent Big Admission: Chatbots are not very reliable


Doug Smith

Doug is a passionate voice in the epic battle against screen addictions, especially through his award-winning book, [Un]Intentional: How Screens Secretly Shape Your Desires, and How You Can Break Free (https://unintentionalbook.com). Doug loves to help individuals and families break free from screen addiction so they can live out their God-given purpose. Doug and his wife Lyneta are happy empty nesters and are blessed with four grown daughters.