
Will AI End Deep Thinking?

AI usage can erode critical thinking skills in older workers, not just the young.

Many analysts have been saying for years that generative AI usage can prevent young people from developing critical thinking skills. Recently, however, attention has turned to older workers.

One study surveyed 200 owners, founders, CEOs, and other titans of industry and found that 62% of respondents use AI to make “most decisions.” Furthermore, “140 of the titans reported second-guessing their own ideas when they conflicted with AI’s recommendations, while 46% said they now rely on advice from AI more than that of their own business colleagues.”

Those are big numbers, and reports of AI’s success are still few. More importantly, “a joint study conducted by Carnegie Mellon and Microsoft found that knowledge workers who trusted the accuracy of generative AI systems had a lower propensity for critical thought.” That is scary.

But it is not surprising, because “when humans are confident that a task has been competently automated, we tend to take a backseat and let the system do its thing — sometimes literally, as in the case of self-driving cars.”

A second study, by Anthropic researchers, found similar results. AI assistants had little impact on productivity but did cause “a statistically significant decrease in mastery. On a quiz that covered concepts they’d used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand.” So skill loss should be a concern for all workers, high or low in the organization.

Writing helps you think

Forbes reaches similar conclusions through a different approach. Gil Press says that “relying on AI for writing means zero thinking.” As another respected journalist says, making a point “not obvious to boosters of AI in education: I changed my mind several times about what I wanted to say. Writing is like that. It helps you think. Often I don’t have an opinion until I try to write it.” Or consider what insiders are saying about Amazon: “Damn the thinking, full speed AI.” This has become the mantra “even at a company that for years has started each important meeting with everybody reading in silence a six-page document describing the product or project to be discussed. But now, Amazon’s leadership is encouraging employees to let AI write for them.”

Press argues that a big worry “is that Amazon is losing sight of writing’s centrality in its deliberative, thoughtful culture as it pursues powerful, new tools. Writing is thinking. That was the whole point of Amazon’s writing culture. I can’t tell you how many times I changed my mind when writing a narrative. And even when I didn’t, my arguments were more precise for having written them down. Now we have chatbots writing six-pagers to be summarized [for managers] by other chatbots.”

One psychiatrist worries about people who rely too much on AI. Søren Dinesen Østergaard, the Danish psychiatrist who predicted the affliction now commonly known as “AI psychosis,” warned that academic scholars risk accruing a “cognitive debt” when they outsource their work to AI chatbots.

Forced usage

Yet many companies are forcing workers down a road of lower critical thinking and perhaps lower performance. According to the Wall Street Journal, big companies such as Google and Amazon are evaluating employees by how much they use AI: “The tech industry has moved to the next phase: tracking their workers’ use of AI tools—and enforcing it if they have to.” If AI is so good, why do companies have to force employees to use it? But that is another issue.

A survey found that around 42% of tech-industry workers said their direct manager expects AI use in day-to-day work, up from 32% just eight months before. So workers are being pushed down the AI road toward a world of reduced skills.

In one company, employees get an AI competency score from one to five—scoring a five if they create systems that improve the workflow of others. “It has also created a new award: Whoever comes up with the most effective AI-driven process wins a vacation stipend worth several thousand dollars.” This makes more sense because devising a new workflow will likely encourage more critical thinking.

At Amazon, more than half a dozen current and former corporate employees, in roles ranging from software engineer to user-experience researcher to data analyst, said the company is pressing employees to integrate AI across all aspects of their work, even though these workers say the push is hurting productivity. They say Amazon is rolling out AI use in a haphazard way while also tracking their AI use, and they worry the company is essentially using them to train their eventual bot replacements. All of this, they said, is demoralizing.

Encouraging software engineers to write more code

But the scariest example from the story involved Meta’s new performance-review system: “The system can track how many lines of code an engineer wrote with AI and includes AI tools that provide individuals with insights about their own impact, which they can use for self-evaluations.” In addition to the skill reduction mentioned above, there is the problem of what happens when bad code is generated and affects downstream workers. We know that AI assistants generate bad code, and that coding errors from AI assistants are harder to find than those from human coders. Thus writing more lines of code with AI tools is not necessarily the best way to proceed.

And this problem is relevant for every type of work, because AI slop is common and affects other workers. Many studies have found chaos at companies that use AI intensively, with new people brought in to fix the slop, bad text and bad illustrations, that AI generates.

This emphasis on AI use reminds me of universities measuring researchers by how many papers they publish, without regard for quality. That has resulted in too many papers and not enough high-quality ones. In the same way, companies that measure coders by how many lines of code they write will end up with too many lines of useless code. AI can help improve productivity, but companies need a better way of using it and of encouraging employees to do so. Meanwhile, the AI bubble is looking like it will pop.

Jeff Funk is a retired professor, winner of the NTT DoCoMo mobile science award and the author of six books, most recently Unicorns, Hype and Bubbles.


Jeffrey Funk

Fellow, Walter Bradley Center for Natural and Artificial Intelligence
