Mind Matters Natural and Artificial Intelligence News and Analysis

Harari, Humans, Algorithms, and AI

A futurist’s warning: AI may soon redefine the boundaries of human agency

Recently, the famed historian and author Yuval Noah Harari spoke at the 2026 World Economic Forum summit in Davos, Switzerland. Harari, known for his “history of the future” approach to questions of technology, transhumanism, and AI, made some startling claims about the direction he sees AI moving.

First, he claimed that AI would soon take over “anything made of words.” This includes laws, books, and religions, especially those rooted in holy texts such as Judaism, Christianity, and Islam. Second, this takeover will occur because “As far as putting words in order is concerned, AI already thinks better than many of us.” AI, with its ability to analyze books (and therefore laws and religious texts) in seconds, will soon become the undeniable “expert” of those domains.

As a starting point, I think we can grant Harari, at a basic level, that AI can read collected texts faster than any human. It can already write words faster than humans can. As of this writing, AI could produce a piece roughly this length in a fraction of the time it takes me.

Despite all of this, I think Harari is fundamentally wrong. His claims make sense as far as his logic goes; it is not his logic but his metaphysics that is the problem. I will examine each metaphysical issue in turn.

The Anthropological Assumption

The first issue for Harari is the anthropomorphization of the techno-sphere. At Davos, Harari refers to AI as “thinking,” and in his work Homo Deus, he describes information as a kind of agent.[1] While subtle, this anthropomorphization quietly hints at a deeper anthropological assumption: that humans are fundamentally like computers. Note that I do not mean Harari is indicating that humans and computers are equivalent; rather, I am suggesting that Harari sees humans as a kind of computer.

This sort of view is common amongst physicalists, who tend to reduce human mental states to mere functional description. In this view, called functionalism, mental states (feeling pain, remembering, being in love) are what they are because of the functions they perform in brain chemistry. The mental state of being in pain, for example, is the state that occurs when certain physical stimuli cause resultant electrochemical reactions in the brain. Much like we call a washing machine a washing machine because it washes, functionalists hold that pain is defined by its functional role in our neurochemistry.

Consider such a system for a minute: if we have physical stimulus X, it produces electrochemical reaction Y, which produces mental state Z. Further, any time stimulus X occurs, it will produce Y and Z.

In the functionalist mindset, physical stimuli can predictably produce certain mental states because human electrochemistry functions algorithmically. Every human mental state is entirely predictable if we can discover the corresponding electrochemical algorithms. Just as a computer functions predictably if we understand the underlying code, so too will a human.
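If the functionalist picture were right, mental states would behave like a deterministic lookup: stimulus X always yields reaction Y, which always yields state Z. A toy sketch of that picture (all mappings here are illustrative placeholders, not claims about actual neuroscience):

```python
# Toy sketch of the functionalist view: mental states as a
# deterministic pipeline from stimulus to electrochemical reaction
# to mental state. Every name below is a hypothetical example.

STIMULUS_TO_REACTION = {
    "tissue_damage": "c_fiber_firing",
    "familiar_face": "hippocampal_retrieval",
}

REACTION_TO_STATE = {
    "c_fiber_firing": "pain",
    "hippocampal_retrieval": "remembering",
}

def mental_state(stimulus: str) -> str:
    """On the functionalist view, the same stimulus X always produces
    reaction Y and state Z; the mapping is fully predictable."""
    reaction = STIMULUS_TO_REACTION[stimulus]
    return REACTION_TO_STATE[reaction]

print(mental_state("tissue_damage"))  # -> pain
```

The point of the sketch is only to make the assumption visible: on this view there is no remainder, nothing a mental state is over and above its slot in the mapping. That is precisely the assumption contested below.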

This anthropological starting point assumes the similarity of humans to computers, not the other way around. It is also this assumption that causes the technofuturists and transhumanists such exhilaration when they see potential improvements in AI and computing power; if such improvements can happen to computers, they can happen to humans, too.

There are many reasons why we might reject this anthropological assumption, but such an argument exceeds this article’s scope. Robert Marks has provided a wonderful critique of algorithmic anthropology in chapter 1 of his book Non-Computable You: What You Do That Artificial Intelligence Never Will. Rather than trying to provide a truncated argument here, I commend that to readers.

Regardless of reasons for rejecting the algorithmic view, the larger point is this: if we reject an algorithmic view of humans, then Harari’s claims are on thin ice. If humans are not algorithmic, AI will never be able to think the way humans do.

Human Exceptionalism and Religious Experience

Another connected assumption Harari makes is that, because humans are algorithmic, their experiences must also be algorithmic. AI, when powerful enough, will be able to “experience” things like books, laws, and text-based religions in the same way that humans can. Further, given that humans and AI can share these experiences, and AI can “think” faster than humans can, it is only a matter of time before AI can do a better job of experiencing these textual objects.

I think even a simple reading of history demonstrates the problems of this assumption. Unless we assume that humans are a kind of walking data center running on a biological computer, it should be clear that human thinking is more than simply having access to texts. Humans think in creative, often unexpected ways. Part of the reason humans get PhDs, for example, is to look at old problems in new ways.

As a Christian theologian, I can happily hold that humans are created in the Imago Dei, the image of God. In other words, humans share in the creative capacities of the supreme creator. The God who created the world from nothing bestows upon humans the ability to also be creative. If this is true, then humans can do something that computers or AI never will: be genuinely creative.

Further, Harari’s assumption holds that mere access to material makes one an expert. But such a definition captures only knowledge, not the wisdom to use it. I can easily imagine a world in which an AI has access to all the works of Thomas Aquinas and all the works about Thomas.

I can imagine asking it what Thomas said about X or how most interpreters understand him when he says Y. What the AI can never do, though, is tell me how I should think about Thomas’s views on natural law or his construction of divine simplicity. It may tell me what others have said, but it can never give me an experientially informed opinion in the way that an 80-year-old priest who has studied Thomas every morning at 5 might. AI knows what the interpreters have said, but it has not deeply pondered Thomas or spent the majority of its life praying to the Lord Jesus. Though AI may be able to couch its language in conversational tones, it is primarily a system of whats, not hows or whys.

AI, at its best, can present propositions, but it cannot provide the wisdom necessary to interpret those propositions. AI might be a great repository of information, but it will never be an expert because it will never have the wisdom or experience necessary to truly understand life or how to live it.

Conclusion

For those who reduce humans to a kind of algorithmic functionalism, Harari’s claims and predictions make sense. If humans are like computers, one can easily imagine how computers could eventually usher in a new kind of reality. The metaphysical assumptions one holds about AI and technology shape how one sees both the teleology and eschatology of such things.

If Harari’s assumptions are true, his predictions follow logically. But if we reject the reduction of humans to algorithms, his claims reveal a profound confusion: mistaking information for wisdom, speed for insight, and the capacity to process words for the ability to understand life. The real irony? It takes uniquely human, and therefore non-algorithmic, creativity to imagine such a future in the first place.


[1] Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow (New York: Harper, 2017), chap. 11.


