
Natural Intelligence Creates Information; AI Processes It

Human intelligence creates new information through free, purposeful choices, while AI can only process and transform existing data according to predetermined rules
Glauco Amigo

This article was contributed by Glauco Amigo, Senior Algorithm Engineer, Snap-on Equipment. He holds a Ph.D. and an M.S. in Electrical and Computer Engineering from Baylor University.

Free choice and purpose are central to human intelligence. Abilities such as high IQ, memory, and computation are only a narrow slice of its range. What distinguishes intelligence is the capacity to choose well for reasons and ends. If an action is the result of external manipulation, it does not express intelligence; free actions do. On this view, every genuinely free human choice generates new information.

Computers, by contrast, excel at processing information. They store, compute, apply rules, and discover patterns, but they do not originate information in the human sense. Thus, “artificial general intelligence” (AGI) can only refer to computational capability, a subset of human-level intelligence.

The definitional hinge of artificial general intelligence (AGI)

If intelligence is fast, adaptive computation, AGI is a moving yet tractable target: calculators outpace us in arithmetic; contemporary systems surpass us in search and large-scale pattern discovery. But if intelligence involves agency — the capacity to choose for a purpose — computation alone does not suffice. Computation does not yield free will or intrinsic normativity. Constrain intelligence to computation and AGI appears likely; include agency, meaning, and responsibility, and AGI recedes.

Two kinds of information

To clarify, there are two concepts of information at work here. Shannon information I(x) [1] measures how surprising the message x is under a probability model — the bits needed for reliable transmission. It tracks coding efficiency, not meaning. Algorithmic information K(x) (Kolmogorov complexity) is the length of the shortest program that generates x on a universal computer. Incompressible strings have high K(x); regular strings have low K(x). Conditional algorithmic complexity K(x|y) asks how much additional information is minimally required to produce x given context y. These distinctions separate efficient transmission (I(x)) from intrinsic descriptive content (K(x)) and raise the question of where the “shortest program” originates.
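As a concrete illustration, here is a minimal Python sketch. It assumes a toy probability model for I(x); and since K(x) is uncomputable, it uses the output length of a general-purpose compressor as a rough upper-bound stand-in, not the true complexity:

```python
import math
import os
import zlib

def self_information(p: float) -> float:
    """Shannon self-information I(x) = -log2 P(x), in bits."""
    return -math.log2(p)

# Under a model where message x has probability 1/8, I(x) = 3 bits.
print(self_information(1 / 8))  # 3.0

# K(x) is uncomputable; compressed length gives only a rough upper
# bound, but it separates regular strings from irregular ones.
regular = b"ab" * 500            # low K(x): a short program generates it
irregular = os.urandom(1000)     # high K(x): essentially incompressible
print(len(zlib.compress(regular)))    # small (tens of bytes)
print(len(zlib.compress(irregular)))  # close to 1000 bytes
```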

Figure 1. Sora’s depiction of Shannon, Kolmogorov, Solomonoff, Turing, and Chaitin in a spacecraft.

I propose two claims:

1) Natural intelligence creates algorithmic information (K(x))
2) Artificial intelligence does not create algorithmic information (K(x))

Regarding (2): computation follows rules encoded in programs and inputs; a program precedes its execution. Computation can rearrange, compress, or surface structure, but it cannot originate descriptive content beyond what is present in the program and inputs. [2]
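A minimal sketch of this bound, again using compressed length as an assumed stand-in for K(x): a deterministic function can expand a short input into a long, superficially novel output, but the output's descriptive content remains bounded by program plus input.

```python
import zlib

def f(x: bytes) -> bytes:
    """A deterministic transformation: repeat the input and mask each byte."""
    return bytes(b ^ 0x5A for b in x * 100)

x = b"purpose"
y = f(x)  # 700 bytes of output from a 7-byte input

# The output looks new, but its descriptive content is bounded by the
# input plus the short, fixed program: K(f(x)) <= K(x) + c_f.
print(len(y))                 # 700
print(len(zlib.compress(y)))  # tiny: the regularity betrays its origin
```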
Claim (1) turns on whether human choices genuinely originate information. In a strict materialist view, all actions are determined by prior physical states; free will is illusory; intelligence reduces to computation. If so, human choices cannot add algorithmic content beyond what physics already entails. By contrast, if free will is real, human decisions bring forth new algorithmic content not derivable from prior states. Purposeful choice, in this sense, writes the “program.”

Intelligence, choice, and the creation of information

Consider choice as information creation. An equiprobable binary decision yields one bit; richer option sets yield more. Choices gain meaning by serving purposes: they form structured sequences, or plans, that transform a context C into an outcome O. The minimal informational content required to achieve O from C is K(O|C). In human life, agency instantiates such content, even if not maximally compressed or perfectly aligned with intended outcomes. If intelligence requires free will, and free will creates new algorithmic information, then AGI in the human sense is unattainable by computation alone. Computation transforms given information; intelligence, through free decision, originates it.
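The arithmetic behind "one bit per binary decision" is easy to make explicit. The sketch below assumes equiprobable, independent options at each step of a plan; real choices rarely satisfy either assumption exactly:

```python
import math

def plan_information_bits(option_counts: list[int]) -> float:
    """Bits generated by a sequence of independent, equiprobable choices:
    a step with n options contributes log2(n) bits."""
    return sum(math.log2(n) for n in option_counts)

print(plan_information_bits([2]))         # 1.0: one binary decision
print(plan_information_bits([8]))         # 3.0: one choice among 8 options
print(plan_information_bits([2, 4, 16]))  # 7.0: a three-step plan
```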

Computation, novelty, and the limits of origination

Where does apparent novelty in AI systems come from? From inputs (training data, prompts, sensory streams), from human-specified architectures and objectives, from stochasticity (random seeds, sampling), and from deployment contexts. These are the channels through which information enters. The system reshapes and routes it. Surprise is not origination. Randomness can raise K(x), but novelty then traces to stochastic inputs, not to agentive reasons.
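A small sketch of the point about stochasticity: with a fixed seed, a "random" sample is fully determined by program plus seed, and its incompressibility traces to that stochastic input rather than to any reasoning inside the code.

```python
import random
import zlib

# With a fixed seed, a "stochastic" sample is fully determined by
# program + seed: rerunning reproduces it bit for bit.
random.seed(42)
a = bytes(random.randrange(256) for _ in range(10_000))
random.seed(42)
b = bytes(random.randrange(256) for _ in range(10_000))
print(a == b)  # True: the apparent novelty traces back to the seed

# The sample is nearly incompressible (a high upper bound on K) ...
print(len(zlib.compress(a)))  # close to 10_000 bytes
# ... but that complexity entered through the stochastic input,
# not through any purpose or reason of the system's own.
```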

The Einstein Test: What AI discoveries would mean

What if an AI produced a major scientific discovery — say, deriving relativity or solving quantum gravity (see Sam Altman’s AGI threshold)? That would be astonishing and would illuminate the scope of computation in science. It would not instantiate agency, normativity, or responsibility. It would show what computation can do with immense data, architecture, and search, not that computation has acquired the capacity to choose for purposes. Spectacular computation can mimic outputs of discovery without originating the purposive standards that guide discovery.

Accountability and the “who decided?” test

Accountability points to the same conclusion. Any decision an AI appears to make traces to designers, deployers, and users: architectures and loss functions, curated data, prompts, constraints, and operational context. Even instructions to “choose your own goals” are meta-goals set by humans. Responsibility remains human because purpose remains human. Treat AI as a computational amplifier — extraordinarily fast and flexible — but not an originator of ends or meaning. Align objectives and proxies with care: mis-specified purposes predictably yield misaligned outcomes.

Conclusion

Human intelligence creates information through free, goal-directed decisions; machines process information under given rules, data, and contexts. Computation can astonish, accelerate, and augment, but it does not originate purpose or normativity. If intelligence essentially involves agency, AGI in the human sense is not attainable by computation alone. If intelligence is restricted to computation, AGI becomes a definitional finish line that sufficiently powerful machines can cross. The central question is not whether machines can compute as well as we can, but whether computation, however vast, can replace the human capacity to choose for a purpose. It cannot.

[1] The formal definition of Shannon self-information is: I(x) = −log₂ P(x), where P(x) is the probability of message x.

[2] Within algorithmic information theory, for any computable function f, K(f(x)) ≤ K(x) + c_f, where the constant c_f depends on f but not on x.

