How Materialism Handicaps Us in Understanding AI’s Limits
Sabine Hossenfelder acknowledges AI’s limits, yet she is convinced that it will become conscious

In “Scientists warn of AI collapse,” theoretical physicist Sabine Hossenfelder warns, “We’ve all become used to AI-generated art in the form of text, images, audio, and even videos. Despite its prevalence, scientists are warning that AI creativity may soon die. Why is that? What does this mean for the future of AI? And will human creativity be in demand after all? Let’s have a look.”
She discusses a problem that chatbots and other generative AI create: they end up reprocessing and degrading their own output, essentially eating their own tails:
[1:28] The more AI eats its own output, the less variety the output has. For example, in a paper from November, a group of scientists from France tested this for a large language model. They used an open source model called OPT from Meta and developed several measures for diversity of language. Then they tested what happens to the diversity of language for tasks requiring different levels of creativity. For example, summarizing a news article requires low creativity; writing a story from a prompt requires high creativity. In this table they summarize the language diversity score for the levels of training iteration. As you can see, they pretty much all drop. The language diversity drops especially rapidly for storytelling. A similar finding was made earlier by a group from Japan for AI-generated images based on Stable Diffusion. The AIs decrease the diversity of the images. (March 5, 2024)
That’s most likely because AI is not a source of creativity, as Robert J. Marks and Eric Holloway pointed out here recently.
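The mechanism behind this collapse can be seen even without a neural network. In the toy simulation below (an illustrative sketch, not the French group’s actual methodology, and much simpler than a real language model), a “model” is just a table of token frequencies fitted to a corpus; each generation it is retrained on its own sampled output. Because a token that fails to appear in one generation gets zero probability and can never return, diversity can only shrink:

```python
import random
from collections import Counter

def train(corpus):
    # "Training" here is just fitting token frequencies --
    # a crude stand-in for a model learning from its data.
    total = len(corpus)
    return {tok: n / total for tok, n in Counter(corpus).items()}

def generate(model, n):
    # Sample n tokens according to the model's learned frequencies.
    tokens = list(model)
    weights = [model[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=n)

random.seed(0)
vocab = [f"tok{i}" for i in range(100)]
# Seed corpus: the stand-in for diverse, human-written training data.
corpus = generate({t: 1 / len(vocab) for t in vocab}, 200)

diversity = []
for generation in range(30):
    model = train(corpus)            # retrain on the current corpus...
    corpus = generate(model, 200)    # ...which is now the model's own output
    diversity.append(len(set(corpus)))

print("distinct tokens, first vs. last generation:",
      diversity[0], "->", diversity[-1])
```

Once a token drops out of the sampled corpus, the refitted model assigns it zero probability, so the count of distinct tokens is non-increasing across generations; fresh human input is the only way to restore the lost variety.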
But she still thinks they will attain human-like consciousness
And yet, in another episode days later, “New Rumours that AI Has Become Sentient,” Hossenfelder assesses a researcher’s claim that an AI program has shown signs of sentience.
While not entirely on board with that, she asserts,
[0:04] I’m convinced that it won’t be long until a computer program will reach human level intelligence and also become conscious. But I don’t think we’re quite there yet. … The reason I think it’s basically certain that computer programs will become conscious is that there’s nothing special going on in neurons that can’t be reproduced on a computer. It’s just that the brain has a starting advantage of some billion years of evolution, and that includes a lot of hardwired function. The rest is basically biological — machine learning before it was cool. But eventually, technological evolution will catch up to bio-evolution. (March 8, 2024)
Wait. Human consciousness is immaterial. So, probably, is the creativity that flows from it. If a problem is not material or technological in origin, it won’t have a material or technological solution.
But materialism requires Hossenfelder to assume that a technological solution must exist and therefore that a computer program, despite its evident limitations, can become conscious. It’s helpful to keep in mind that such a position is not something the materialist derives from the evidence; it is imposed by the ideology.
You may also wish to read:
Model collapse: AI chatbots are eating their own tails. The problem is fundamental to how they operate. Without new human input, their output starts to decay. Meanwhile, organizations that laid off writers and editors to save money are finding that they can’t just program creativity or common sense into machines.
and
Programmers: Why materialism can’t explain human creativity. Eric Holloway and Robert Marks explain why it’s unlikely that the mind that enables human creativity is merely the product of animal evolution. The total space-time information capacity of the universe falls significantly short of the ability to generate meaningful text of only a few hundred letters.