Mind Matters Natural and Artificial Intelligence News and Analysis

Eric Holloway


For BitHeaven’s Sake

A satirical short story on the transhumanist quest (and failure) to achieve immortality

Bob and Sue were on their way to church one morning. On their way they ran into their friend Fred. Fred was very wealthy, a billionaire in fact. Fred waved hi. Bob and Sue waved back. They asked Fred to come with them to church. Fred said no, he had more important things to do. “What is so important?” asked Sue. “I’m off to the real deal,” beamed Fred. Bob looked confused. “Real deal about what?” “You have a fake promise of eternal life. I’m about to get the real thing.” “You can’t be serious. Start talking some sense.” “Seriously. Here’s my voucher, see it right here.” Sue grabbed the piece of paper from Fred and read it aloud. “Good for one digital immortalization Read More ›


AI and Human Text: Indistinct?

Here's a mathematical proof that challenges the assumption that AI and human-made text are the same

What is a poor teacher to do? With AI everywhere, how can he reliably detect when his students are having ChatGPT write their papers for them? To address this concern, a number of AI text detector tools have emerged. But do they work? A recent paper claims that AI-generated text is ultimately indistinguishable from human-generated text. The authors illustrate their claim with a couple of experiments that fool AI text detectors through simple variations to AI-generated text. Then, the authors go on to mathematically prove their big claim that it is ultimately impossible to tell AI text and human text apart. However, the authors make a crucial assumption. Faulty Premises The proof assumes that AI-generated text will become closer and closer to Read More ›


AI vs. Human Intentionality

If ChatGPT were trained over and over on its own output, it would eventually turn to gibberish

We can do a simple experiment that demonstrates the difference between AI and human intentionality. ChatGPT and the like are a sophisticated form of a mathematical model known as a Markov chain. A Markov chain is based on the Markov assumption that the future is entirely a product of the recent past. In other words, if we know the recent past, then nothing else we learn about the more distant past will improve our ability to predict the future. In ChatGPT terms, this means ChatGPT is based on the assumption that everything we need to know to predict future words is contained within a limited window of previously seen words. ChatGPT’s window was 3,000 words, and I believe the newest version has Read More ›
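The Markov assumption is easy to see in miniature. Below is a toy word-level Markov chain in Python, a drastically simplified sketch for illustration only (ChatGPT's actual architecture is a transformer over a far larger context window): the next word is predicted from nothing but the current window of recent words.

```python
import random
from collections import defaultdict

def build_chain(text, window=1):
    """Map each context (the last `window` words) to the words seen following it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - window):
        context = tuple(words[i:i + window])
        chain[context].append(words[i + window])
    return chain

def generate(chain, start, n=10):
    """Pick each next word using only the current context: the Markov assumption."""
    context = start
    out = list(context)
    for _ in range(n):
        options = chain.get(context)
        if not options:
            break
        out.append(random.choice(options))
        context = tuple(out[-len(context):])
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
print(generate(chain, ("the",), n=5))
```

Nothing before the current window influences the choice of the next word, which is exactly the limitation the article describes.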


We Can’t Build a Hut to the Moon

The history of AI is a story of a recurring cycle of hype and disappointment

Once upon a time there lived a tribe on the plains. They were an adventurous tribe, constantly wanting to explore. At night they would see the moon drift lazily overhead, and they became curious. How could they reach the moon? The moon was obviously higher than their huts. Standing on the highest hut, no one could reach the moon. At the same time, standing on the hut got them closer to the moon. So, they decided to amass all their resources and build a gigantic hut. Their reasoning: if standing on a short hut got them closer to the moon, then standing on a gigantic hut would get them even closer. Eventually the tribe ran out of mud and Read More ›


ChatGPT Violates Its Own Model

Based on these exchanges, we can at least say the chatbot is more than just the ChatGPT neural network

Here is a quick overview of how ChatGPT operates under the hood. This will make it easier to spot suspicious behavior. The following is at a very high level. For the gory details, see these two guides: “The Illustrated GPT-2 (Visualizing Transformer Language Models)” by Jay Alammar (jalammar.github.io) and “The GPT-3 Architecture, on a Napkin” (dugas.ch). What is ChatGPT? Let’s start with what ChatGPT is. ChatGPT is a kind of machine learning algorithm known as a neural network. To understand what a neural network is, recall your algebra classes. You remember being given a set of equations and being told to solve for some variables. Then you learned you Read More ›


Blinded by a Defunct Theory

The "interaction problem" is everywhere we look in physics, but the dogma of materialism remains

Materialism. What a weird word. It sounds like a ghost, materializing in front of me. And it is sort of like a ghost, one that has mysteriously taken over the minds of many intelligent people. Because they believe in materialism, these smart people don’t believe in ghosts. Especially the ghost in the machine. The problem is there is no way for the ghost to interact with the machine. This is known as the “mind–body interaction problem.” The great thing about materialism is that this theory, at least, doesn’t have an interaction problem. Any material thing can interact with any other material thing. Yet there is a deep irony. Let’s explore the idea of materialism to see why. Materialism is the idea that reality consists only of matter. Read More ›


Found! ChatGPT’s Humans in the Loop!

I am the only writer I’ve been able to discover who is suggesting ChatGPT has humans in the loop. Here is a series of telling excerpts from our last conversation…

The new ChatGPT chatbot has wowed the internet. While students revel in the autogenerated homework assignments, the truly marvelous property of ChatGPT is its very humanlike interactions. When you converse with ChatGPT you could swear there was a human on the other end, if you didn’t know better. For all intents and purposes, ChatGPT has achieved the holy grail of AI and passed the Turing test, on a global scale. Always quick to snatch a deal, Microsoft is currently in talks to spend a mere $10B to acquire half “the lightcone of all future value.” However, things are not always what they seem. Previously, I pointed out aspects of ChatGPT that implied humans were helping craft the chatbot’s responses. Now, Read More ›


Is ChatGPT Solely a Neural Network? I Tested That…

Changing the random number test to a "computer easy, human hard" test requires simply that we ask ChatGPT to reverse the random number. It couldn't.

ChatGPT is a direct descendant of GPT-3, and is a fancy form of a machine learning algorithm called a neural network. For an overview of all of ChatGPT’s neural network complexity, here is a fun article. However, all that is beside the point. The important thing about a neural network: It can only generate what is in its training data. Therefore, ChatGPT can only produce what is in its training data. ChatGPT’s training data does not include the conversation you or I are having with ChatGPT. Therefore, if something novel occurs in the conversation, ChatGPT cannot reproduce it. That is, if ChatGPT is a neural network. Conversely, if ChatGPT reproduces novel text from the conversation, then ipso facto ChatGPT is not a Read More ›
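The "computer easy, human hard" test in the subtitle turns on a simple asymmetry: reversing a long, novel digit string is tedious for a human but trivial for any conventional program. A one-function sketch (the sample string here is just an illustrative input, not one from the article's experiments):

```python
def reverse_digits(s: str) -> str:
    """Reverse a string: trivial for a program, tedious for a human."""
    return s[::-1]

print(reverse_digits("8734510296"))  # → "6920154378"
```

The point is that a system that could not perform this operation on a string it had never seen before would be failing at something any ordinary computer does instantly.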


Yes, ChatGPT Is Sentient — Because It’s Really Humans in the Loop

ChatGPT itself told me there could be humans crafting its input. My tests indicate that that’s likely true

OpenAI recently released a new AI program called ChatGPT. It left the internet gobsmacked, though some were skeptical and concerned about its abilities. Particularly about ChatGPT writing students’ homework for them! [ChatGPT] also appears to be ominously good at answering the types of open-ended analytical questions that frequently appear on school assignments. (Many educators have predicted that ChatGPT, and tools like it, will spell the end of homework and take-home exams.) Kevin Roose, “The Brilliance and Weirdness of ChatGPT” at New York Times (December 5, 2022) The really amazing thing is ChatGPT’s humanlike responses. They give an observer an unnerving suspicion that the AI is actually sentient. Maybe it is actually sentient. Wait, what? You heard me. The AI is Read More ›


CAPTCHA: How Fooling Machines Is Different From Fooling Humans

Automated censorship is intended to protect against a tidal wave of spam but it could certainly have other uses…

Readers of Mind Matters News have likely heard of the iconic Turing test. Computer pioneer Alan Turing famously invented a test to determine whether a program could pass as a human. The gist is, if a program can fool human testers into believing it is a human, then the program is intelligent. Not everyone is convinced. Thing is, it doesn’t take much to fool us humans! Take Eliza, a program of only a few hundred lines, written in the 1960s, which fooled many people into believing it was a real human therapist. But what if we flip the Turing test on its head? Instead of a test where a program tries to pass as human, we use a test Read More ›


AI Art Is Not “AI-Generated Art.” It is Engineer-Generated Art

The computers aren’t taking over the art world. The engineers are. Just the way engineers have taken over the music world with modern electronic music

Creativity is a mysterious thing. Our world economy is powered by creativity, yet despite the best efforts of our best engineers, creativity has not been captured by a machine. Until recently. With the new school of AI things have changed. We now have GPT-3 that can digress at length about any topic you give it. Even more remarkable, we have the likes of Dall-E, Midjourney, and Stable Diffusion. These phenomenal AI algorithms have scaled the peak of human creativity. AI can now create art that has never been seen before: The new artistic AI has become so successful that image social networks have become flooded with its artwork. Some communities have even banned the AI art. But the AI art Read More ›


How We Know the Mind Is About Information, Not Matter or Energy

The computer program’s world is one of binary 0 or 1 decisions but the physical world is one of many different shades of more or less

It’s really hard to picture the “mind,” isn’t it? You might think of wavy ghosts, or a spectral light. But nothing very definite. The brain, on the other hand, is very easy to visualize. Images and videos are just a Google away. That’s why it’s easy to assume that our brains are the entities that do our thinking for us. The brain is not only easy to image, it is physical. We can (in theory) touch it. Poke it. The brain even runs off electricity, just like your computer. But what makes a computer run Windows? It isn’t just the transistors on silicon wafers. It isn’t just the electricity coursing through the circuits. Windows itself is a ghostly being, like Read More ›


How AI Neural Networks Show That the Mind Is Not the Brain

A series of simple diagrams shows that, while AI learns faster than the human brain, the human mind tackles problems that stump AI

Recently, I’ve been arguing (here and here, for example) that we can use artificial neural networks (ANNs) to prove that the mind is not the brain. To recap, here is the logic of my argument:

Premise A: neural networks can learn better than the brain
Premise B: the human mind can learn better than a neural network
Conclusion: the human mind can learn better than the brain, therefore it is not the brain

This means if we can conclusively show the human mind can learn better than a neural network, then the mind is not the brain. For Premise A, I’ve argued that the differentiable neural network is a superior learning model compared to the brain neuron’s “all or nothing principle”. The Read More ›


Can Computer Neural Networks Learn Better Than Human Neurons?

They can and do; when artificial intelligence programmers stopped trying to copy the human neuron, they made much better progress

Neural networks are all the rage in computing these days. Many engineers think that, with enough computer power and fancy tweaks, they will become as smart as people. Recent successes playing games and predicting protein folds pour gasoline on the AI fire. We could be on the edge of the mystical Singularity, when humans and computers will merge and we become immortal gods. Or not. Let’s wind the clock back to the beginning of neural networks. In computer science terms, they are actually a very old technology. The earliest version, called a perceptron (a single-layer neural network), was invented in the late 1950s, inspired by McCulloch and Pitts’s early model of brain neurons. But the perceptron was ignored for decades because Read More ›


Artificial Neural Networks Can Show That the Mind Isn’t the Brain

Because artificial neural networks are a better version of the brain, whatever neural networks cannot do, the brain cannot do.

What is the human mind? AI pioneer Marvin Minsky (1927–2016) said in 1987 that essentially “Minds are what brains do.” That is, the mind is the result of electrical waves cycling through the brain, as neurons spike and synapses transmit signals. But is that true? Can we test this idea? We can indeed, using artificial neural networks. One of the most popular approaches to artificial intelligence is artificial neural networks. These networks, inspired by an early model of how neurons fire (the McCulloch–Pitts model), consist of nodes, where each node is similar to a neuron. A node receives signals and then sends them to its linked nodes based on an activation function. There are, of course, differences between neural networks Read More ›
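The node behavior described above, a weighted sum of incoming signals passed through an activation function, can be sketched as a single artificial "neuron." The sigmoid activation and the specific weights below are illustrative choices, not anything specific to the networks discussed in the article:

```python
import math

def sigmoid(x):
    # A common activation function: squashes any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def node(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, passed through an activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Two incoming signals, two connection weights, one bias term.
print(node([1.0, 0.5], [0.8, -0.2], 0.1))
```

A full network is just many such nodes wired together in layers, with each node's output feeding the inputs of the nodes linked to it.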


Google’s Chatbot LaMDA Sounds Human Because — Read the Manual…

What would you expect LaMDA to sound like? Whales? ET? I propose a test: “Human until PROVEN otherwise”

Recently Google employee Blake Lemoine caused a media storm over the LaMDA chatbot he was working on, which he claims is sentient (it feels things like a human being). A heavily edited transcript has been released that shows him and a collaborator having a very coherent conversation with LaMDA. Many have been quick to dismiss his claims about the chatbot’s sentience, accusing the Googler of falling prey to the Eliza effect: anthropomorphizing a probability distribution over words (thus believing that he is talking to a human). The accusation is that Lemoine generated a large number of dialogs, then edited down the exchange to create a coherent narrative. Google placed Lemoine on leave, technically for breaking the non-disclosure agreement (NDA) that Read More ›


The Salem Hypothesis: Why Engineers View the Universe as Designed

Not because we're terrorists or black-and-white thinkers, as claimed. A simple computer program shows the limits of creating information by chance

In the fun-filled world of internet debate between creationists and evolutionists, we encounter the Salem Hypothesis: Creationists tend to be engineers. Many explanations have been offered for this phenomenon (apparently named after Talk Origins contributor Bruce Salem):

– engineers are closet terrorists
– creationists are trying to protect their fragile beliefs
– a desire to exert authority
– engineers like simple black-and-white answers

There’s a reason internet forums are not known for flattering character analysis! Anyhow, the true reason for the Salem Hypothesis is summed up in this graph. Read on to find out why. Engineers are more likely to be creationists because they are familiar with what it takes to design complex things for specific tasks. Which is exactly what we Read More ›


Dawkins’ Weasel Program vs the Information Life Acquires En Route

To demonstrate what is wrong with fully naturalist assumptions like those of Richard Dawkins’ Weasel program, I developed Weasel Libs, modeled on Mad Libs

In his famous Weasel program zoologist and philosopher Richard Dawkins shows that the simple combination of random mutation and natural selection (Darwinian evolution) can produce the English sentence, “Methinks it is like a weasel”, in a short time period. The point of his program is to demonstrate that evolution can generate the complex, pre-specified DNA sequences we find in biology before the heat death of the universe. His argument sounds persuasive because both English sentences and DNA sequences are made up of symbols. Both can be randomly modified anywhere, and by cumulative selection, they can plausibly adapt to the environment in reasonably short order. Writers in English can learn to pen best-selling novels through trial and error and audience feedback. Read More ›
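For readers who have not seen it, the Weasel program described above is short enough to sketch. This is a minimal reconstruction of the cumulative-selection scheme Dawkins described, not his original code; the mutation rate and population size here are illustrative assumptions:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(candidate):
    # Fitness = number of characters matching the pre-specified target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def weasel(pop_size=100, seed=0):
    """Cumulative selection: keep the best child each generation until the target appears."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        # Including the parent guarantees fitness never decreases.
        pool = [parent] + [mutate(parent) for _ in range(pop_size)]
        parent = max(pool, key=score)
    return generation

print("Reached target in", weasel(), "generations")
```

The program converges quickly precisely because the target sentence is specified in advance and selection is measured against it, which is the feature the article goes on to question.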


Why GPT-3 Can’t Understand Anything

Without long-term memory, human conversation becomes impossible

There is a mathematical reason why machine learning systems like GPT-3 are incapable of understanding. The reason comes down to the fact that machine learning has no memory. It is just probabilistic associations. If there is only a 10% chance of going off topic, then after just seven exchanges there is a greater than 50% chance the machine learning model has gone off topic. The problem is that when prediction is just based on probabilities, the likelihood of making a misprediction increases exponentially. A long-term memory is needed in order to maintain long-term coherence. GPT-3 is essentially a sophisticated Markov process. What is important about the Markov process is that the next step in the process is only dependent on Read More ›


AI Companies Are Massively Faking the Loophole in the Turing Test

I propose the Turing Test be further strengthened by presuming a chatbot is human until proven otherwise

Computer pioneer Alan Turing was posed the question: how do we know if an AI has human-like intelligence? He offered his famous Turing test: If human judges cannot differentiate the AI from a human, then it has human-like intelligence. His test has spawned a number of competitions in which participants try to fool judges into thinking that a chatbot is really a human. One of the best-known chatbots was Eugene Goostman, which fooled the judges into thinking it was a 13-year-old boy — mostly by indirection and other distraction techniques to avoid the sort of in-depth questioning that shows that a chatbot lacks understanding. However, there is a loophole in this test. Can you spot the loophole? What better Read More ›