Mind Matters Natural and Artificial Intelligence News and Analysis

Eric Holloway


Google’s Chatbot LaMDA Sounds Human Because — Read the Manual…

What would you expect LaMDA to sound like? Whales? ET? I propose a test: “Human until PROVEN otherwise”

Recently Google employee Blake Lemoine caused a media storm over the LaMDA chatbot he was working on, which he claims is sentient (that is, it feels things the way a human being does). A heavily edited transcript has been released that shows him and a collaborator having a very coherent conversation with LaMDA. Many have been quick to dismiss his claims about the chatbot’s sentience, accusing the Googler of falling prey to the Eliza effect: anthropomorphizing a probability distribution over words (and thus believing he is talking to a human). The accusation is that Lemoine generated a large number of dialogs, then edited the exchanges down to create a coherent narrative. Google placed Lemoine on leave, technically for breaking the non-disclosure agreement (NDA) that Read More ›


The Salem Hypothesis: Why Engineers View the Universe as Designed

Not because we're terrorists or black-and-white thinkers, as claimed. A simple computer program shows the limits of creating information by chance

In the fun-filled world of internet debate between creationists and evolutionists, we encounter the Salem Hypothesis: creationists tend to be engineers. Many explanations have been offered for this phenomenon (apparently named after Talk Origins contributor Bruce Salem): engineers are closet terrorists; creationists are trying to protect their fragile beliefs; a desire to exert authority; engineers like simple, black-and-white answers. There’s a reason internet forums are not known for flattering character analysis! Anyhow, the true reason for the Salem Hypothesis is summed up in this graph. Read on to find out why. Engineers are more likely to be creationists because they are familiar with what it takes to design complex things for specific tasks. Which is exactly what we Read More ›


Dawkins’ Weasel Program vs the Information Life Acquires En Route

To demonstrate what is wrong with fully naturalist assumptions like those of Richard Dawkins’ Weasel program, I developed Weasel Libs, modeled on Mad Libs

In his famous Weasel program, zoologist and philosopher Richard Dawkins shows that the simple combination of random mutation and natural selection (Darwinian evolution) can produce the English sentence “Methinks it is like a weasel” in a short time period. The point of his program is to demonstrate that evolution can generate the complex, pre-specified DNA sequences we find in biology before the heat death of the universe. His argument sounds persuasive because both English sentences and DNA sequences are made up of symbols. Both can be randomly modified anywhere, and by cumulative selection, they can plausibly adapt to the environment in reasonably short order. Writers in English can learn to pen best-selling novels through trial and error and audience feedback. Read More ›


Why GPT-3 Can’t Understand Anything

Without long-term memory, human conversation becomes impossible

There is a mathematical reason why machine learning systems like GPT-3 are incapable of understanding. The reason comes down to the fact that machine learning has no memory; it is just probabilistic associations. If there is only a 10% chance of going off topic at each exchange, then after just seven exchanges there is a greater than 50% chance that the machine learning model has gone off topic. The problem is that when prediction is based purely on probabilities, the likelihood of a misprediction compounds exponentially with each exchange. A long-term memory is needed in order to maintain long-term coherence. GPT-3 is essentially a sophisticated Markov process. What is important about the Markov process is that the next step in the process is dependent only on Read More ›
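The arithmetic behind the seven-exchange figure can be sketched directly. This is a minimal illustration; the 10% per-exchange drift rate is the article’s assumption, not a measured property of GPT-3:

```python
# Probability that a memoryless model has drifted off topic after n
# exchanges, assuming an independent chance of drifting at each exchange.
def p_off_topic(p_drift: float, exchanges: int) -> float:
    return 1 - (1 - p_drift) ** exchanges

# With a 10% drift chance per exchange, seven exchanges pass even odds:
print(round(p_off_topic(0.10, 7), 3))  # 0.522
```

After six exchanges the chance is still under 50% (about 0.469); the seventh exchange tips it past even odds, which is the article’s point about coherence decaying without memory.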


AI Companies Are Massively Faking the Loophole in the Turing Test

I propose the Turing Test be further strengthened by presuming a chatbot is human until proven otherwise

Computer pioneer Alan Turing was posed the question: How do we know if an AI has human-like intelligence? He offered his famous Turing test: if human judges cannot differentiate the AI from a human, then it has human-like intelligence. His test has spawned a number of competitions in which participants try to fool judges into thinking that a chatbot is really a human. One of the best-known chatbots was Eugene Goostman, which fooled judges into thinking it was a 13-year-old boy — mostly by using indirection and other distraction techniques to avoid the sort of in-depth questioning that shows a chatbot lacks understanding. However, there is a loophole in this test. Can you spot the loophole? What better Read More ›


Does Information Weigh Something After All? What If It Does?

At the rate we create information today, one physicist computes that in 350 years, the energy will outweigh the atoms of Earth

In the 1960s, IBM researcher Rolf Landauer (1927–1999) observed that if the logical information in a computational system decreases, then the physical entropy in the system must increase (Landauer’s Principle). This conclusion follows from the principle that the entropy of a closed system can never decrease. A decrease in logical information corresponds to a decrease in entropy, so for the total entropy not to decrease, the physical system itself must increase in entropy when the information decreases. This increase in entropy results in the emission of heat and a reduction of the energy in the system. Now Melvin Vopson, a physicist at the University of Portsmouth, has taken Landauer’s principle to the next logical step. He Read More ›
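The standard quantitative form of Landauer’s principle puts a floor on the heat released: erasing one bit dissipates at least k_B·T·ln 2. A minimal sketch; the 300 K room-temperature figure is an illustrative choice, not taken from the article:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in SI since 2019)

def landauer_limit_joules(temperature_kelvin: float) -> float:
    """Minimum heat dissipated by erasing one bit of information."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K), erasing one bit costs about 2.87e-21 J:
print(landauer_limit_joules(300.0))
```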


Soylent AI is…people!

OpenAI advertises itself as AI-powered, but at the end of the day, the system is human-powered

In the sci-fi movie “Soylent Green,” the big reveal is that a food called soylent green is actually made from human beings, the catchphrase being “soylent green is people.” Likewise, as I discovered from a recent exchange with OpenAI’s GPT-3, “soylent AI is people.” GPT-3 is the product of the AI company OpenAI. The company made headlines in 2019 with the claim that its AI model was too dangerous to release publicly. OpenAI is not a mere research company. While its publicly stated goal is fairly modest – “Aligning AI systems with human intent” – its CEO Sam Altman has bigger plans. He left his very successful role as president of Y Combinator, one of Silicon Valley’s most successful venture capital Read More ›


Dawkins’ Dubious Double Weasel and the Combinatorial Cataclysm

Dawkins has successfully reduced a combinatorial explosion to a manageable problem...or has he?

In his book The Blind Watchmaker, Richard Dawkins proposed a famous (and infamous) computer program to demonstrate the power of cumulative selection, known as the “Weasel program.” The program demonstrates that by varying a single letter at a time, it is possible to rapidly evolve a coherent English sentence from a string of gibberish. The program works as follows: First, a sequence of characters is randomly assembled by drawing from the 26 English letters and the space. Then, one character is randomly reassigned. The resulting sequence is compared to a phrase from Hamlet, a quote uttered by Hamlet: “methinks it is like a weasel.” For every character that matches, a point is scored. If the new sequence Read More ›
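The single-character variant described above can be sketched in a few lines of Python. This is a minimal reconstruction for illustration; Dawkins’s original bred generations of mutated copies, and the acceptance rule here (keep the mutant if it scores no worse) is an assumption filling in where the teaser cuts off:

```python
import random

TARGET = "methinks it is like a weasel"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "  # the 26 English letters plus the space

def score(candidate: str) -> int:
    # One point for every character that matches the target phrase.
    return sum(a == b for a, b in zip(candidate, TARGET))

def weasel(seed: int = 0) -> int:
    """Evolve the target from gibberish; returns the number of mutations tried."""
    rng = random.Random(seed)
    current = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
    steps = 0
    while current != TARGET:
        i = rng.randrange(len(TARGET))  # randomly reassign one character...
        mutant = current[:i] + rng.choice(ALPHABET) + current[i + 1:]
        if score(mutant) >= score(current):  # ...and keep it if it scores no worse
            current = mutant
        steps += 1
    return steps

print(weasel(seed=42))  # typically converges in thousands of mutations, not eons
```

Because improvements are locked in by cumulative selection, the search converges quickly, whereas a blind search over all 27^28 possible strings would never finish.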


Can Computers — and People — Learn To Think From the Bottom Up?

That’s the big promise made in a recent article at Aeon

Tufts University biologist Michael Levin and Columbia University neuroscientist Rafael Yuste have an ambitious project in hand: To explain how evolution “‘hacked’ its way to intelligence from the bottom up,” that is, from nothing. They base their thesis on computer science: This is intelligence in action: the ability to reach a particular goal or solve a problem by undertaking new steps in the face of changing circumstances. It’s evident not just in intelligent people and mammals and birds and cephalopods, but also cells and tissues, individual neurons and networks of neurons, viruses, ribosomes and RNA fragments, down to motor proteins and molecular networks. Across all these scales, living things solve problems and achieve goals by flexibly navigating different spaces – Read More ›


Is AlphaZero Actually Superior to the Human Mind?

Comparing AI and the human mind is completely apples and oranges

The Google-backed AI company DeepMind made headlines in March 2016 when its AlphaGo game AI engine was able to defeat Lee Sedol, one of the top Go players in the world. DeepMind followed up this great achievement with the AlphaZero engine in 2017, which made the remarkable achievement of soundly beating AlphaGo in Go as well as one of the world’s best chess engines in chess. The interesting difference between AlphaGo and AlphaZero is that AlphaGo uses databases of top human games for learning, while AlphaZero learns only by playing against itself. Using the same AI engine to dominate two different games, while also discarding reliance on human games, suggests that DeepMind has found an algorithm that is intrinsically superior Read More ›


“Slightly” Conscious Computers Could Doom Atheism

That might sound surprising but let’s follow the logic of the “consciousness” claim through to its inevitable conclusion

Recently, Ilya Sutskever, co-founder of OpenAI, proposed that artificial intelligence (AI) may currently be “slightly” conscious. His claim was probably a reference to the GPT-3 AI, which can generate text from a prompt. I’ve played with a couple of the linguistic neural networks a bit, and you can try them out here. Some of the output is quirky, which could be mistaken for personality and make the algorithm appear conscious. The algorithm also generates emotional statements that can evoke empathy in a human user of the system. Just as kids make believe their dolls are alive when they develop an emotional bond with their toys, the algorithm’s text evokes empathy in the human user. It can make us feel a Read More ›


Chalmers and Penrose Clash Over “Conscious Computers”

Philosopher Chalmers thinks computers could be conscious but physicist Penrose says no

Two authors I’ve been reading recently are Roger Penrose and David Chalmers. Penrose is a physics Nobel laureate who has stoked controversy by claiming in The Emperor’s New Mind: Concerning Computers, Minds and The Laws of Physics (1989) that the mind can do things beyond the ability of computers. Chalmers is a philosopher of mind who claims in The Conscious Mind: In Search of a Fundamental Theory (1996) that consciousness cannot be reduced to physical processes. Both thinkers are well respected in their fields, even though they articulate positions that imply that the mind’s operation is beyond current science. At the same time, they believe that there is a way to see the mind as part of nature (that is, Read More ›


Are the Brain Cells in a Dish That Learned Pong Conscious?

Human-derived organoids learned faster than AI and always outperformed mouse-derived organoids in terms of volley length, raising troubling questions

Recently, science media were abuzz with a remarkable story about minibrains (mouse and human brain cells in a dish) learning to play the video game Pong: Scientists have successfully taught a collection of human brain cells in a petri dish how to play the video game “Pong” — kind of. Researchers at the biotechnology startup Cortical Labs have created “mini-brains” consisting of 800,000 to one million living human brain cells in a petri dish, New Scientist reports. The cells are placed on top of a microelectrode array that analyzes the neural activity. “We think it’s fair to call them cyborg brains,” Brett Kagan, chief scientific officer at Cortical Labs and research lead of the project, told New Scientist. Tony Tran, Read More ›


Is the Age of the Living, Self-Replicating Robot at Hand? No.

Stem cells naturally reproduce themselves. The researchers working with frog stem cells merely found, via algorithms, one configuration that works better

Recently, the sci-fi dream of self-replicating robots has been in the news, thanks to the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard. A recent experiment with frog cells was hailed by news outlets as disparate as CNN (“World’s first living robots can now reproduce, scientists say”) and Daily Wire (“American Universities Create First ‘Self-Replicating Living Robots’”). And it was also debunked by Ars Technica (“Interesting research, but no, we don’t have living, reproducing robots”). So what’s really happening? Self-replication is a very tricky problem of information. To truly self-replicate, an organism must completely copy the information necessary for its function. Seems simple enough, but it introduces a conundrum. For the organism to copy Read More ›
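The conundrum is self-reference: to copy itself, a system’s description must somehow include a description of itself. In software, the cleanest illustration is a quine, a program whose output is exactly its own source code (offered here only as an analogy to biological self-replication, not as a claim from the article):

```python
# A quine: running this two-line program prints exactly its own source.
# The string s serves as both the "genome" and the template that,
# when formatted with itself, reproduces the whole program.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The program cannot simply contain a verbatim copy of itself (that would require an infinite regress); it contains a template plus a rule for unfolding the template, the same trick John von Neumann identified in his theory of self-reproducing automata.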


Can the “Physical World” Be Wholly Physical? Physical at All?

Nothing ever physically touches anything else in the physical world, yet the effects of objects on each other are a constant occurrence

Sounds like the answer should be “Of course!” But the question may not be as simple as it appears. Let’s wind the clock back to the first century BC, when a Roman poet and philosopher named Lucretius wrote the poem On the Nature of Things. In this poem, Lucretius outlines a philosophy known as Epicureanism in order to demonstrate that the world can be explained without reference to a deity. In the Epicurean philosophy, only three things exist: atoms, the void, and the universe. Consequently, everything we see in the physical world can be reduced to atoms bumping into each other. You may notice that Epicurean philosophy sounds similar to modern-day physics. This is not an accident. Through the influence Read More ›

Cute handmade reborn baby doll

Is GPT-3 the “Reborn Doll” of Artificial Intelligence?

Unlike the reality doll collectors, GPT-3 engineers truly believe that scaling up the model size will suddenly cause GPT-3 to think and talk like a real human

There is a worldwide community that collects “reborn dolls.” These dolls look almost like real babies. Look again, closely, at the featured photo above… They help some collectors cope with the loss of a child. For others, they fulfill a sense of self-image. And yet others just see them as a quirky hobby. Regardless of how closely the baby dolls mimic the appearance of real babies, the dolls will forever remain copies because their external appearances are not generated by biological processes. For the collectors, this is a feature, not a bug. They enjoy the appearance of a baby without the real-life difficulties of raising a real person. As one collector comments, her doll “doesn’t turn into a Read More ›


An Alternative to the Tractable Cognition Thesis

The Tractable Cognition Thesis presents us with a gap in the logic when it comes to NP-Complete problems. How can we solve for it?

The Tractable Cognition Thesis is the proposal that all processes in the brain can be modeled by a polynomial time algorithm. This includes situations where the brain solves problems that are within NP-Complete domains. In the latter situation, it is assumed the brain is only solving a subset of the NP-Complete domain where the problems can be solved with a polynomial time algorithm. With these assumptions in place, the overall implication is that there is a specific polynomial time algorithm that can emulate every process in the brain. However, there is a gap in the logic when it comes to NP-Complete problems. It is well known that humans solve many problems that are in the general case NP-Complete. Route planning, Read More ›
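Route planning is the classic case: in its general form it is the Travelling Salesman Problem, where brute force must examine every tour, a count that explodes factorially. A toy sketch of that combinatorial wall (illustrative only; it says nothing about how brains actually plan routes):

```python
import itertools
import math

def tour_length(points, order):
    """Total length of a closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force_tsp(points):
    """Check every tour starting at point 0; the search grows as (n-1)! tours."""
    n = len(points)
    return min(((0,) + rest for rest in itertools.permutations(range(1, n))),
               key=lambda order: tour_length(points, order))

square = [(0, 0), (0, 1), (1, 1), (1, 0)]
best = brute_force_tsp(square)
print(best, tour_length(square, best))  # the perimeter tour, length 4.0
```

Four cities means checking 6 tours; 10 cities already means 362,880; 20 cities is on the order of 10^17. A polynomial-time algorithm can only cover such a domain if it restricts itself to an easy subset, which is exactly the assumption the thesis leans on.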


Move Over Turing and Lovelace – We Need a Terminator Test

More research should be devoted to a Terminator test to mitigate the threat of an unfriendly, all-powerful artificial intelligence

What we really need is not a Turing test or a Lovelace test, but a Terminator test. Just imagine. If we create an all-powerful artificial intelligence, we cannot assume it will be friendly. We cannot guarantee anything about the AI’s behavior due to something known as Rice’s theorem. Rice’s theorem states that all non-trivial semantic properties of programs are undecidable. Benevolence is certainly a non-trivial semantic property of programs, which means we cannot guarantee benevolent AIs. Therefore, what we really need is a way to distinguish the all-powerful artificial intelligence from human intelligence, so we can protect ourselves from humanized mass murdering robots. Let us think about this in terms of test errors. When we perform a test on some Read More ›
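The flavor of the Rice’s theorem argument can be sketched with a toy diagonalization: given any claimed benevolence-checker, we can build a program that consults the checker and then does the opposite of its verdict, so no checker can be right about every program. This is a stand-in illustration, not a formal proof; `make_contrarian` and the string behaviors are invented for the sketch:

```python
def make_contrarian(claims_benevolent):
    """Build a program that consults the checker and then defies its verdict."""
    def contrarian():
        if claims_benevolent(contrarian):
            return "malicious"   # certified benevolent -> misbehave
        return "benevolent"      # flagged malicious -> behave
    return contrarian

def naive_checker(program):
    return True  # optimistically certifies every program as benevolent

prog = make_contrarian(naive_checker)
print(prog())  # "malicious": the certified-benevolent program misbehaves
```

Whatever verdict the checker returns, the contrarian behaves the other way, which is why a guarantee of benevolence is undecidable in general.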


Why Is Randomness A Good Model, But Not A Good Explanation?

After all, random processes are used all the time to model things in science

The previous article I wrote about randomness proved quite controversial. After all, random processes are used all the time to model things in science. How can I say randomness is not a scientific explanation? Let me first make a distinction between a model and an explanation. A model shows us how some physical thing operates, but it does not account for the cause of the thing. An explanation, on the other hand, tries to identify the cause. But surely if we can effectively model something with randomness, then randomness must also be part of the causal explanation for the thing? Well, not so fast. Let’s look at how we model randomness with computers. Computers themselves are not random in the slightest. Read More ›
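A computer’s “randomness” is a deterministic recipe. A minimal linear congruential generator (the constants are the well-known Numerical Recipes parameters) shows that the same seed always reproduces the same “random” stream:

```python
from itertools import islice

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: entirely determined by its seed."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

run1 = list(islice(lcg(2022), 5))
run2 = list(islice(lcg(2022), 5))
print(run1 == run2)  # True: identical seeds yield identical "random" sequences
```

The output models randomness well enough for simulations, yet nothing random caused it, which is the distinction between a model and an explanation.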


Randomness is Not a Scientific Explanation

We can never know if anything is truly random

It is common in the sciences to claim aspects of our universe are random: In evolution, mutations are random. In quantum physics, the wave collapse is random. In biology, much of the genome is random. In business theory, organizational ecologists state that new ideas are random. There is a general idea that everything new has its origins in randomness. This is because, within our current philosophy of science, the two fundamental causes in our universe boil down to randomness and necessity. Since necessity never creates anything new, by process of elimination the source of newness must be randomness. This is similar to how the ancient Greeks believed the universe originated from chaos. Here’s the irony of the view that whatever is unique Read More ›