
AGI, the Halting Problem and the Human Mind
Pat Flynn continues the conversation with Dr. Winston Ewert. According to Ewert, Ray Kurzweil’s famed Singularity is not possible because AI cannot create an intelligence greater than itself.
Read More ›
Ewert also notes that recognizing human cognition as, in part, an algorithm raises deeper questions about the origin and sophistication of that algorithm.
Read More ›
In an open-access paper last year at the Journal of Artificial Intelligence Research, a research group concluded that a computer superintelligence, if developed, could not be contained. It would be a HAL 9000 that couldn’t just be turned off. Science writer David Nield explains: The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze (and control). But if we’re unable to comprehend it, it’s impossible to create such a simulation. Rules such as ‘cause no harm to humans’ can’t be set if we don’t understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer Read More ›

In last week’s podcast, “The Chaitin Interview V: Chaitin’s Number,” Walter Bradley Center director Robert J. Marks continued his conversation with mathematician Gregory Chaitin, best known for Chaitin’s unknowable number. One thing they discussed was the usefulness of philosophy, with Chaitin saying that if he had had to do practical work 60 years ago, there wouldn’t be practical research today based on the Omega number. But then they turned to the question of whether the unknowable number could prove Goldbach’s famous Conjecture: This portion begins at 17:17 min. A partial transcript, Show Notes, and Additional Resources follow. Robert J. Marks (pictured): The poster problem for the Turing halting problem is Goldbach’s Conjecture, which says that every even number can be Read More ›
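Marks’s point about Goldbach as the poster problem for the halting problem can be made concrete. The Python sketch below (function names are illustrative, not from the podcast) builds a loop that halts exactly when a counterexample to Goldbach’s Conjecture is found, so deciding whether the unbounded loop halts is as hard as settling the conjecture itself:

```python
def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_holds_for(n):
    # Is the even number n expressible as a sum of two primes?
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_for_counterexample(limit=None):
    # With limit=None, this loop halts if and only if Goldbach's Conjecture
    # is false -- so a general halting decider would settle the conjecture.
    n = 4
    while limit is None or n <= limit:
        if not goldbach_holds_for(n):
            return n  # a counterexample: the unbounded loop would halt here
        n += 2
    return None

print(search_for_counterexample(limit=1000))  # None: no counterexample up to 1000
```

The `limit` parameter is included only so the sketch terminates; the unbounded version is the one whose halting status is open.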

In last week’s podcast, “The Chaitin Interview V: Chaitin’s Number,” Walter Bradley Center director Robert J. Marks continued his conversation with mathematician Gregory Chaitin (best known for Chaitin’s unknowable number) on a variety of things mathematical. Last time, they looked at whether the unknowable number is a constant and how one enterprising team has succeeded in calculating at least the first 64 bits. This time, they look at the vexing halting problem in computer science, first identified by computer pioneer Alan Turing in 1936: https://episodes.castos.com/mindmatters/Mind-Matters-128-Gregory-Chaitin.mp3 This portion begins at 07:16 min. A partial transcript, Show Notes, and Additional Resources follow. Robert J. Marks: Well, here’s a question that I have. I know that the Omega or Chaitin’s number is based Read More ›
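Turing’s 1936 argument can be sketched in a few lines of Python. Suppose, hypothetically, a total function `halts(program)` correctly decided, for every zero-argument program, whether it halts; the construction below (names are illustrative, not from the interview) contradicts any such decider:

```python
# Hypothetical: assume halts(program) is a total, always-correct halting
# decider for zero-argument programs. Build a program that does the
# opposite of whatever the decider predicts about it.

def make_troublemaker(halts):
    def troublemaker():
        if halts(troublemaker):
            while True:      # decider said "halts" -> loop forever
                pass
        return "halted"      # decider said "loops" -> halt immediately
    return troublemaker

# Whatever a claimed decider answers about its own troublemaker, it is
# wrong. For example, a decider that answers "loops" for everything:
t = make_troublemaker(lambda program: False)
print(t())  # "halted", contradicting the decider's answer of "loops"
```

Since every candidate decider is defeated by its own troublemaker, no general halting decider can exist.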

In a recent article, I argued that computers are not, and never can become, smarter. An insightful reader wrote to ask, “What if smartness is defined by speed?” This is a good point. The debate revolves around the definition of “smart,” and if we define “smart” as “fast,” then, since computers are certainly getting faster, they will necessarily become smarter. Such a definition has intuitive appeal. Think of the world’s best chess player versus a beginner. One of the big distinctions is that the chess expert will choose a good move more quickly than a beginner, and in general will play faster than a beginner. As such, play speed demonstrates a certain level of intelligence on the part of the player. Read More ›

When studying computer science, a student invariably learns about the infamous halting problem. The problem states that there is no general algorithm that can determine, for every deterministic computer program, whether that program will halt or not. This struck me as absurd when I first learned of the problem. Surely a whiz kid like myself could design a simple algorithm to track the program’s memory, catch when it started repeating itself, and determine that it would not halt. Once convinced the problem was indeed provably unsolvable, I then thought the problem must show that humans are not computers. This is because it seems intuitive that for every program, if I watch it enough and think about it carefully enough, I should be Read More ›
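The “track the memory and catch repeats” intuition actually works, but only for deterministic machines with a bounded state space; a Turing machine’s unbounded tape is what defeats it. A minimal Python sketch of the bounded case (function names hypothetical):

```python
def decides_halting_for_bounded_state(step, start):
    """Decide halting for a deterministic machine with finitely many states.

    `step` maps a state to the next state, or to None when the machine halts.
    Because the machine is deterministic, a repeated state means it is in a
    cycle and will never halt. A Turing machine's unbounded tape means its
    reachable states need not be finite, so this trick fails there.
    """
    seen = set()
    state = start
    while state is not None:
        if state in seen:
            return False  # cycle detected: it will never halt
        seen.add(state)
        state = step(state)
    return True  # reached a halting state

# Counts down to zero, then halts:
print(decides_halting_for_bounded_state(lambda n: None if n == 0 else n - 1, 5))  # True
# Flips between 0 and 1 forever:
print(decides_halting_for_bounded_state(lambda b: 1 - b, 0))  # False
```

The decider itself always halts on bounded-state machines, because a run either stops or revisits one of finitely many states.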

Some hope that a move to quantum computing—qubits instead of bits, analog instead of digital—will work wonders, including the invention of the true thinking computer. In last week’s podcast, futurist George Gilder and computer engineer Robert J. Marks looked at, among other things, what’s really happening with quantum computing: https://episodes.castos.com/mindmatters/Mind-Matters-108-George-Gilder.mp3 (The quantum computing discussion begins at 15:04.) Robert J. Marks: What’s your take on quantum computing? It seems to me that there’s been glacial progress in the technology. George Gilder (pictured): I think quantum computing is rather like AI, in that it moves the actual problem outside the computational process and gives the illusion that it solved the problem, but it’s really just pushed the problem out. Quantum computing is Read More ›

Communism has been the target of many criticisms. The strongest deal with the mismatch between central planning and individuals’ desires for their lives and with the horrific human rights record of communist nations. Some scholars place the toll in human life due to communism at above 100 million in the 20th century. Those are criticisms of the practicality and ethics of communism. But is it also intrinsically flawed at a fundamental mathematical level? It turns out that the answer is yes. The basic idea behind central planning is this: If the central government makes most decisions that, in a freer society, individuals or small communities would make for themselves, more efficiency will follow—and, as a result, more prosperity. It doesn’t Read More ›

At first, “shirts without stripes” might not seem like much of an issue but it turns out that many important and interesting problems for computers fundamentally reduce to this “halting problem.” And understanding human language is one of these problems.
Read More ›
Even with infinite data, an AI cannot make the same quality inferences as a human can with finite data.
Read More ›
Not only is it valid to ask whether artificial intelligence is impossible but the argument can be pursued on a scientific basis with quantifiable, empirical evidence.
Read More ›
My primary reason for doubting that AI can match human intelligence is that the difference between mind and machine is a difference of kind, not of quantity. Understanding the distinction will help us exploit the abilities of each to their maximum potential.
Read More ›
Recently, researchers discovered that fruit flies use a filter similar to a computer algorithm to assess the odors that help them find fruit, only the flies’ tools are more sophisticated: “When a fly smells an odor, the fly needs to quickly figure out if it has smelled the odor before, to determine if the odor is new and something it should pay attention to,” says Saket Navlakha, an assistant professor in Salk’s Integrative Biology Laboratory. “In computer science, this is an important task called novelty detection.” Computers use a Bloom filter for that. Navlakha, an integrative biologist, explains: When a search engine such as Google crawls the Web, it needs to know whether a website it comes across has previously Read More ›
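For readers unfamiliar with the data structure, here is a minimal Bloom filter sketch in Python. It illustrates only the general technique, not the flies’ circuit or Google’s actual crawler, and all names in it are invented for the example:

```python
import hashlib

class BloomFilter:
    """A fixed bit array plus k hash functions.

    might_contain() answering False means "definitely new"; True means
    "probably seen before" (false positives are possible at a low rate,
    false negatives never happen).
    """

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _indexes(self, item):
        # Derive k well-spread bit positions by salting a cryptographic hash.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def might_contain(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

seen = BloomFilter()
seen.add("https://example.com/")
print(seen.might_contain("https://example.com/"))  # True (never a false negative)
print(seen.might_contain("https://example.org/"))  # very likely False (false positives are rare)
```

A crawler in this role would call `add` on every URL it visits and skip any URL for which `might_contain` returns True, trading a small false-positive rate for a tiny, fixed memory footprint.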

A common objection to Bartlett’s idea is that humans cannot be halting oracles because we can embed any unsolvable math problem as the halting condition for a loop, and a human cannot tell us whether the loop will halt. This objection misses the fact that there is a range of oracles between plain Turing machines and a complete halting oracle.
Read More ›