In this week’s podcast, “Can Computers Think?”, Michael Egnor continued his discussion with philosopher and computer programmer Bernardo Kastrup. As a scientist, Kastrup has worked for the European Organization for Nuclear Research and for Philips Research Laboratories, and has authored many academic papers and books. This week, they look at a big question: “Will computers ever be conscious?” But Egnor brought up an even bigger one: “What happens to our consciousness after we die?” Kastrup is a leading advocate of cosmopsychism, the idea that intelligence did not randomly evolve somehow to help life forms Read More ›
A day does not go by without a news article reporting some amazing breakthrough in artificial intelligence. In fact, progress in AI has been so steady that some futurists, such as Ray Kurzweil, project current trends into the future and anticipate the headlines of tomorrow. Consider some developments from the world of technology:

2004: DARPA sponsors a driverless car grand challenge. Technology developed by the participants eventually allows Google to develop a driverless automobile and modify existing transportation laws.

2005: Honda’s ASIMO humanoid robot is able to walk as fast as a human, delivering trays to customers in a restaurant setting. The same technology is now used in military robots.

2007: Computers learn to play a perfect game of checkers, Read More ›
Last October, Jay Richards, author of The Human Advantage, caught up with Bradley Center director Robert J. Marks, a Baylor University computer engineering prof, at COSM 2019 to ask, what are our cheat-death chances? They were responding to futurist Ray Kurzweil’s heady claims made at the conference that we will merge with computers by 2045 and live on as AI. Richards and Marks reflected on Kurzweil’s claims and the thoughts of the panel responding to them. Here’s a partial transcript: Jay Richards: He’s (Kurzweil, below right) very much a sort of, I’d say, a techno-optimist. And in fact, he sort of thinks we’re going to get brain scans and upload ourselves, whereas the panel… Though I know there was a Read More ›
Computer scientist Selmer Bringsjord recalls, “I remember asking James Moor, the Dartmouth professor who’s written quite a bit on AI: ‘You know, Jim, you really are a true believer in this stuff, but can you tell me how much time you’re willing to give these AI people? I mean, if we give them another thousand years and we still don’t have cognition as I’ve characterized it… are you going to be skeptical now?’ He was, I suppose, as an academic, predictably clever and evasive, but the bottom line is, we don’t have this cognition captured.”
There are some who say immortality is available if we can upload our minds to a computer. This presupposes our minds are computable and can be duplicated by a computer. Are our minds computable? Robert J. Marks and Dr. Selmer Bringsjord discuss consciousness, cognition, and artificial intelligence. Show Notes 00:39 | Introducing Selmer Bringsjord, Professor — Rensselaer Polytechnic Institute (RPI) Read More ›
The Turing test, proposed by Alan Turing in 1950, tests a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human. Many think that Turing’s proposal for gauging intelligence, especially creativity, has proven inadequate. Is the Lovelace test a better alternative? Robert J. Marks and Dr. Selmer Bringsjord discuss the Turing test, the Lovelace test, and machine Read More ›
If Moore’s Law fails, AI may settle in as a part of our lives like the automobile but it will not really be the Ruler of All except for those who choose that lifestyle. Even so, a belief that we will, for example, merge with computers by 2045 (the Singularity) is perhaps immune to the march of mere events. Entire arts and entertainment industries depend on the expression of such beliefs.
Computers are very effective, but they operate with a very limited set of causal abilities. Humans work from an entirely different set of causal abilities. Uploading your brain to a computer is not a question of technology. It can’t work in principle.
The late philosopher Jerry Fodor (1935–2017) said that the reason “we’re all materialists” is that the alternatives seem even worse. Transhumanism, had he lived to see it develop, would have given him pause for further reflection.
In these excerpts from the podcast, Walter Bradley Center director Robert J. Marks talks with John Lennox about AI-enabled immortality, in which, we are told, we won’t need tongues because we can tap right into our taste buds.
In a recent podcast of ID the Future at the COSM conference in Seattle, Catholic University business studies prof Jay Richards looks at Ray Kurzweil’s “sunny” version of strong AI (computers are smarter than us and will take over, but don’t worry), as per his book, The Age of Spiritual Machines (1999), versus the pessimistic version (“Skynet” wakes up). In a discussion with Andrew McDiarmid, Richards argues for the opposite view, namely that human beings possess something beyond the purely material, something even the most powerful computers will never possess. Podcast here. Excerpts: Jay Richards: (08:45) If you are a materialist who thinks we are purely the result of these blind, material processes, you have something to worry about [with computers Read More ›
George Montañez, Assistant Professor of Computer Science at Harvey Mudd College, took issue with Kurzweil’s claim that AlphaGo Zero needed no instructions to beat humans at the game of Go: “For a system like this to work, a human must define the incentive structure, also encoding the assumptions.” The sheer power of a computing system does not cause it to do anything at all.
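Montañez’s point can be seen in even the simplest reinforcement learner. The sketch below is hypothetical and vastly simpler than AlphaGo Zero, but it makes the structure visible: the “self-taught” agent learns only because a human wrote the reward function that defines what counts as success.

```python
import random

def human_defined_reward(arm: int) -> float:
    """The incentive structure. A human chose these payout odds;
    without this function, the agent has nothing to learn."""
    payouts = [0.1, 0.5, 0.9]  # arm 2 is the "winning move"
    return 1.0 if random.random() < payouts[arm] else 0.0

def train(steps: int = 5000, epsilon: float = 0.1, seed: int = 0) -> list:
    """Epsilon-greedy bandit learner: estimates each arm's value
    purely from the rewards the human-defined function hands back."""
    random.seed(seed)
    estimates = [0.0, 0.0, 0.0]  # learned value of each arm
    counts = [0, 0, 0]
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(3)               # explore
        else:
            arm = estimates.index(max(estimates))   # exploit
        reward = human_defined_reward(arm)
        counts[arm] += 1
        # incremental average of observed rewards for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

values = train()
print(values.index(max(values)))  # the agent converges on arm 2
```

Change the payout table and the same agent dutifully converges somewhere else: the learning algorithm is generic, but the goal it pursues was encoded by a person.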
Natural and artificial intelligence can be discussed from a number of viewpoints, including the philosophical, the technical, and the theological. Robert J. Marks and Dr. John Lennox address AI from all of these points of view today on Mind Matters. Show Notes 01:10 | Introducing Dr. John Lennox 03:33 | Bacon and Erdős numbers 05:38 | Should we fear artificial intelligence? 06:10 | Read More ›
Advocates point to the success of Kurzweil’s past predictions as evidence that his Singularity is indeed Near, as his 2005 book predicts, or Nearer, as his forthcoming one (June 2020) does. But questions bubbled to the surface.
He believes that the merger will eventually make the whole universe intelligent. Kurzweil’s critics believe that the superintelligent computers he needs can’t exist. If the critics are correct, we have misread the AI revolution.
Show Notes 00:46 | Introduction; Dr. John Lennox 01:47 | Trans-humanism 03:00 | Kurzweil and Human Enhancement 06:24 | Chips and Biometrics 07:35 | Trans-humanism and Gnosticism 09:25 | Nothing New Under The Sun 11:15 | More Than Exceptional 13:50 | Forthcoming Book 14:30 | Economy and Personal Security
Those proclaiming that exclusive truth lives totally in naturalism are constrained to a sadly narrow view of the world. Some naturalists have put their faith in AI and have founded the AI Church. They may think they are doing something new and cutting edge, but as Solomon said in Ecclesiastes, there is nothing new under the sun (1:9).
Because Moore’s law is an exponential law, the numbers multiply rapidly and we could hit the physical limit quite suddenly. Current indications are that the pace described by Moore’s law has already slowed, or that the law has even ceased to be a true description of the IT industry today. Read More ›
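The suddenness is easy to see with back-of-the-envelope arithmetic. The numbers below are rough assumptions chosen for illustration (a 14 nm starting node, a halving of feature size every three years, a silicon atom at roughly 0.2 nm), not industry data, but any similar figures give the same lesson: exponential shrinking runs into atomic limits within a couple of decades.

```python
def years_to_atomic_scale(start_nm: float = 14.0,
                          atom_nm: float = 0.2,
                          years_per_halving: int = 3) -> int:
    """Count the years until repeated halving of the chip feature
    size reaches the scale of a single silicon atom.
    All parameter values are rough, illustrative assumptions."""
    years = 0
    while start_nm > atom_nm:
        start_nm /= 2            # one exponential halving
        years += years_per_halving
    return years

print(years_to_atomic_scale())   # -> 21: only about two decades of halvings
```

Seven halvings take a 14 nm feature below atomic scale, which is why an exponential trend that looks comfortable today can hit its physical wall with little warning.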