Mind Matters Reporting on Natural and Artificial Intelligence

Tag: Computation


Jeffrey Shallit, a computer scientist, doesn’t know how computers work

Patterns in computers only have meaning when they are caused by humans programming and using them.

Materialism is a kind of intellectual disability that afflicts even the well-educated. To put it simply, machines don’t and can’t think. Dr. Shallit’s wristwatch doesn’t know what time it is. Dr. Shallit’s iPod doesn’t enjoy the music it plays or listen to his phone calls. His television doesn’t like or dislike movies. And his computer doesn’t, and can’t, think.


The Mind Is the Opposite of a Computer

Matthew Cobb, a materialist, only scratches the surface when he explains why your brain is not a computer

Mental activity always has meaning—every thought is about something. Computation, by contrast, always lacks meaning in itself. A word processing program doesn’t care about the opinion that you’re expressing when you use it. In fact, what makes computation so useful is that it doesn’t have its own meaning. Because the mind always has meaning and computation never does, the mind is the opposite of computation.


Edward Feser on Neurobabble and Remembering the Right Questions

Edward Feser dismantles many of the simplistic readings of contemporary neuroscience

Michael Egnor hosts a captivating conversation with Edward Feser, an Aristotelian philosopher of mind and prolific blogger. Neurobabble, pop-science dismissals of the mind, final causes, abstract thought, and free will each face Feser’s piercing critique.

Dr. Michael Egnor, M.D.

Neurosurgeon Outlines Why Machines Can’t Think

The hallmark of human thought is meaning, and the hallmark of computation is indifference to meaning.
A cornerstone of the development of artificial intelligence is the pervasive assumption that machines can, or will, think. Watson, IBM’s question-answering computer, beat the best Jeopardy champions, and anyone who plays chess has known the humiliation of losing to a chess engine. Does this mean that computers can think as well as (or better than) humans do?