Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Ambiguity


AI Is Not Nearly Smart Enough to Morph Into the Terminator

Computer engineering prof Robert J. Marks offers some illustrations in an ITIF think tank interview

In a recent podcast, Walter Bradley Center director Robert J. Marks spoke with Robert D. Atkinson and Jackie Whisman of the prominent AI think tank, the Information Technology and Innovation Foundation, about his recent book, The Case for Killer Robots, a plea for American military brass to see that AI is an inevitable part of modern defense strategies, to be managed rather than avoided. It may be downloaded free here. In this second part (here’s Part 1), the discussion (starting at 6:31) turned to what might happen if AI goes “rogue.” The three parties agreed that AI isn’t nearly smart enough to turn into the Terminator:

Jackie Whisman: Well, opponents of so-called killer robots, of course, argue that the technologies can’t be…


Bingecast: Robert J. Marks on the Limitations of Artificial Intelligence

Robert J. Marks talks with Larry L. Linenschmidt of the Hill Country Institute about the nature and limitations of artificial intelligence from a computer science perspective, including the misattribution of creativity and understanding to computers. Other Larry L. Linenschmidt podcasts from the Hill Country Institute are available at HillCountryInstitute.org. We appreciate the permission of the Hill Country Institute to rebroadcast this…


What Did the Computer Learn in the Chinese Room? Nothing.

Computers don’t “understand” things and they can’t handle ambiguity, says Robert J. Marks

Larry L. Linenschmidt interviews Robert J. Marks on the difference between performing a task and understanding the task, as explained in philosopher John Searle’s famous “Chinese Room” thought experiment.


The Unexpected and the Myth of Creative Computers – Part II

Robert J. Marks talks with Larry L. Linenschmidt of the Hill Country Institute about the misattribution of creativity and understanding to computers. This is Part 2 of 2 parts. Other Larry L. Linenschmidt podcasts from the Hill Country Institute are available at HillCountryInstitute.org. We appreciate the permission of the Hill Country Institute to rebroadcast this podcast on Mind Matters. Show…


AI Is No Match for Ambiguity

Many simple sentences confuse AI but not humans

Because computers lack common sense, they cannot interpret statements that assume a background of general knowledge, as Winograd schema challenges show.
