Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Melanie Mitchell


Santa Fe Prof Dissects End-of-World Super-AI Claims

There seems to be little communication, she notes, between people concerned about sci-fi AI risks and people concerned about predictable everyday risks

Santa Fe Institute professor of complexity Melanie Mitchell takes issue — in a gentle way — with those who warn that superintelligent machines (the concern of AI alignment) could destroy us all. In one well-known scenario, for example, Nick Bostrom of Oxford’s Future of Humanity Institute imagined a super AI that, told to make paper clips, might use up the world’s resources in doing so. Her comment: To many outside these specific communities, AI alignment looks something like a religion — one with revered leaders, unquestioned doctrine and devoted disciples fighting a potentially all-powerful enemy (unaligned superintelligent AI). Indeed, the computer scientist and blogger Scott Aaronson recently noted that there are now “Orthodox” and “Reform” branches of the AI alignment Read More ›


Computer Prof: We Can’t Give Machines Understanding of the World

Not now, anyway. Melanie Mitchell of the Santa Fe Institute finds that ever larger computers are learning to sound more sophisticated but have no intrinsic knowledge

Last December, computer science prof Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans (2019), let us in on a little-publicized fact: despite the greatly increased capacity of the vast new neural networks, they are no closer to actually understanding what they read: The crux of the problem, in my view, is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding. Consider what it means to understand “The sports car passed the mail truck because it was going slower.” You need to know what sports cars and mail trucks are, that cars can “pass” one another, and, at an even more basic level, that vehicles are objects that Read More ›


Researcher: Fear of AI Caused by Four Common Misconceptions

AI isn't panning out the way so many popular media articles predicted and there are reasons for that, a computer science prof says

As part of his Demystifying AI series, software engineer Ben Dickson, founder of TechTalks, tackles some of the most serious misunderstandings he thinks the public has about AI. For example, while AI can win at board games, where the map is the territory, “some of the early goals of AI like housekeeper robots and self-driving cars continue to recede as we approach them.” So why isn’t AI panning out the way so many popular media articles predicted? Dickson points to a recent article at arXiv by professor Melanie Mitchell of the Santa Fe Institute, author of Artificial Intelligence: A Guide for Thinking Humans (2019): The year 2020 was supposed to herald the arrival of self-driving cars. Five years earlier, a Read More ›


Machines Just Don’t Do Meaning

And that, says a computer science prof, is a key reason they won't compete with humans

Human understanding is grounded, as Prof. Mitchell says, in common-sense knowledge about how the world works and why things matter. Researchers have not been able to transfer this understanding to AI, yet she worries that many teams are moving ahead with projects that require such ability for safety. Read More ›