Santa Fe Prof Dissects End-of-World Super-AI Claims
There seems to be little communication, she notes, between people concerned about sci-fi AI risks and people concerned about predictable everyday risks.

Santa Fe Institute professor of complexity Melanie Mitchell takes issue, in a gentle way, with those who warn that superintelligent machines (the concern of AI alignment) will destroy us all. In one scenario, for example, Nick Bostrom of Oxford's Future of Humanity Institute imagined a super AI that, told to make paper clips, uses up the world's resources in doing so. Her comment: "To many outside these specific communities, AI alignment looks something like a religion — one with revered leaders, unquestioned doctrine and devoted disciples fighting a potentially all-powerful enemy (unaligned superintelligent AI)." Indeed, the computer scientist and blogger Scott Aaronson recently noted that there are now "Orthodox" and "Reform" branches of the AI alignment movement.