Image: A chef cooks food with fire in a restaurant kitchen. (Adobe Stock)

If AI Is Like Fire, Let’s Not Get Left With Its Ashes

In a new book, Georgetown University researchers examine what can go right and wrong with adapting our culture to artificial intelligence

Ben Buchanan and Andrew Imbrie, Georgetown University researchers on loan to the U.S. government, think that the invention of artificial intelligence is like the invention of fire. It can bring great benefits, but it also carries great and unavoidable risks that flow from the very power that makes it useful.

They are honest about AI’s failures when it is left unattended. In The New Fire: War, Peace, and Democracy in the Age of AI (MIT Press, 2022), they offer some examples from everyday life that certainly give pause:

Despite its extraordinary power, AI is far from perfect. Bias insidiously sneaks into AI systems, especially when they learn from data sets of human decisions. The real-world consequences can be severe. Amazon had to scrap a resume screening tool after it learned to systematically discriminate against women. Another algorithm regularly denied healthcare to people of color. Similarly, facial recognition technologies perform far worse for diverse groups; in the United States, police have arrested innocent Black Americans solely on the basis of an incorrect facial recognition match.

Nor can AI explain how it reaches its conclusions. Like a lazy middle school student, even when the machine gets the right answer, it rarely shows its work, making it harder for humans to trust its methods. Worse still, this opacity can hide the instances when AI systems optimize for a goal that is not quite what their human creators had in mind. For example, one system designed to detect pneumonia in chest X-rays discovered that X-rays from one hospital were more likely than others to exhibit pneumonia because that hospital usually had sicker patients. The machine learned to look for the X-ray’s hospital of origin rather than at the X-ray itself. Another system was designed to identify cancerous skin lesions. It trained on a set of images from dermatologists who often used a ruler to measure lesions they thought might be cancerous. The AI system recognized that the presence of a ruler correlated with the presence of cancer, so it started checking if a ruler was present rather than focusing on the characteristics of the lesion.

In both of these cases, alert human operators noticed the failures before the systems were deployed, but it is impossible to know how many cases like these have gone undetected and how many more will go undetected in the future.

Next Big Idea Club, “Forget robots taking jobs, these researchers compare AI to fire. Here’s how we need to tend it” at Fast Company (July 15, 2022)
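The ruler anecdote above is a textbook case of what machine learning researchers call “shortcut learning.” As a rough illustration, here is a minimal Python sketch (using scikit-learn and made-up synthetic data, not the actual systems described in the book): a classifier is trained on data where a spurious “ruler present” flag tracks the cancer label almost perfectly, then evaluated on data where the flag is uninformative.

```python
# Minimal sketch of shortcut learning, with hypothetical synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

def make_data(ruler_correlation):
    # True label: whether the lesion is cancerous.
    y = rng.integers(0, 2, size=n)
    # Weak but genuine signal derived from the lesion itself.
    lesion_signal = y + rng.normal(scale=2.0, size=n)
    # Spurious feature: dermatologists photographed suspicious lesions with
    # a ruler, so "ruler present" tracks the label at the given rate.
    ruler = np.where(rng.random(n) < ruler_correlation, y, 1 - y)
    return np.column_stack([lesion_signal, ruler]), y

# Training set: the ruler almost perfectly predicts cancer, as in the clinic photos.
X_train, y_train = make_data(ruler_correlation=0.95)
# Deployment set: ruler presence is unrelated to the label.
X_test, y_test = make_data(ruler_correlation=0.5)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # high -- the shortcut works here
print("test accuracy: ", model.score(X_test, y_test))    # collapses once the ruler is uninformative
print("weights [lesion, ruler]:", model.coef_[0])         # the ruler weight dominates
```

The model’s training accuracy looks excellent because the shortcut works there; its accuracy collapses once the ruler stops correlating with the label. Catching that gap, as the book’s examples show, still takes alert human operators.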

The authors wisely oppose any proposal to eliminate the “human in the loop.”


You may also wish to read: Soylent AI is… people! OpenAI advertises itself as AI-powered, but at the end of the day, the system is human-powered. When I asked if my questions were being answered by humans or AI, GPT-3 responded, “These questions are being answered by humans.” (Eric Holloway)


