Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: bias


Can Computer Algorithms Be Free of Bias?

Bias is inevitable, but it should be recognized and admitted

Gregory Coppola’s revelations about Google’s politically biased search engine shone a spotlight on how algorithms are written.

Read More ›

AI: Think About Ethics Before Trouble Arises

A machine learning specialist reflects on Micah 6:8 as a guide to developing ethics for the rapidly growing profession
To love mercy sometimes means to give up efficiency. It could mean losing a few points of model accuracy by refusing to take into account features that invade privacy or are proxies for race, leading to discriminatory model behavior. But that's OK. The merciful are willing to give up some of their rights and advantages so they can help others.

Read More ›

Did AI teach itself to “not like” women?

No, the program did not teach itself anything. But the situation taught the company something important about what we can safely automate.

Back in 2014, a "holy grail" machine learning program developed in Scotland was supposed to sift through online resumes, using a one-to-five-star rating system to cull the top five of 100, saving time and money. Within a year, a problem surfaced: it was "not rating candidates for software developer jobs and other technical posts in a gender-neutral way."

Read More ›

GIGO alert: AI can be racist and sexist, researchers complain

Can the bias problem be addressed? Yes, but usually after someone gets upset about a specific instance.

From James Zou and Londa Schiebinger at Nature:

"When Google Translate converts news articles written in Spanish into English, phrases referring to women often become 'he said' or 'he wrote'. Software designed to warn people using Nikon cameras when the person they are photographing seems to be blinking tends to interpret Asians as always blinking. Word embedding, a popular algorithm used to process and analyse large amounts of natural-language data, characterizes European American names as pleasant and African American ones as unpleasant."

Now where, we wonder, would a mathematical formula have learned that? Maybe it was listening to the wrong instructions back when it was just a tiny bit? Seriously, machine learning, we are told, depends on absorbing datasets of

Read More ›
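The word-embedding finding above is worth making concrete. Researchers typically detect this kind of bias by comparing how close different words sit to an "attribute" word (such as "pleasant") in the embedding space, using cosine similarity. The sketch below is a toy illustration only: the three-dimensional vectors and the names `name_a` and `name_b` are hand-made stand-ins invented for this example, not real embeddings from any trained model.

```python
import math

def cosine(u, v):
    """Cosine similarity: how closely two vectors point in the same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical tiny "embeddings" in which the training text has pushed
# one name close to the "pleasant" direction and the other away from it.
embeddings = {
    "pleasant": [0.9, 0.1, 0.0],
    "name_a":   [0.8, 0.2, 0.1],  # lands near "pleasant"
    "name_b":   [0.1, 0.9, 0.2],  # lands far from "pleasant"
}

bias_gap = (cosine(embeddings["name_a"], embeddings["pleasant"])
            - cosine(embeddings["name_b"], embeddings["pleasant"]))
print(f"bias gap: {bias_gap:.3f}")  # positive: "name_a" reads as more "pleasant"
```

The point of the exercise: no one told the formula to prefer one name. The geometry simply reflects which words co-occurred in the training text, which is exactly how biased data becomes biased output.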