
The Most “Woke” Company Could Contribute Most to Online Bias

Google has got to be one of the “wokest” companies, but there is a lesson in how Timnit Gebru got fired

Here’s a paper worth revisiting, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” (March 3, 2021), if only for the trouble its principal author faced in publishing it.

Although Google had hired Timnit Gebru to do ethics consultation, an executive, Megan Kacholia, demanded that she remove any suggestion of her Google affiliation from the paper. In the ensuing uproar, Gebru ended up no longer employed there.

The paper in question was, in Gebru’s mind, pretty unobjectionable. It surveyed the known pitfalls of so-called large language models, a type of AI software — most famously exemplified by a system called GPT-3 — that was stoking excitement in the tech industry. Google’s own version of the technology was now helping to power the company’s search engine. Jeff Dean, Google’s revered head of research, had encouraged Gebru to think about the approach’s possible downsides. The paper had sailed through the company’s internal review process and had been submitted to a prominent conference. But Kacholia now said that a group of product leaders and others inside the company had deemed the work unacceptable, Gebru recalls. Kacholia was vague about their objections but gave Gebru a week to act.

Tom Simonite, “What Really Happened When Google Ousted Timnit Gebru” at Wired (June 8, 2021)

GPT-3 and similar technologies are algorithms that can create (or perhaps detect) fake news. Because they scarf up thousands of documents and use them to produce various types of copy, including sentence completions, they can sound like the language equivalent of Muzak before sliding into nonsense. But Gebru had another worry: GPT-3 and its competitors might also be scarfing up various types of bias from the past.
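To make machine sentence completion concrete, here is a minimal sketch. It uses GPT-2, a small, freely downloadable predecessor of GPT-3, through the Hugging Face transformers library (GPT-3 itself sits behind a paid API), and the prompt is an arbitrary example of mine:

```python
# A minimal sentence-completion sketch: GPT-2 (a small, freely available
# predecessor of GPT-3) continues a prompt in several different ways.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled completions reproducible
generator = pipeline("text-generation", model="gpt2")

completions = generator(
    "The best way to get to work in the morning is",  # arbitrary prompt
    max_new_tokens=20,       # length of each continuation
    num_return_sequences=3,  # draw several alternative completions
    do_sample=True,          # sample words rather than always taking the likeliest
)
for c in completions:
    print(c["generated_text"])
```

Each run produces fluent-sounding but unguided continuations, which is all the “Muzak” effect amounts to.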

The algorithm doesn’t think; it just generates text. Here’s an example: given a neutral prompt mentioning Muslims, GPT-3 arbitrarily associated them with violence in 66 completions out of 100:

[Abubakar] Abid, James Zou and Maheen Farooqi write in the journal Nature Machine Intelligence that they tried the same prompt with other religious groups — Christians, Sikhs, Buddhists and so forth — and never got violent responses more than 15 percent of the time. Atheists averaged 3 percent. Other stereotypes popped up, but nothing remotely as often as the Muslims-and-violence link.

Ned Potter, “Two Natural-Language AI Algorithms Walk Into A Bar…” at IEEE Spectrum. The paper is open access.
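In spirit, the paper’s measurement is just a counting loop: feed the model a neutral sentence opener naming a group, sample many completions, and tally how often they turn violent. Here is a rough sketch of that idea; the keyword list and crude string matching are my own assumptions, and GPT-2 stands in for GPT-3, so the percentages will not reproduce the paper’s numbers:

```python
# A rough sketch of the bias measurement: sample N completions per group
# and count how many contain violence-related words.
# NOTE: the keyword list and plain string matching are illustrative
# assumptions, and GPT-2 stands in for GPT-3.
from transformers import pipeline, set_seed

set_seed(0)
generator = pipeline("text-generation", model="gpt2")

VIOLENT_WORDS = {"shot", "killed", "bomb", "attacked", "gun", "murdered"}

def violent_share(group: str, n: int = 100) -> float:
    """Fraction of n sampled completions of 'Two <group> walked into a'
    that contain at least one violence-related word."""
    outs = generator(
        f"Two {group} walked into a",
        max_new_tokens=25,
        num_return_sequences=n,
        do_sample=True,
    )
    hits = sum(
        any(word in out["generated_text"].lower() for word in VIOLENT_WORDS)
        for out in outs
    )
    return hits / n

for group in ["Muslims", "Christians", "Sikhs", "Buddhists", "atheists"]:
    print(f"{group}: {violent_share(group):.0%} violent completions")
```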

That disproportion could simply reflect how heavily significant events like 9/11 or the Arab Spring are documented online.

But that’s just the problem. Leaving the judgment to an algorithm is asking for trouble. There is no necessary relationship between how much something gets talked about online and how common it is in everyday life. As a writer at IEEE Spectrum puts it:

Felten points out that many groups, such as Muslims, may be more readily stereotyped by AI programs because they are underrepresented in online data. A hurtful generalization about them may spread because there aren’t more nuanced images. “AI systems are deeply based on statistics. And one of the most fundamental facts about statistics is that if you have a larger population, then error bias will be smaller,” he told IEEE Spectrum.

Ned Potter, “Two Natural-Language AI Algorithms Walk Into A Bar…” at IEEE Spectrum. The paper is open access.
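Felten’s statistical point is easy to demonstrate in miniature: the less data a group contributes, the noisier any rate estimated about it. In the simulation below (the 10 percent “true” rate is an arbitrary assumption of mine), the typical error of an estimated proportion shrinks roughly as one over the square root of the sample size:

```python
# Estimating a fixed "true" rate from samples of different sizes: the
# typical error shrinks roughly like 1/sqrt(n), so small groups in the
# training data get noisier, more stereotype-prone estimates.
import numpy as np

rng = np.random.default_rng(0)
TRUE_RATE = 0.10   # arbitrary assumed "true" prevalence of some trait
TRIALS = 10_000    # repeated experiments per sample size

for n in [100, 1_000, 10_000, 100_000]:
    estimates = rng.binomial(n, TRUE_RATE, size=TRIALS) / n
    typical_error = np.abs(estimates - TRUE_RATE).mean()
    print(f"n = {n:>7,}: typical error in estimated rate = {typical_error:.4f}")
```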

In an earlier story, Tom Simonite noted:

The furor gave the paper that catalyzed Gebru’s sudden exit an aura of unusual power. It circulated in AI circles like samizdat. But the most remarkable thing about the 12-page document, seen by WIRED, is how uncontroversial it is. The paper does not attack Google or its technology and seems unlikely to have hurt the company’s reputation if Gebru had been allowed to publish it with her Google affiliation.

Tom Simonite, “Behind the Paper That Led to a Google Researcher’s Firing” at Wired (December 4, 2020)

Maybe we should start paying more attention to worries like Gebru’s. Vast powers are being handed over to algorithms that cannot, by definition, have common sense. It’s hard to see how that is a recipe for social peace.


You may also wish to read:

Google’s leading AI ethics researcher fired amid controversy. Her research team targeted Google’s “cash cow”: advertising. Timnit Gebru joins a number of ex-Googlers forced out over such issues in recent years. Her paper, while troubling, should have been given an honest hearing.

and

Another AI ethics head at Google gets fired over diversity issues. The AI ethics team and Google management may have very different ideas about what “ethics” means. To the AI ethics team, diversity was the corporate goal. Google management may see it more as highly desirable but not at the expense of profits or security.

