Mind Matters Natural and Artificial Intelligence News and Analysis
Cute and little robot helper with artificial intelligence raising hand. Generative AI
Image licensed via Adobe Stock

Google Gemini Presents a Past That Never Happened

You can't trust a bot to give you a history lesson, turns out.

Google recently rolled out text-to-image generation in its AI chatbot Gemini. Originally hyped and hailed, the feature swiftly prompted backlash and critique when…well, when people started doing crazy things with it and when the AI seemingly refused to generate images of Caucasians.

Among the worst travesties Gemini committed was generating images of Black people dressed in Nazi uniforms. Notorious for its racist ideology, the Third Reich would have never included Black men in its ranks, of course, but can you blame a bot for not knowing basic historical context? Well, maybe someone should be held accountable.

In addition, Gemini produced images of female 19th-century senators and depicted the founding fathers as racial minorities, without even being prompted to alter those immutable characteristics. Nellie Bowles, a writer for The Free Press, put it well in her Friday roundup. She notes that while she wishes the 19th-century Congress had looked like Gemini's illusion of diversity and gender equality, it simply didn't. The distortion rewrites history and trivializes the very real obstacles women and racial minorities overcame. Bowles writes,

Wouldn’t you want to show past injustice accurately? Like, I wish this was true of America’s historical senate, but it’s not and I’m not sure I want Google to confidently tell me that it is:

-Nellie Bowles

Google apologized for the mishaps and frantically began to backtrack, promising limits and bias filters. However, as Chris Gilliard writes, there is a “deeper problem”: AI is not a mind, and so it can never understand what it generates. Even criticizing AI as if it were responsible for its own absurdities anthropomorphizes the technology, subconsciously leading us to treat it as a sentient entity. Many people use AI image generators not to get accurate depictions of human beings or history, but to generate shock and awe, or to transgress taboos. So long as that remains the case, safeguards are needed, but they will only go so far in protecting us from AI nonsense. Gilliard writes,

Google may try to inject Gemini with what I would call “synthetic inclusion,” a technological sheen of diversity, but neither the bot nor the data it’s trained on will ever comprehensively mirror reality. Instead, it translates a set of priorities established by product developers into code that engages users — and it does not view them all equally.

-Chris Gilliard, “The Deeper Problem With Google’s Racially Diverse Nazis,” The Atlantic

As author and journalist Walter Kirn tweeted recently: “Gemini AI is inventing damaging stories about people and figures I know. It is an automated false-witness weapon.”

As the fake and real continue to meld in the online sphere, people may become desperate for simple accuracy and truth. Maybe the outcry for truth will lead to a big reckoning for these companies hyping AI image generators. In the meantime, we will have to be more intentional about safeguarding truth from the lies and keep insisting that we understand the difference.


Peter Biles

Writer and Editor, Center for Science & Culture
Peter Biles graduated from Wheaton College in Illinois and went on to receive a Master of Fine Arts in Creative Writing from Seattle Pacific University. He is the author of Hillbilly Hymn and Keep and Other Stories and has also written stories and essays for a variety of publications. He was born and raised in Ada, Oklahoma and serves as Managing Editor of Mind Matters.