Facial recognition technology monitoring the population on a busy street (generative AI image).
Image Credit: Iryna - Adobe Stock

The Real Threat AI Poses to Us Is Created by Widespread Abuse

In If Anyone Builds It, Yudkowsky and Soares are not really grappling with this problem

In April of this year, teenage Adam Raine died. He died by his own hand. And he did so with the encouragement of ChatGPT.

In February of this year, teenage Elijah Heacock died as well. Again, by his own hand. He was a victim of a sextortion scam using AI-generated deepfake nudes.

Gen AI is becoming pervasive in official systems, too

Sometimes with serious consequences.

In January 2020, Detroit Police wrongfully arrested Robert Williams in front of his wife and young daughters. Flawed facial recognition technology — which relies on, essentially, the same technologies that lie beneath text generative AI — mistakenly pegged Williams as the thief who stole watches from an upscale store. When police actually looked at the video footage, it became clear that Williams was not the thief, and they released him…after he had sat in a jail cell for more than 30 hours. By the way, Williams is black.

In April of this year, facial recognition technology identified Trevis Williams as a “flasher,” despite the fact that he was 8 inches taller and 70 pounds heavier than the suspect, and 12 miles from the crime scene. After he had spent two days in jail, authorities dismissed his case. Trevis is also black. This seems to be a pattern.

In May, Judge Michael Wilner of California imposed $31,000 in sanctions on two law firms for filing AI-generated briefs containing references to non-existent cases. Judge Wilner later wrote:

I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist. That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order.

Banks are using generative AI to evaluate credit risk despite its embedded bias. Police departments use it to write up reports, key documents that virtually determine what happens next.

It is now nearly de rigueur for companies to pass resumes through AI analysis before they encounter human hands. Former Federal Reserve Governor Lael Brainard noted that, by 2019, AI resume analysis had already “developed a bias against female applicants, going so far as to exclude resumes of graduates from two women’s colleges.”

I could go on. Simple web searches bring up many more abuses and misuses.

Welcome to the magical world of our wonderful AI future!

Blinded by faith

These are the sorts of problems that Eliezer Yudkowsky and Nate Soares ignore in If Anyone Builds It, in favor of misguided and unsupported worry over artificial general (or super) intelligence. Not only does no one really understand what these terms mean; no one knows if, or when, such an intelligence will arrive. The authors are like magicians directing our attention one way while the real action is elsewhere. They ask us to worry about a fictional future while ignoring the damage done to real people in the here and now.

I don’t believe Yudkowsky and Soares are purposely deceptive. They really believe all this. Soares has stopped putting money into his 401(k) because, as he says, “I just don’t expect the world to be around.” I believe they’re blinded by faith.

Artificial intelligence is a tool, nothing more. It is not magic (even if we cannot determine precisely which values in the stack of neural networks led to a result). Like any tool, used properly it can be beneficial. Used thoughtlessly, or in blind faith, it can cause — and has caused — significant harm. Adam Raine’s parents live with that fact every day.

It’s not even clear whether generative AI, given the capital costs of immense data centers, can pay for itself. It may all collapse. Or it may not.

How best do we respond, not only to the authors’ worry over imminent destruction by superintelligent AI, but to the pervasive use of AI? Let me suggest a couple of guidelines.

Technology should be for, not against, people

First, appropriate uses of technology make us better people; they don’t eliminate us.

The push for efficiency in all things, combined with the dismissive view of humans held in Silicon Valley, drives the effort to build technology that replaces people. In More Everything Forever (2025), Adam Becker summarizes the attitudes of the tech elite:

They have found a philosophy and ideology about how the world works, almost a religious faith about how the world works. That is based on very little actual information about how the world works. There’s no science to support it and a great deal of science and other things that cut against it, but they have found this way of looking at the world that convinces them that there’s a happy alignment between their own interests and the interests of humanity, and it’s just not the case. So they run around saying that they’re saving the world, like Elon Musk runs around saying that he’s trying to save the world, and what he is actually doing is lining his own pockets.

Rather than allowing ourselves to be duped by the self-serving promises of those running Silicon Valley, we should promote and engage with technology, including AI, in a manner that improves people and their lives.

Speak up for humans

“Familiarity breeds contempt.” The more accustomed we become to something, the less we see its value. That includes the mug in the mirror we face each morning.

Humans are amazing creatures. Unlike AIs, which must be tuned to specific uses, any single human can take on a bewildering array of challenges. We may not be the best, or even an expert, at each one, but we can move seamlessly between countless activities, even those new to us, and can achieve expertise across multiple domains. By contrast, the Valley gets excited when a robot can move items from one place to another.

Generative AI, AI-recognition systems, robots, and AI-assisted driving do feel like science fiction. They accomplish tasks similar to what humans do. But they lack so much that we take for granted.

This morning, for example, I cut my thumb — nothing serious, just a minor nick that bled a bit. I wrapped it in a bandage and continued with my tasks. Why? Because my body can heal itself.

I cut myself while cleaning up from breakfast, the meal that was, in a sense, restoring my power supply. No wires or batteries required.

Valley tech denizens tell you that our brains are painfully slower than computers. But not only does research show that brains hold the upper hand, it’s not even clear that the comparison makes sense. It is the human brain that developed the mathematics, engineered the hardware, and designed the algorithms driving artificial intelligence.

We should spend more time acknowledging the amazing thing humans are — and the entire biosphere is — rather than mindlessly salivating over the glossy new toys that appear intelligent.

Forget the Terminator… he’s not coming

If I were Nate Soares, I’d immediately resume investing in my 401(k). No superintelligent AI is going to destroy all humanity.

However, as long as we put these tools to inappropriate uses without accepting their limits, we do run the risk of ruining ourselves. The same is true of any other tool we’ve created.

Let’s keep our eyes open and not be distracted by the fantasy worries that Yudkowsky and Soares have created. It’s our choice. It’s our future.

Here are the first three parts of my look at the arguments in this thought-provoking new book:

Fearing the Terminator, Missing the Obvious. In Part 1 of my review of the new AI Doom book, If Anyone Builds It, Everyone Dies, we look at how the authors first developed the underlying idea. By 2020, authors Yudkowsky and Soares were already Doomers, but the rapid success of ChatGPT and similar models heightened their worries.

Fearing the Terminator: Does current tech warrant the doomsaying? People will worry less if they understand why the text generation programs not only do not think but in fact cannot think. Bottom line: Text generative AIs do not capture meaning. They capture relationships which approximate meaning well enough to be useful.

and

Can an AI really develop a mind of its own? Specifically, can an AI develop a mind with its own goals and desires, capable of plans and strategies — as the authors of If Anyone Builds It believe? Even if we adopt a materialist view of the mind, I believe I can show why it is not possible for a machine to develop a mind.


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he spent most of that time on other types of software, he’s remained engaged and interested in Artificial Intelligence.
