
AI has changed our relationship to our tools

If a self-driving car careens into a storefront, who’s to blame? A new Seattle U course explores ethics in AI

Before AI — and, by AI, I mean broadly any computer system that implements a decision process — tools sat there until we put them to use. AI tools, on the other hand, are created to do something before we get involved.

That shift creates a challenge: If I use a hammer and cause harm, I am clearly at fault. If someone drives a car into a store, we don’t blame the car. We blame the driver. But if the tool chooses — say your self-driving car careens into a storefront — then who’s to blame?

These ethical waters are deep and murky. Bias in AI is well-established. AI can be all-too-easily fooled. And, often, we have no idea why the AI made the decision it did. So how do we make ethical decisions?

Seattle University, with funding provided by Microsoft, has created a free, online course intended to train businesses on the “meaning of ethics in AI.” That’s a promising move because it’s not just the tools that create problems. The developers themselves often don’t know how to think about ethical issues. Michael Quinn, Dean of the College of Science and Engineering at Seattle University, notes,

“Many people who work in tech aren’t required to complete a philosophy or ethics course in school,” said Quinn, which he believes contributes to blind spots in the development of technology. Those blind spots may have led to breaches of public trust…

Melissa Hellmann, “AI is here to stay, but are we sacrificing safety and privacy?” at The Seattle Times

The AI Ethics for Business course, for which I have registered, is free, online, and self-guided. The sponsors suggest that it should take no more than ten hours to complete.

Raising these issues is a good thing, provided it is done well. The thing to keep in mind is that when we create AI systems, we’re not creating autonomous, conscious, self-reflecting robots. We are creating complex systems that make data-driven choices. The training and problem data determine the outcome, which we hope will align with our goals. In a sense, we attempt to deposit into the machine a bit of how we make decisions.

This process can fail for many reasons. Perhaps the training data is too narrow. Perhaps the machine does not have access to all of the data needed or is incapable of incorporating it. Perhaps there’s a bug in the software. I could go on.
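To make the first of those failure modes concrete, here is a minimal, hypothetical sketch in Python. The loan-approval scenario, the applicant features, and the labels are all invented for illustration, and the “model” is a simple nearest-neighbor lookup standing in for a real system. The point it illustrates: the system’s “choice” is nothing more than a function of the training data we happened to give it.

```python
from math import dist

# A toy, hypothetical loan-approval "model": a 1-nearest-neighbor lookup.
# The training set is deliberately narrow -- only mid-income applicants.
# Features: (income in $K, years at current job); labels are invented.
TRAINING_DATA = [
    ((55, 4), "approve"),
    ((60, 6), "approve"),
    ((52, 1), "deny"),
    ((62, 1), "deny"),
]

def predict(applicant):
    """Return the label of the closest training example (1-NN)."""
    nearest_features, nearest_label = min(
        TRAINING_DATA, key=lambda example: dist(example[0], applicant)
    )
    return nearest_label

# In-distribution: the answer simply echoes the nearby training examples.
print(predict((57, 5)))   # -> "approve"

# Out-of-distribution: a high earner with a short job history. The model
# still answers confidently, but no training example resembles this case;
# the "decision" is an artifact of the too-narrow dataset, not a judgment.
print(predict((150, 1)))  # -> "deny" (nearest example happens to be a denial)
```

No production lender scores applicants this way, of course; the sketch just makes visible the sense in which “the machine decided” reduces to “the data we selected decided.”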

I am looking forward to seeing whether the course asks the right questions and raises the right issues. Labeling a tool “AI” does not shield it or its developers from their ethical responsibilities. Nor does it remove our responsibility to deploy and use these tools properly.

It’s easy to get lazy and claim it’s all the machine’s fault, but that would be a mistake.

Among other things, it is an ethical mistake. We are the moral agents. Above all, let’s not abdicate to machines the very thing that only we can do: treat other people fairly.


Here are some recent thoughts on ethics issues in AI from our contributors at Mind Matters News:

Can we outsource hiring decisions to AI and go for coffee now? I would have immediately fired any of my hiring managers who demonstrated the traits characteristic of AI. So why do we tolerate them from a machine? (Brendan Dixon)

Will self-driving cars change moral decision-making? It’s time to separate science fact from science fiction about self-driving cars. (Jay Richards)

The unadvertised cost of doing business with China: It’s a big market, with one Big Player, and some strange rules. (Heather Zeiger)

and

Will industry pressure loosen self-driving car tests? Right now, the regulatory agency is under pressure to accept the industry’s “softball” testing suggestions. (Brendan Dixon)


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he has worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he spent most of that time on other types of software, he has remained engaged and interested in Artificial Intelligence.