Mind Matters: Reporting on Natural and Artificial Intelligence

Category: Ethics

Remote jobs for public health concept. Computers, pills and medical mask

COVID-19: Do Quarantine Rules Apply to Mega-Geniuses?

How did Elon Musk, who has a cozy relationship with China, get his upscale car factory classified as an essential business during the pandemic?

If we are going to hold some people up as business icons, why should it be those who—in the present COVID-19 troubles—have relations with China that necessarily raise questions?

Read More ›
Businessman with psychopathic behaviors

All AIs Are Psychopaths

We can use them but we can’t trust them with moral decisions. They don’t care why

Building an AI entails moving parts of our intelligence into a machine. We can do that with rules, (simplified) virtual worlds, statistical learning… We’ll likely create other means as well. But, as long as “no one is home”—that is, the machines lack minds—gaps will remain and those gaps, without human oversight, can put us at risk.

Read More ›
Streetcar in Toronto, Ontario, Canada

The “Moral Machine” Is Bad News for AI Ethics

Despite the recent claims of its defenders, there is no way we can outsource moral decision-making to an automated intelligence

Here’s the dilemma: The Moral Machine (the Trolley Problem, updated) feels necessary because the rules by which we order our lives are useless with automated vehicles. Laws embody principles that we apply. Machines have no mind by which to apply the rules. Instead, researchers must train them with millions of examples and hope the machine extracts the correct message…

Read More ›
Joyful happy boy hugging a robot

Can Robots Be Programmed To Care About Us?

Some researchers think it is only a matter of the right tweaks

The quest is a curious blend of forlorn hope, fueled by half-acknowledged hype, and resolute denial of the most serious problems, along with a sometimes systematic confusion as to what, precisely, we are talking about.

Read More ›
Babies playing together.

Are Infants Born Kind? New Research Says Yes

The trouble is, the research is haunted by conflicting definitions of altruism

If human infants show apparent intellectual qualities like compassion earlier than we might have expected but chimpanzees don’t, we must accept that humans are fundamentally different from chimpanzees. Conflicting definitions of altruism cloud the picture.

Read More ›
Beautiful bored people isolated on pink background

Are Facial Expressions a Clear, Simple Basis for Hiring Decisions?

Marketing AI to employers to analyze facial expressions ignores the fact that correlation is NOT causation

Have you heard of the Law of the Instrument? It just means, to quote one formulation, “He that is good with a hammer tends to think everything is a nail.” All any given problem needs is a good pounding. This is a risk with AI, as with amateur carpentry. But with AI, it can get you into more serious trouble. Take hiring, for instance.

Read More ›
Photo by Eugene Triguba

AI Has Changed Our Relationship to Our Tools

If a self-driving car careens into a storefront, who’s to blame? A new Seattle U course explores ethics in AI

A free course at Seattle University addresses the “meaning of ethics in AI.” I’ve signed up for it. One concern I hope will be addressed: we must not abdicate to machines the very thing that only we can do, which is to treat other people fairly.

Read More ›
Concept of a city hit by a weapon of mass destruction, suffering terrible consequences from terrorism or an act of war by a hostile country launching a devastating atomic bomb attack

What Can We Learn from History About Stopping AI Warfare?

International agreements can work, but only under certain circumstances

Historically, the key difference between the international weapons ban agreements that have been honored and the agreements that have not been honored is that the honored ones involved weapons of mass destruction (WMD). An effective ban on malicious AI requires the global community to first agree that such a form (or use) of AI would be a WMD.

Read More ›
Middle-aged man calling his attorney for legal assistance

AI in the Courtroom: Will a Robot Sentence You?

Some experts think AI might be fairer than human judgment. Others are not so sure

One Superior Court judge has warned that many cases don’t come down to information alone, and information is all an AI can process. Law professor David DeWolf also expresses concern about increasing dependence upon law, a form of coercion, to regulate human behavior, a choice that is irrelevant to the growth of AI in the courtroom.

Read More ›
Obstetric ultrasonography (echography) in the first month of pregnancy

Abortion Advocate Admits in a Medical Journal That Unborn Children Feel Pain

The scientific community has for decades misrepresented the straightforward science of conception and fetal development for ideological reasons

I have cared for hundreds of premature infants and it is very clear that these very young children experience pain intensely. An innocuous needlestick in the heel to draw a small amount of blood would ordinarily not be particularly painful for an adult. But a tiny infant will scream at such discomfort.

Read More ›
Photo by Clem Onojeghuo

Can We Outsource Hiring Decisions to AI and Go for Coffee Now?

I would have immediately fired any of my hiring managers who demonstrated characteristic AI traits. So why do we tolerate such behavior coming from a machine?

With historically low unemployment, employers are tempted to reduce costs and speed up the process using artificial intelligence (AI) systems. These systems might help but, for best results, let’s have a look at the problems they can’t solve and some that they might create.

Read More ›
Soldiers using a drone for scouting during a military operation in the desert.

Book at a Glance: Robert J. Marks’s Killer Robots

What if ambitious nations such as China and Iran develop lethal AI military technology but the United States does not?

Artificial intelligence expert Robert J. Marks tackles the contentious subject of military drones in his just-published book, The Case for Killer Robots: Why America’s Military Needs to Continue Development of Lethal AI. Many sources (30 countries, 110+ NGOs, 4500 AI experts, the UN Secretary General, the EU, and 26 Nobel Laureates) have called for these lethal AI weapons to be banned. Dr. Marks, a Distinguished Professor of Electrical and Computer Engineering at Baylor University, disagrees. What if ambitious nations such as China and Iran develop lethal AI military technology but the United States does not? Nations that wish to maintain independence (sovereignty), he argues, must remain competitive in military AI. (“Advanced technology not only wins wars but gives pause to…

Robot Predicting Future With Crystal Ball

Five AI Predictions to Watch in 2020

We'll check on these a year from now

Some problems experts hope AI can help with may be outside AI's capacities. Some people may simply want to believe doctored images and deepfakes, for example.

Read More ›
The races of mankind, ethnic and multi-ethnic: a scientific model concept

China: DNA Phenotyping Profiles Racial Minorities

In the United States, targeting minorities means political pushback; in China, no such discussion is allowed

While there is some merit to the idea that the population of a particular geographic region will have similar DNA patterns, this science comes with a host of assumptions that, when taken too far, cross the line into pseudoscience.

Read More ›
Businessman Robot Hands Law Connection HUD Network

2019 AI Hype Countdown #7: “Robot rights” grabs the mike

If we could make intelligent and sentient AIs, wouldn’t that mean we would have to stop programming them?

AI programs are just that—programs. Nothing in such a program could make it conscious. We may as well think that if we make sci-fi life-like enough, we should start worrying about Darth Vader really taking over the galaxy.

Read More ›
Thousands of umbrellas in Causeway Bay, Hong Kong, on a rainy day, August 18, 2019

Can a Totalitarian State Advance AI?

China vs. Hong Kong provides a test case

George Orwell identified two characteristics of a totalitarian state that offer insight into its central intellectual weaknesses.

Read More ›
The difference between right and wrong

Will Self-Driving Cars Change Moral Decision-Making?

It’s time to separate science fact from science fiction about self-driving cars

Irish playwright John Waters warns of a time when we might have to grant moral discretion to computer algorithms, just as Christians now grant to the all-knowing but often inscrutable decrees of God. Not likely.

Read More ›
Tesla Cybertruck

Tesla’s Cybertruck Runs on … Hype?

When planning for the future, Tesla should maybe think reality, not Mad Max

The steel ball thrown at the supposedly unbreakable window broke the glass. Twice. Unfortunately, Musk had to spend the rest of the demo with a damaged car in the background.

Read More ›
Man trying to whistle

Google’s Secret Health Data Grab: The Whistleblower Talks

This is the fourth whistleblower in the last eighteen months

“The decision came to me slowly, creeping on me through my day-to-day work,” we are told, until it came down to “how could I say nothing?”

Read More ›
Big Brother electronic eye concept: technologies for global surveillance and the security of computer systems and networks

Can a Big Data Program Be Society’s Crystal Ball?

Can a program with enough data on all of us predict our future?

Even fans admit that, if the program works, bad actors can use it just as easily as New Scientist’s virtuecrats.

Read More ›