Mind Matters Reporting on Natural and Artificial Intelligence

Brendan Dixon

Two female medical doctors looking at x-rays in a hospital.

Is AI really better than physicians at diagnosis?

The British Medical Journal found a serious problem with the studies

Of 83 studies of the performance of deep learning algorithms on diagnostic images, only two had been randomized, as is recommended, to prevent bias in interpretation.

Read More ›
robots in a car plant

Will the COVID-19 Pandemic Promote Mass Automation?

Caution! Robots don’t file for benefits but that’s not all we need to know about them

I understand the panic many business leaders experience as they try to stay solvent while customers evaporate. Panic, however, is a poor teacher: AI-based automation will not only not solve all their problems, it may very well add to them. AI is not a magic box into which we can stuff our problems and make them disappear.

Read More ›
Self-driving electric semi truck driving on highway. 3D rendering image.

Star self-driving truck firm shuts; AI not safe enough soon enough

CEO Stefan Seltz-Axmacher is blunt about the cause: Machine learning “doesn’t live up to the hype”

Starsky Robotics was not just another startup overwhelmed by business realities. In 2019, it was named one of the world’s 100 most promising start-ups (CNBC) and one to watch by FreightWaves, a key trucking industry publication. But the AI breakthroughs did not appear.

Read More ›
Woman in a medical protective mask applying antibacterial antiseptic gel for hand disinfection and health protection during a flu virus outbreak. Coronavirus quarantine.

AI Is Not Ready to Moderate Content!

In the face of COVID-19 quarantines for human moderators, some look to AI to keep the bad stuff off social media

Big social media companies have long wanted to replace human content moderators with AI. COVID-19 quarantines have only intensified that discussion. But AI is far, far from ready to successfully moderate content in an age where virtual monopolies make single-point failure a frequent risk.

Read More ›
Businessman with psychopathic behaviors

All AIs Are Psychopaths

We can use them but we can’t trust them with moral decisions. They don’t care why

Building an AI entails moving parts of our intelligence into a machine. We can do that with rules, (simplified) virtual worlds, statistical learning… We’ll likely create other means as well. But, as long as “no one is home”—that is, the machines lack minds—gaps will remain and those gaps, without human oversight, can put us at risk.

Read More ›
Streetcar in Toronto, Ontario, Canada

The “Moral Machine” Is Bad News for AI Ethics

Despite the recent claims of its defenders, there is no way we can outsource moral decision-making to an automated intelligence

Here’s the dilemma: The Moral Machine (the Trolley Problem, updated) feels necessary because the rules by which we order our lives are useless with automated vehicles. Laws embody principles that we apply. Machines have no mind by which to apply the rules. Instead researchers must train them with millions of examples and hope the machine extracts the correct message… 

Read More ›
issue-type-bug-blame

Machines Never Lie but Programmers… Sometimes

A creative claim is floating around out there that bad AI results can arise from machine “deception”

We might avoid worrying that our artificial intelligence machines are trying to deceive us if we called it “Automated Intelligence” rather than “Artificial Intelligence.”

Read More ›
Beautiful bored people isolated on pink background

Are Facial Expressions a Clear, Simple Basis for Hiring Decisions?

Marketing AI to employers to analyze facial expressions ignores the fact that correlation is NOT causation

Have you heard of the Law of the Instrument? It just means, to quote one formulation, “He that is good with a hammer tends to think everything is a nail.” All any given problem needs is a good pounding. This is a risk with AI, as with amateur carpentry. But with AI, it can get you into more serious trouble. Take hiring, for instance.

Read More ›
Stop sign with damage. Photo by Yuliya Kosolapova on Unsplash.

McAfee: Assisted Driving System Is Easily Fooled

Defacing a road sign caused the system to dramatically accelerate the vehicle

Over time, machine vision will become harder to fool than the system that was recently tricked into rapid acceleration by a defaced sign. But it will still be true that a fooled human makes a better decision than a fooled machine because the fooled human has common sense, awareness, and a mind that reasons.

Read More ›
Photo by Eugene Triguba

AI has changed our relationship to our tools

If a self-driving car careens into a storefront, who’s to blame? A new Seattle U course explores ethics in AI

A free course at Seattle University addresses the “meaning of ethics in AI.” I’ve signed up for it. One concern that I hope will be addressed is: We must not abdicate to machines the very thing that only we can do: Treat other people fairly.

Read More ›
Futuristic technological scanning of a woman’s face for facial recognition to ensure personal safety.

Teaching Computers Common Sense Is Very Hard

Those fancy voice interfaces are little more than immense lookup tables guided by complex statistics

Researchers at the Allen Institute for Artificial Intelligence (AI2) published a paper recently, deflating claims of rapid progress toward giving computers common sense.

Read More ›
Two female medical doctors looking at x-rays in a hospital.

AI Can Help Spot Cancers—But It’s No Magic Wand

When I spoke last month about how AI can help with cancer diagnoses, I failed to appreciate some of the complexities of medical diagnosis

As a lawyer with medical training reminded us recently, any one image is a snapshot in time, a brief part of the patient’s whole story. And it’s the whole story that matters, not a single image, perhaps taken out of context.

Read More ›
Photo by OLEG PLIASUNOV

Did the Economist Really “Interview” an AI?

Perhaps they have a private definition of what an interview “is”…

Faced with a claim that an AI language tool had given an interview, I took the advice I gave readers yesterday, and followed the links. What a revelation. The Economist story was more dishonest than the examples that Siegel discussed in Scientific American.

Read More ›
Demographic Change

Can The Machine TELL If You Are Psychotic or Gay?

No, and the hype around what machine learning can do is enough to make old-fashioned tabloids sound dull and respectable

Media often co-operate with researchers’ inflated claims about machine learning’s powers of discovery. An ingenious “creative” approach to accuracy enables the misrepresentation, says data analyst Eric Siegel.

Read More ›
Photo by Michal Mrozek

So Is an AI Winter Really Coming This Time?

AI did surge past milestones during the 2010s but fell well short of the hype

Maybe both. AI will require more from us, not less, because how we choose to use these tools will make an increasingly stark difference between benefit and ruin.

Read More ›
Photo by Clem Onojeghuo

Can We Outsource Hiring Decisions to AI and Go for Coffee Now?

I would have immediately fired any of my hiring managers who demonstrated characteristic AI traits. So why do we tolerate the same behavior from a machine?

With historically low unemployment, employers are tempted to reduce costs and speed up the process using artificial intelligence (AI) systems. These systems might help but, for best results, let’s have a look at the problems they can’t solve and some that they might create.

Read More ›
Breast cancer histology: Lobular carcinoma in situ (LCIS) is seen in the lower left with invasive (infiltrating) lobular carcinoma in the upper right. Screening mammography can detect early tumors.

How AI Can Help Us Fight Cancer

Breast cancer is an excellent example of how AI can speed up early detection

AI catches things doctors miss, and doctors catch things AI misses. Using AI to highlight what may be cancerous tissue helps the radiologist focus on ambiguous situations, reducing the chance of missing early cancers.

Read More ›
GPS navigator in desert

AI Should Mean Thinking Smarter, Not Less

We should be all the more engaged when we use technology

Tim Harford points to the Sanchez tragedy to raise an important question: How do we know when a given technology is really helping us? And when we are taking too great a risk or paying too high a price?

Read More ›
Smart car, Autonomous self-driving mode vehicle on metro city road IoT concept.

Expert: We Won’t Have Self-Driving Cars For a Decade

Machine Learning rapidly moved self-driving cars from the lab to the roads but the underlying technology remains brittle

Myths are not inherently bad but the real world crushes them. One myth currently taking a beating is “self-driving cars are just around the corner.” Here’s why not. 

Read More ›
Students studying in college library

Machines Can’t Teach Us How To Learn

A recent study used computer simulations to test the “small mistakes” rule in human learning

Machine learning is not at all like human learning. For example, machine learning frequently requires millions of examples. Humans learn from a few examples.

Read More ›