Photo from the driver's seat with light trails ahead, by Samuele Errico Piccarini at Unsplash

The Real Future of Self-Driving Cars Is — Better Human Drivers!

Manufacturers are improving safety by incorporating warning systems developed for self-driving cars into conventional models

If you believe Elon Musk, Tesla will have a million robo-taxis on the road within a year. Calmer minds disagree, but only about the timeline.

But what if both sides are wrong? What if the future of self-driving cars is no future at all and the best use of self-driving technology is to make humans better drivers?

Improving road safety is a key motivator for many self-driving proponents, including the NHTSA. Musk believes that “within a year” human drivers will actually make self-driving cars less safe if they interfere. But given that Tesla has yet to resolve its own safety issues (unverified safety claims and verified explosions, for example), most analysts look for other perspectives as well.

Technology writer Christopher Mims thinks that improving human driving is the technology’s actual future:

When car makers put these incremental tech advances in human-driven cars, they pre-empt one of the fully self-driving car’s supposed advantages: safety. These new systems marry the best machine capabilities—360-degree sensing and millisecond reflexes—with the best of the human brain, such as our ability to come up with novel solutions to unique problems.

Christopher Mims, “Self-Driving Cars Have a Problem: Safer Human-Driven Ones” at Wall Street Journal

This human-plus-machine combination is proving more potent than the hyped machine-only promise. As Raj Rajkumar, a professor of engineering at Carnegie Mellon, told Mims:

We are sentient beings, and we have the ability to reason from first principles, from scratch if you will, while AI on the other hand is not conscious, and doesn’t even understand what it means that there’s a physical world out there.

Christopher Mims, “Self-Driving Cars Have a Problem: Safer Human-Driven Ones” at Wall Street Journal

Automakers have been embedding advanced technology in our cars for years to improve safety. When I learned to drive, for example, my instructor taught me to “pump the brakes” in an emergency to prevent a skid. Anti-lock brakes, which prevent skids automatically, make that advice obsolete. And newer technologies, many derived from self-driving research, such as automatic emergency braking and lane departure warning, address more common situations.
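
A rough, purely hypothetical sketch of the kind of logic behind automatic emergency braking may help make the point. The function names and the 1.5-second threshold below are illustrative placeholders, not any manufacturer’s actual system; real systems must also contend with sensor noise, road conditions, and driver reaction time.

```python
# Illustrative sketch only: a time-to-collision trigger of the sort used
# by automatic emergency braking. Names and thresholds are hypothetical.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:          # not closing on the vehicle ahead
        return float("inf")
    return gap_m / closing_speed_mps

def should_brake(gap_m: float, closing_speed_mps: float,
                 threshold_s: float = 1.5) -> bool:
    """Trigger braking when time-to-collision drops below a threshold.

    The 1.5 s default is a made-up placeholder for illustration.
    """
    return time_to_collision(gap_m, closing_speed_mps) < threshold_s

# Example: a 20 m gap closing at 15 m/s (about 54 km/h) gives a
# time-to-collision of roughly 1.3 s, so the system would brake.
print(should_brake(20.0, 15.0))   # True
```

The machine’s advantage here is not judgment but reflex: it evaluates a check like this many times per second and never glances at a phone.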

These “advanced driver assistance systems” are improving so much that, as Mims says in the Journal, “For decades ahead, this fusion of human minds and machine reflexes will likely be the norm.”

Creating a fully autonomous vehicle is a very, very hard problem:

There’s growing concern among AI experts that it may be years, if not decades, before self-driving systems can reliably avoid accidents. As self-trained systems grapple with the chaos of the real world, experts like NYU’s Gary Marcus are bracing for a painful recalibration in expectations, a correction sometimes called “AI winter.” That delay could have disastrous consequences for companies banking on self-driving technology, putting full autonomy out of reach for an entire generation.

Russell Brandom, “Self-driving cars are headed toward an AI roadblock” at The Verge

Industry pioneers have been overly optimistic about critical technologies:

For a long time, researchers thought they could improve generalization skills with the right algorithms, but recent research has shown that conventional deep learning is even worse at generalizing than we thought. One study found that conventional deep learning systems have a hard time even generalizing across different frames of a video, labeling the same polar bear as a baboon, mongoose, or weasel depending on minor shifts in the background. With each classification based on hundreds of factors in aggregate, even small changes to pictures can completely change the system’s judgment, something other researchers have taken advantage of in adversarial data sets.

Russell Brandom, “Self-driving cars are headed toward an AI roadblock” at The Verge
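
The failure the researchers describe can be illustrated with a toy model. The sketch below stands a simple linear classifier in for a real deep network (the weights, “pixels,” and labels are all made up), but it shows the same arithmetic: when a judgment is an aggregate over hundreds or thousands of factors, a tiny, coordinated nudge to each one can push the score across the decision boundary, which is what adversarial data sets exploit.

```python
# Toy illustration only: a linear "classifier" over 1,000 made-up features,
# not a real vision model. Small coordinated changes flip its judgment.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)     # classifier weights over 1,000 "pixels"
x = rng.normal(size=1000)     # one flattened input "image"

score = w @ x                 # positive -> label A, negative -> label B

# The smallest uniform per-pixel nudge (in the worst-case direction) that
# crosses the decision boundary, plus a small margin -- the same idea
# behind fast-gradient-sign adversarial examples.
epsilon = abs(score) / np.abs(w).sum() + 0.01
x_adv = x - epsilon * np.sign(w) * np.sign(score)

print(f"per-pixel change: {epsilon:.3f}")   # small next to typical pixel values (~1.0)
print(f"score before: {score:+.1f}, after: {w @ x_adv:+.1f}")   # the sign, and so the label, flips
```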

The problem might not have an affordable, politically acceptable solution. But that might not matter. As Mims concludes:

While self-driving technology might not be able to rescue the fortunes of some of tech’s biggest transportation visionaries, it does have the potential—here and now, implemented in human-driven cars—to reduce or even nearly eliminate road deaths.

Christopher Mims, “Self-Driving Cars Have a Problem: Safer Human-Driven Ones” at Wall Street Journal

Most of the driving public would probably be just as happy with that.


Also by Brendan Dixon on self-driving cars:

News from the real world of self-driving taxis: not yet. Waymo includes a human in all their “robotaxis,” just in case, because the vehicles (at last report) were still confounded by common conditions.

Autopilot is not just another word for “asleep at the wheel.” As a recent fatal accident in Florida shows, even sober, attentive drivers often put too much trust in Tesla’s Autopilot system, with disastrous results.

and

If you think common sense is easy to acquire, try teaching it to a self-driving car


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he spent most of that time on other types of software, he has remained engaged and interested in Artificial Intelligence.
