
Will Industry Pressure Loosen Self-Driving Car Tests?

Right now, the regulatory agency is under pressure to accept the industry’s “softball” testing suggestions

Usually, it is Elon Musk pushing self-driving car technology too far, too fast. But this time it is Waymo, a division of Google’s parent company, Alphabet.

Waymo—in a public comment letter reported by David Shepardson at Reuters—recently “urged the National Highway Traffic Safety Administration (NHTSA) to ‘promptly’ remove regulatory barriers for cars without steering wheels and brake pedals.”

Yet Waymo’s current vehicles fail even to give a smooth ride, let alone handle obscure situations (the more commonly raised concern):

The Information analyzed internal feedback about the performance of Alphabet’s Waymo self-driving taxis on public streets, covering more than 10,500 rides in July and part of August. The reports from riders using the service in suburban Phoenix and in the San Francisco Bay Area, portions of which we describe below, provide an unprecedented view into the most high-profile autonomous vehicle development effort in the world, underscoring the extreme difficulty of making automated taxis mainstream.

Amir Efrati, “Waymo Riders Describe Experiences on the Road” at The Information (August 26, 2019)

If only this were just more Silicon Valley silliness. But no, according to Reuters, even the often-staid GM commented:

“it is imperative that NHTSA continue to drive this critical dialogue with a sense of urgency so that the necessary regulatory evolution keeps pace with advancing technology.”

Waymo is cagey as to why removing the steering wheel and other human-operated controls is urgent. But Lyft, which also submitted comments to NHTSA, argues that such controls “impede the development and deployment” of autonomous vehicles. How their presence impedes these goals, the firm does not clarify.

To be fair, the NHTSA rules naturally assume a human driver and, not surprisingly, include one in tests of control devices. Waymo and Lyft, among others, are correct in assessing that such tests may not apply to Level 4 or Level 5 vehicles, which are designed to drive themselves in some or all conditions, making human drivers optional.

The bottom line: NHTSA needs to amend its tests to accommodate self-driving vehicles. But it must then define tests that match or exceed those a human-driven vehicle must pass.

Here is an example of the issues that arise: NHTSA often orders test vehicles of the exact model intended for sale to the general public. Because many of the vehicles Waymo, Lyft, and others propose will not be sold to the public, the industry offers several suggestions for new test procedures. Waymo encourages an after-the-fact test (that is, testing the vehicles after they are sold and delivered but before they are put into operation), while Lyft wants the NHTSA to trust simulations and (of all things) the technical documentation itself: “As such, Lyft supports FMVSS compliance verification that primarily relies on technical documentation for system design verification.”

I agree that NHTSA needs to adapt its test processes to self-driving vehicles. But trusting technical documentation or only testing vehicles that are already sold is grossly insufficient. Technical documentation tells us what the engineers think should happen, not what will happen. And testing sold vehicles creates an incentive to skimp on tests.

Skimp on tests? Consider: When NHTSA is testing a vehicle no consumer has actually bought, no one but the developer (and the investors) has a financial stake. NHTSA has little to lose by simply doing its job of reporting flaws and malfunctions. But when many unsuspecting buyers are caught up in a problem, it becomes, by comparison, politically messy. Simply ignoring a possible problem becomes a more attractive approach for the agency.

Fortunately, more sensible voices—notably, those outside of the Silicon Valley bubble—are offering constructive criticism in their comments:

As a standalone approach, the Technical Documentation approach introduces too much risk since there is no validation a vehicle works as designed or equipment is not faulty. Additionally, the Surrogate Vehicle approach introduces too much risk because the vehicle being tested is not completely representative of the vehicle manufactured for consumer use.

– Comments from Pennsylvania Department of Transportation

As NHTSA noted in the advance notice of proposed rulemaking (ANPRM), there are no barriers in existing federal motor vehicle safety standards (FMVSSs) which address ‘the self-driving capability of an ADS’ or ‘prohibit inclusion of ADS components on a vehicle,’ and current FMVSSs do not pose ‘testing or certification challenges for vehicles with ADSs so long as the vehicles have means of manual control and conventional seating, and otherwise meet the performance requirements of the FMVSSs.’

– Comments from Consumer Reports

Further, we continue to find specious the assertion that current Federal Motor Vehicle Safety Standards (FMVSS) obstruct the future development and testing of ADS-DV technology… Until manufacturers have validated ADS-DV performance in all reasonable operating design domains and demonstrated continued safe operation of ADS-DVs lacking human control, the rationale for rewriting the rules to allow such vehicles on the road remains unexplained.

– Comments from the Center for Auto Safety

The message from Waymo and other self-driving car proponents is that self-driving cars are coming soon and that they will make us safer. But we have little evidence for either claim. We have, on the other hand, plenty of evidence that companies may aggressively pursue their own ends while risking human life.

NHTSA must not be cowed by the science-fiction, flash-bang promises of self-driving cars. It should—must—design rigorous tests that ensure self-driving cars are at least as safe as human drivers. To fail would be to abdicate its responsibility to the public, ceding it to the very companies it is meant to monitor.


Also by Brendan Dixon on safety issues around self-driving cars:

Are self-driving cars really safer? A former Uber executive says no. Before we throw away the Driver’s Handbook… Current claims that self-driving cars are safer are hype, not measurement. Meanwhile, Congress is expected to push for legislation next month to pave the way for widespread use of self-driving vehicles without a consensus on safety standards.

Does a Western bias affect self-driving cars? How a driver is expected to act varies by culture. Self-driving cars (autonomous vehicles) will need to adapt to different rules and we will, very likely, need to change those rules to make the vehicles work.

Should Tesla’s Autopilot feature be illegal? A recent study from the United Kingdom on driver competence suggests that maybe it should.

and

Autopilot is not just another word for “asleep at the wheel” As a recent fatal accident in Florida shows, even sober, attentive drivers often put too much trust into Tesla’s Autopilot system, with disastrous results.


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in diagnosing software problems at IBM. Since then, he has worked as both a Principal Engineer and a Development Manager for industry leaders such as Microsoft and Amazon, as well as for numerous start-ups. While he has spent most of that time on other types of software, he has remained engaged and interested in Artificial Intelligence.
