Driving Technology Needs Public Scrutiny

It is not good enough for safety-related data to be made available to regulators. They must be made available to the public at large.
As more and more automation is added to automobiles, the need for public review and scrutiny becomes ever clearer. Unlike most technologies, cars are operated on public roads at high speed, so everyone has a stake in understanding the safety implications of decisions made by car manufacturers. It is therefore important that all safety-related data be made publicly available and subject to public scrutiny. It is not enough for these data to be shared with regulators; they must be made available to the public at large.
To see why, let’s look at the history of Tesla’s claims about the safety of its Autopilot system. Note that Autopilot, despite the confusing name, is not the same thing as Tesla’s “Full Self-Driving.” Autopilot is primarily a driver-assistance feature (adaptive cruise control plus lane keeping) for highway driving.
Musk told journalists in 2016 that if Autopilot were universally available, half a million lives would be saved. In 2017, the National Highway Traffic Safety Administration (NHTSA) stated in an official report that the data showed a 40% decrease in crashes when Autopilot was enabled. However, the data underlying this claim were not originally made available to the public.
In 2018, Tesla began making some safety data available to the public. In a recent paper, Noah Goodall analyzed the statistics and showed that, when normalized to driving conditions, Autopilot actually performs worse than human drivers. Autopilot’s numbers look good because it is primarily enabled in locations and situations where crashes happen less frequently anyway, such as highway driving on a good road in good weather by an experienced driver.
To illustrate the finding, suppose someone invented a life preserver that was only usable in the shallow end of a pool, and only by adults. Drownings almost never occur there anyway. If one compared the safety record of this life preserver against all pool drownings, of any age and any depth, it would show a significant improvement, even if the preserver itself did absolutely nothing. Goodall’s paper corrects for exactly this effect, using a Bayesian approach based on known usage patterns of Autopilot.
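The same arithmetic can be shown in a few lines. The numbers below are entirely hypothetical, chosen only to illustrate the base-rate effect, and are not Tesla’s or Goodall’s actual figures:

```python
# Hypothetical crash rates (crashes per million miles), split by road type.
human = {"highway": 0.7, "city": 2.0}   # human-driver rates by road type
autopilot_rate = 0.9                    # Autopilot rate (used almost only on highways)

# Hypothetical share of all human-driven miles on each road type.
human_miles_share = {"highway": 0.4, "city": 0.6}

# Naive comparison: Autopilot vs. humans across ALL driving conditions.
human_overall = sum(human[r] * human_miles_share[r] for r in human)
print(human_overall)                    # 0.7*0.4 + 2.0*0.6 = 1.48
print(autopilot_rate / human_overall)   # ~0.61: Autopilot "looks" ~39% safer

# Like-for-like comparison: Autopilot runs on highways, so compare it
# to the human highway rate, not the overall rate.
print(autopilot_rate / human["highway"])  # ~1.29: actually ~29% worse
```

With these made-up numbers, the naive comparison suggests a large safety gain while the condition-matched comparison shows the opposite, which is the core of the life-preserver analogy.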
So Tesla has spent six years claiming that Autopilot would save over half a million lives, and the government has indicated that it reduces crashes by 40%, yet once the data were made publicly available, the claimed benefits of Autopilot vanished.
More recently, however, NHTSA has taken a more proactive stance. Recent versions of Tesla’s Full Self-Driving software included an option that intentionally broke traffic laws by performing rolling stops. While this blatant and overt disregard for public rules was caught and the feature recalled, it makes one wonder whether other legal corners are being cut that are not so readily identifiable. The public has a right to know if a manufacturer is releasing software that, intentionally or unintentionally, causes its vehicles to disobey traffic laws or safety rules.
Tesla is not the only manufacturer keeping safety information hidden from the public. Waymo is suing the California DMV to keep its safety information secret. It does not want the public to know how its cars behave outside their tested operating boundaries, the constraints under which its self-driving technology works, or even descriptions of crashes involving its cars.
These data, however, are essential for the public in deciding public policy, and the official agencies need oversight in how they handle emerging technologies. That oversight cannot happen if the data are kept in the dark. If we are to make informed public decisions about public roads, the data must be made available.