Self-driving vehicles are just around the corner
On the other side of a vast chasm…
A vast chasm? Yes, and one we seldom hear discussed.
The SAE (Society of Automotive Engineers) has defined five levels of self-driving cars (beyond Level 0, which is no automation at all):
In Level 1, the self-driving system takes over a single control (either speed or steering), and the driver must be ready to take back control at any time.
In Level 2, the car takes full control in limited scenarios but the driver must be ready and available to take over at any moment.
In Level 3, the car is sufficiently self-driving under certain conditions and/or locations that the operator can look away, but the operator is expected to be able to return to driving within seconds when the car detects a situation it is not programmed to handle.
In Level 4, the automated system is expected to execute a safety fallback (e.g., slow to a stop) if the user does not respond to its request for assistance. Because of this fallback, the driver can safely engage in an unrelated activity and simply expect the car to do the right thing.
In Levels 1-4, the automated system is expected to respond correctly within what is called the “operational design domain” (ODD). The ODD is the list of situations and locations for which the self-driving system is designed. For instance, the ODD might be designed for interstate highway driving. The car is expected to perform safely without a user at the wheel during interstate driving but not in other locations. The ODD might be a set of conditions (clear day), a physical location (a carefully defined geographic region), or a set of road conditions (clearly marked lanes, etc.).
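One way to picture an ODD is as a checklist of conditions the system verifies before engaging. The sketch below is purely illustrative (the condition names are invented, not drawn from any actual vehicle software), but it shows the basic idea of an ODD limited to clear-day interstate driving with marked lanes:

```python
from dataclasses import dataclass

# Hypothetical sketch of an ODD check. All field names and conditions
# here are invented for illustration.
@dataclass
class DrivingContext:
    road_type: str       # e.g., "interstate", "city_street"
    weather: str         # e.g., "clear", "rain", "snow"
    lanes_marked: bool

def within_odd(ctx: DrivingContext) -> bool:
    """An ODD restricted to interstate driving on a clear day with marked lanes."""
    return (ctx.road_type == "interstate"
            and ctx.weather == "clear"
            and ctx.lanes_marked)

# The system may engage only inside the ODD; otherwise the human drives.
print(within_odd(DrivingContext("interstate", "clear", True)))   # True
print(within_odd(DrivingContext("city_street", "clear", True)))  # False
```

Outside the checklist, the system hands control back to the human; that handoff is exactly what distinguishes Levels 1 through 4 from Level 5.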
In Level 5, the car is expected to operate successfully unconditionally: The only input required is the destination; the car takes care of every other detail, regardless of the destination or the road conditions. The car is, in effect, its passenger’s chauffeur: “Home, James, and take the scenic route.”
Many people don’t recognize the gap between Level 4 and Level 5 automation. Whether or not the gap is bridgeable, it is an immense chasm.
Some may be thinking, “If one car has an ODD for highways and another car has an ODD for in-town driving, then if we just combine the programs, we will have both, right?” Actually, there is more to it than that. The software for the cars must include code to do all of the following:
- handle all driving situations
- recognize which driving situation is encountered
- recognize when the driving situation changes
- manage the transition between driving situations
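These four requirements amount to a state machine: classify the current situation, notice when it changes, and run explicit logic for each transition. A minimal sketch, in which every situation name, sensor field, and handler is invented for illustration:

```python
# Hypothetical sketch of requirements 2-4. Situation names and sensor
# fields are invented, not taken from any real system.

def classify(sensors: dict) -> str:
    """Requirement 2: recognize which driving situation we are in."""
    if sensors.get("on_highway"):
        return "highway"
    if sensors.get("lanes_marked"):
        return "city"
    return "unmarked_road"

TRANSITION_HANDLERS = {
    # Requirement 4: each ordered pair of situations needs its own logic.
    ("highway", "city"): lambda: print("exit ramp: reduce speed, watch for signals"),
    ("city", "highway"): lambda: print("on-ramp: merge and match traffic speed"),
    ("city", "unmarked_road"): lambda: print("no markings: fall back to cautious mode"),
}

def step(current: str, sensors: dict) -> str:
    new = classify(sensors)
    if new != current:  # Requirement 3: detect that the situation changed
        handler = TRANSITION_HANDLERS.get((current, new))
        if handler is None:
            raise RuntimeError(f"unhandled transition {current} -> {new}")
        handler()
    return new

state = "highway"
state = step(state, {"on_highway": False, "lanes_marked": True})  # highway -> city
```

Note that the handler table must cover every ordered pair of situations the car can move between; any pair left out is a runtime failure, which is precisely where the difficulty discussed below comes from.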
The first problem begins when we switch the requirement for autonomy from handling a few identified driving situations to handling all of them. Drivers behave differently in different cities. Once, when I was in New York City, for example, the drivers just spontaneously decided to organize themselves into five lanes on a four-lane road. My years in programming have taught me that the number of real-world situations that can arise is significantly greater than the number of situations I might forecast just by thinking about a problem.
In one of my first jobs, I was asked to program a shipping calculator. I was sent to the shipping department, where the clerk listed to me all the rules for shipping.
I asked cautiously, “Are these all the rules?”
The clerk assured me, “Yes. Every one!”
However, the day before the calculator went live, I was informed that the clerk knew only the rules for one area. Other areas had different rules but they were handled by different employees. It did not occur to anyone that I could not possibly have known that.
I’ve now come to expect this type of thing: The number of situations that produce errors and problems is at least double what even the experts can tell you it is.
Additionally, with Level 5 autonomy, there is no “opting out.” In every system I’ve worked with, situations arise where we punt past the technology. The reason is simple—having the technology to handle every possible situation is much more expensive than just handling a few special situations by hand. The fact is, predicting everything someone will want to do is difficult. Deciding which systems to automate (and which ones not to) is a key component of successful return on investment for automation. Attempts to over-automate tend to be self-destructive on a number of levels.
Both of these problems pertain to the first requirement above, “handle all driving situations.” However, larger problems come with requirements 2 through 4. As the number of situations we define as problems at the level of requirement 1 increases, the complexity of the problems at 2 through 4 also increases. That is, the greater the variety of driving situations we encounter, the harder it is to identify which situation we are in, and when it is changing.
The code needed to detect and handle the flow between the situations grows quadratically with the number of driving situations we must address. That is, if we have 2 driving situations, there are 2 possible transitions to account for. If we have 3 driving situations, there are 6 possible transitions. If we have 4 driving situations, there are 12 possible transitions.
Expressing it mathematically, for n driving situations, there are n² – n transition possibilities. These numbers mount up quickly. Therefore, every newly identified driving scenario doesn’t just add one more scenario to code for in a linear fashion; it also adds a transition to and from every scenario already on the list.
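The count is easy to verify: the transitions are the ordered pairs of distinct situations, and there are n² – n of those. A few lines of Python (with made-up situation names) confirm it:

```python
from itertools import permutations

def transition_count(n: int) -> int:
    """Number of ordered transitions among n driving situations: n^2 - n."""
    return n * n - n

for n in (2, 3, 4, 10):
    print(n, transition_count(n))   # 2->2, 3->6, 4->12, 10->90

# Equivalently, enumerate the ordered pairs directly.
situations = ["highway", "city", "rural", "construction"]
pairs = list(permutations(situations, 2))   # all (from, to) pairs, from != to
assert len(pairs) == transition_count(len(situations))
```

Doubling the list of situations from 10 to 20 takes the transition count from 90 to 380, which is the quadratic growth the paragraph above describes.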
Many cheerleaders have wrongly assumed that progress from one level of automation to the next is a direct, linear process, but it clearly isn’t. I’m not saying that this hurdle is insurmountable. Rather, the transition from Level 4 to Level 5 automation is multiple orders of magnitude more difficult than all the other levels combined. Its completion should not be taken as a foregone conclusion.
In the next installment, we will discuss another aspect of the higher levels of driving automation that is rarely considered—the moral difference between automation levels.
Jonathan Bartlett is the Research and Education Director of the Blyth Institute.
Also by Jonathan Bartlett: Guess what? You already own a self-driving car. Tech hype hits the stratosphere
Who built AI? You did, actually. Along with millions of others, you are providing free training data
“Artificial” artificial intelligence: What happens when AI needs a human I?