Brandom at The Verge fears that self-driving cars might be hitting an “AI roadblock.”
On its face, full autonomy seems closer than ever. Waymo is already testing cars on limited-but-public roads in Arizona. Tesla and a host of other imitators already sell a limited form of Autopilot, counting on drivers to intervene if anything unexpected happens. There have been a few crashes, some deadly, but as long as the systems keep improving, the logic goes, we can’t be that far from not having to intervene at all.
“Not having to intervene at all”? One is reminded of the fellow in C. S. Lewis’s anecdote who, when he heard that a more modern stove would cut his fuel bill in half, went out and bought two of them. He reckoned that he would then have no fuel bills at all.
Alas, something in nature likes to approach zero without really arriving…
That said, there are practical problems with self-driving cars as well, and Brandom identifies the problem of “generalization.” Generalization enables an artificial system to respond to a situation that is somewhat like, but not exactly like, others it has encountered. A residential street, for example, may feature bicycles but also bicycles built for two, bicycles pulling kiddy buggies, mopeds, motorized tricycles, electric scooters, motorized skateboards, rollerblades, parade floats headed for a storage shed, and guys pushing a fridge on casters.
From a driving perspective, it is irrelevant whether these forms of transport are or should be “allowed” on the road. Who wants to be the person who must explain why a casualty isn’t his fault? If we are careful drivers, the ability to generalize spares us that horrid task. But what about self-driving cars?
For a long time, researchers thought they could improve generalization skills with the right algorithms, but recent research has shown that conventional deep learning is even worse at generalizing than we thought. One study found that conventional deep learning systems have a hard time even generalizing across different frames of a video, labeling the same polar bear as a baboon, mongoose, or weasel depending on minor shifts in the background. With each classification based on hundreds of factors in aggregate, even small changes to pictures can completely change the system’s judgment, something other researchers have taken advantage of in adversarial data sets.
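The fragility described here can be sketched with a toy linear classifier. The weights, input, and animal labels below are hypothetical, chosen only to make the effect visible: a per-pixel nudge of 0.05, aligned against the gradient of the score, flips the label even though no single pixel changes by much. Real adversarial data sets exploit the same arithmetic against deep networks, just in far more dimensions.

```python
import numpy as np

# Toy linear classifier: score = w . x; positive score -> "polar bear",
# negative -> "baboon". Weights and input are illustrative, not from any
# real model.
rng = np.random.default_rng(0)
w = rng.normal(size=100)                 # 100 "pixels"
x = w / np.linalg.norm(w) * 0.1         # an input classified "polar bear"

def label(v):
    return "polar bear" if w @ v > 0 else "baboon"

# Fast-gradient-sign-style perturbation: move each pixel by epsilon
# against the sign of the weight (the gradient of the score w.r.t. x).
eps = 0.05
x_adv = x - eps * np.sign(w)

print(label(x))                          # polar bear
print(label(x_adv))                      # baboon
print(np.max(np.abs(x_adv - x)))         # no pixel moved by more than eps
```

The judgment flips because the classifier aggregates hundreds of small factors: an imperceptible change to each one sums to a large change in the overall score.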
It may not be possible to code in all possible situations. Many may not have been encountered by anyone before. But surely there is more to the matter than generalization. There is also the ability to see what is at stake:
The experimental data we have comes from public accident reports, each of which offers some unusual wrinkle. A fatal 2016 crash saw a Model S drive full speed into the rear portion of a white tractor trailer, confused by the high ride height of the trailer and bright reflection of the sun. In March, a self-driving Uber crash killed a woman pushing a bicycle, after she emerged from an unauthorized crosswalk. According to the NTSB report, Uber’s software misidentified the woman as an unknown object, then a vehicle, then finally as a bicycle, updating its projections each time. In a California crash, a Model X steered toward a barrier and sped up in the moments before impact, for reasons that remain unclear. More.
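The Uber case hints at why repeated reclassification matters: each new label restarts the system's projection of where the object is going. The sketch below is a toy tracker, not Uber's actual software, showing how a tracker that discards history on every relabel never accumulates enough trajectory to predict a path.

```python
from dataclasses import dataclass, field

# Illustrative only: a naive tracker that resets its motion history
# whenever the object's classification changes, so path prediction
# starts over from scratch each time.
@dataclass
class Track:
    label: str
    history: list = field(default_factory=list)  # past (x, y) positions

    def update(self, new_label, position):
        if new_label != self.label:
            self.label = new_label
            self.history = []        # reclassification erases the trajectory
        self.history.append(position)

    def predicted_velocity(self):
        # Needs at least two points since the last reclassification.
        if len(self.history) < 2:
            return None
        (x0, y0), (x1, y1) = self.history[-2], self.history[-1]
        return (x1 - x0, y1 - y0)

track = Track("unknown object")
observations = [("unknown object", (0, 0)),
                ("vehicle", (1, 0)),
                ("bicycle", (2, 0))]
for new_label, pos in observations:
    track.update(new_label, pos)

print(track.predicted_velocity())  # None: every relabeling erased the history
```

Despite three sightings of an object moving steadily across the road, the tracker has no usable prediction, because it never held the same label for two consecutive frames.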
Software architect Brendan Dixon raises another issue: Driverless cars will require changing both the roads and how people interact with the vehicles. That second point may prove trickier than the first. Many people may resent or fight restrictions on themselves aimed at benefiting the owners of driverless cars.
Dixon thinks the AI vehicles will catch on for commercial uses if they are cheaper:
Self-driving vehicles will arrive. But — and here’s the trick — they will not arrive as the techno-religious think. The long-haul trucking industry will most certainly be taken over by self-driving vehicles. A friend of mine, a senior AI researcher for Paul Allen, and I were envisioning how small changes to long roads (e.g., sensors in the roadway, dedicated lanes) would ensure the arrival of such trucks. And such trucks will save immense amounts of money. The challenges for them are snow and rough terrain, but they will be overcome (likely, as I suggested, with external assistance to the vehicles — imagine a new “job” to put on / take off chains as self-driving trucks cross passes).
It’s difficult to say whether this will then extend into the city. Urban taxis could be overtaken. In fact, without driverless cars, Uber’s business model very nearly falls apart. Their two top costs are a) cars and b) drivers, and not in that order. So, for them to succeed, they must eliminate the driver. Without doing so, Uber’s cost model becomes that of the taxi companies they desire to replace and their (so-called) disruption becomes acquiescence. Similar urban changes to those I noted for long-haul trucking — which cities, such as Seattle, are not far from — would seal the future for Uber (and Lyft).
One good thing about Dixon’s predictions is that they are specific, unlike the AI apocalypses that gather a crowd for science celebs. He raises practical questions: Is Uber a good part-time job in the long term? Is long-haul trucking a wise career choice? If governments earmark money for self-driving lanes, is that a future benefit to most citizens or only a few?
Meanwhile, computer vision researcher Filip Piekniewski predicts an AI winter:

Predicting the AI winter is like predicting a stock market crash: It’s impossible to tell precisely when it will happen, but it’s almost certain that it will happen at some point. Much like before a stock market crash, there are signs of the impending collapse, but the narrative is so strong that it is very easy to ignore them — even if they are in plain sight. In my opinion, signs already show a huge decline in deep learning (and probably in AI in general, as this term has been abused ad nauseam), yet hidden from the majority by an increasingly intense narrative. How “deep” will that winter be? I have no idea. What will come next? I have no idea. But I’m pretty positive it is coming, perhaps sooner rather than later. More.
Some, of course, will scoff that the fact that cars drove horses off the road points to an AI spring instead of Piekniewski’s winter. But that is an ambiguous example. Putting a nation on wheels for faster driving and greater prosperity is a different proposition from automating driving altogether. As The Economist puts it,
Cars changed the world in all sorts of unforeseen ways. They granted enormous personal freedom, but in return they imposed heavy costs. People working on autonomous vehicles generally see their main benefits as mitigating those costs, notably road accidents, pollution and congestion. More.
We will get a chance to find out how much personal freedom we must or will exchange in order to mitigate those costs.