Self-driving truck on highway (Adobe Stock, licensed)

Star self-driving truck firm shuts; AI not safe enough soon enough

CEO Stefan Seltz-Axmacher is blunt about the cause: Machine learning “doesn’t live up to the hype”

Lost in the noise of AI promises and prophecies are easily overlooked signs of trouble. Recently, Starsky Robotics, an autonomous trucking startup, announced that it was closing up shop. Starsky was not just another startup overwhelmed by business realities. In 2019, it was named one of the world’s 100 most promising startups by CNBC and one to watch by FreightWaves, a key trucking industry publication.

Neither was Starsky blinded by AI hype. The firm was designing trucks to be autonomous on the open road — far and away the easier problem — while relying on remote drivers for the critical first and last mile (that is, to and from the open road).

Stefan Seltz-Axmacher, co-founder and CEO of Starsky, wrote a long and thoughtful blog post explaining the decision and the issues that led to his firm’s demise. It is enlightening and puts the promise (and peril) of AI in perspective:

Timing, more than anything else, is what I think is to blame for our unfortunate fate. Our approach, I still believe, was the right one but the space was too overwhelmed with the unmet promise of AI to focus on a practical solution. As those breakthroughs failed to appear, the downpour of investor interest became a drizzle. It also didn’t help that last year’s tech IPOs took a lot of energy out of the tech industry, and that trucking has been in a recession for 18 or so months.

Stefan Seltz-Axmacher, “The End of Starsky Robotics” at Medium

Despite all their promise and the care that went into their development, Starsky’s trucks were not reliable. The reasons they failed, which Seltz-Axmacher distills to a single underlying cause, deflate the hype we so often hear about AI self-driving vehicles:

There are too many problems with the AV industry to detail here: the professorial pace at which most teams work, the lack of tangible deployment milestones, the open secret that there isn’t a robotaxi business model, etc. The biggest, however, is that supervised machine learning doesn’t live up to the hype. It isn’t actual artificial intelligence akin to C-3PO, it’s a sophisticated pattern-matching tool. [Emphasis added]

Stefan Seltz-Axmacher, “The End of Starsky Robotics” at Medium

Basically, the exponential improvement of Deep Learning/Machine Learning AI systems that everyone expected has not occurred. Earlier rapid improvements have flattened into a long tail of difficult “edge cases.” After a few years of what seemed to be unstoppable progress, AI is, again, hitting the hard wall of the real world. As Seltz-Axmacher puts it: “Rather than seeing exponential improvements in the quality of AI performance (a la Moore’s Law), we’re instead seeing exponential increases in the cost to improve AI systems…”

Unfortunately for its survival, Starsky took safety seriously:

By definition building safety is building the unexceptional; you’re specifically trying to make a system which works without exception.

Safety engineering is the process of highly documenting your product so that you know exactly the conditions under which it will fail and the severity of those failures, and then measuring the frequency of those conditions such that you know how likely it is that your product will hurt people versus how many people you’ve decided are acceptable to hurt.

Doing that is really, really hard. So hard, in fact, that it’s more or less the only thing we did from September of 2017 until our unmanned run in June of 2019. We documented our system, built a safety backup system, and then repeatedly tested our system to failure, fixed those failures, and repeated.

Stefan Seltz-Axmacher, “The End of Starsky Robotics” at Medium

Starsky’s business model suffered partly because investors prefer fancy new features over proven safety. Then the real world stopped the music.

Despite the promises and marketing, no form of AI is anywhere near replacing humans, even in limited-use scenarios. We have said this again and again. And it’s not fun watching someone’s dream die. But we can all learn from it.


Further reading: Under what conditions could an “AI winter” occur? Some recent thoughts from Brendan Dixon at Mind Matters News:

So is an AI winter really coming this time?

Just a light frost? Or an AI winter? It’s nice to be right once in a while—check out the evidence for yourself

and

AI Winter Is Coming: Roughly every decade since the late 1960s has experienced a promising wave of AI that later crashed on real-world problems, leading to collapses in research funding.


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked both as a Principal Engineer and Development Manager for industry leaders, such as Microsoft and Amazon, and numerous start-ups. While he spent most of that time on other types of software, he has remained engaged and interested in Artificial Intelligence.
