Automated Driving and Other Failures of AI
How would autonomous cars manage in an environment where eye contact with other drivers is important?

Yesterday I posted a review here of philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. There’s a lot more I would like to say. Here are some additional notes, to which I will add in a couple of future posts.
Three of the failures of Big Tech that I listed earlier (Eugene Goostman, Tay, and the image analyzer that Google lobotomized so that it could no longer detect gorillas, even mistakenly) were obvious frauds and/or blunders. Goostman was a fraud out of the box. Tay was a blunder that might be fixed, in the sense that its racist language could be mitigated through some appropriate machine learning. And the Google image analyzer was simply pathetic: either retire it entirely or fix it so that it doesn’t confuse humans with gorillas. Such failures suggest that an imminent machine takeover is unlikely. But there are more systemic failures, to which Larson draws our attention.
Out of Reach, and Getting More So
Level 5 automated driving (i.e., full automation with no need whatsoever for human guidance) remains not just out of reach; it seems to be getting further out of reach even as machine/deep learning becomes more powerful and sophisticated. This by itself suggests that the approach being taken is ill-starred. Larson makes the point well in his book: “Self-driving cars are an obvious case in point [the point being that, as Larson writes earlier, these “systems are idiots”]. It’s all well and good to talk up advances in visual object recognition until, somewhere out on the long tail of unanticipated consequences and therefore not included in the training data, your vehicle happily rams a passenger bus as it takes care to miss a pylon. (This happened.)” I’ve made the same point as well, here and here.
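To make that “long tail” point a bit more concrete, here is a purely illustrative toy sketch of my own (not from Larson’s book, with made-up features and numbers): a classifier trained only on the situations its builders anticipated, say orange pylons and clear road, has no category for a bus at all, so when a bus appears it gets shoehorned into whichever familiar category it happens to resemble.

```python
# Purely illustrative toy example (my own invention, not Larson's): a classifier
# that only knows the categories it was trained on, using made-up feature values.
# Each observation is (orange_color_intensity, fraction_of_lane_blocked).

def nearest_centroid(training_data, observation):
    """Assign the observation to the label whose training examples it most resembles."""
    best_label, best_distance = None, float("inf")
    for label, examples in training_data.items():
        centroid = [sum(values) / len(examples) for values in zip(*examples)]
        distance = sum((c - o) ** 2 for c, o in zip(centroid, observation))
        if distance < best_distance:
            best_label, best_distance = label, distance
    return best_label

# The training set covers only what the builders anticipated: orange pylons and clear road.
training_data = {
    "pylon":      [(0.90, 0.10), (0.85, 0.12), (0.95, 0.08)],
    "clear_road": [(0.05, 0.00), (0.00, 0.02), (0.03, 0.00)],
}

# A white bus blocking most of the lane was never in the training data -- it lives
# out on the "long tail" -- so it gets forced into the nearest familiar category.
bus = (0.10, 0.90)
print(nearest_centroid(training_data, bus))  # prints "clear_road" -- and the car drives on
```

The point is not about this toy code, of course. The point is that no amount of polishing the classifier helps with situations that were never in the training data in the first place.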
Automated Driving in Moldova?
Picking up on that last point, it would be interesting to see what fully automated driving would look like in a place like Moldova. A U.S. friend of mine who happened to visit the country was surprised at how Moldovan drivers managed to avoid hitting each other despite a lack of clear signals and rules about when to take an opportunity and when to hold back. When he asked his Moldovan guide how the drivers managed to avoid accidents, the guide answered with two words: “eye contact.” Apparently, the drivers could see in one another’s eyes who was willing to hold back and who was ready to move forward. Now that’s a happy prospect for fully automated driving. Perhaps we need “level 6” automation, at which AI systems learn to read the eyes of drivers to determine whether they are going to hold back or make that left turn into oncoming traffic.
This example suggests to me that AI lags hopelessly behind the full range of human intellectual capabilities. It also suggests that we, in the cosseted and sanitized environments that we have constructed for ourselves in the U.S., have no clue what capabilities AI actually needs to achieve to truly match what humans can do. The shortfall facing AI is extreme.
You may also wish to read:
Artificial intelligence: Unseating the inevitability narrative. William Dembski: World-class chess, Go, and Jeopardy-playing programs are impressive, but they prove nothing about whether computers can be made to achieve AGI. In The Myth of Artificial Intelligence, Erik Larson shows that neither science nor philosophy back up the idea of an AI superintelligence taking over.