AI Should Mean Thinking Smarter, Not Less
We should be all the more engaged when we use technology, not less.
Tim Harford, writer of the Undercover Economist column for the Financial Times, shared in a recent podcast the tragic story of Elisia Sanchez, who unquestioningly followed her GPS navigation into Death Valley, California, in August 2009. While Elisia survived after being lost for five days, her six-year-old son, Carlos, did not. Death Valley wilderness coordinator Charlie Callagan called the incident “death by GPS”: “People are renting vehicles with GPS and they have no idea how it works and they are willing to trust the GPS to lead them into the middle of nowhere.”
Why do things sometimes go so wrong?
The rangers at Death Valley national park in California call it “death by GPS”. It describes what happens when your GPS fails you, not by being wrong, exactly, but often by being too right. It does such a good job of computing the most direct route from point A to point B that it takes you down roads that barely exist, or were used at one time and abandoned, or are not suitable for your car, or that require local knowledge that would make you aware that making that turn is bad news.
Greg Milner, “Death by GPS: are satnavs changing our brains?”, The Guardian, June 25, 2016
Harford points to the tragedy to raise an important question: How do we know when a given technology is really helping us? And when are we taking too great a risk or paying too high a price?
He cites the 2007 financial crisis, which was triggered when AIG, an insurance company that underwrites Wall Street banks, put too much faith in its predictive algorithms. He also looks at Japanese researchers’ unsettling finding that study participants who relied on GPS had much poorer recall of how they arrived at their destination than those who navigated by other means.
Repeatedly, those who relied on technology without thinking did worse than those who kept their minds active.
Computer algorithms and AI can make you faster but, in the words of a medical diagnosis coder in Florida, “it can also make you a little lazy” (Forbes, January 3, 2020). We’ve seen this as well with over-hyped driver-assist technologies: “Thanks Autopilot: Cops stop Tesla whose driver appears asleep and drunk” (Ars Technica, 2019).
So, what does Harford suggest? We should be more engaged when we use technology, not less. When using our GPS, we should know what direction we are traveling. Let’s not be these tourists: “Sat-nav sends Swedish tourists to wrong end of Italy after Capri spelling mistake” (Telegraph, 2009). If AI is assisting you at work, double-check the results. If your car has driver-assist features, remember that you are the driver and pay attention to the big picture.
Technology works best when we use it and do not simply follow it. Blind trust in computers can, not just metaphorically, lead us straight to Death Valley.
If you enjoyed this piece, you may also enjoy these by Brendan Dixon:
Machines can’t teach us how to learn. A recent study used computer simulations to test the “small mistakes” rule in human learning.
Just a light frost? Or AI Winter? It’s nice to be right once in a while. Check out the evidence for yourself.
and
I am giving up cycling. It’s just not worth it if a machine can beat me 😉