Futuristic autonomous vehicle cockpit with digital screens (created with generative AI; image licensed via Adobe Stock)

Congress Boosts “Kill Switch” Technology to Control Drivers

Federal agency power poised to extend to your every move.

Next thing you know, you’ll be sitting in the driver’s seat when Siri or Alexa informs you:

“Sorry, you may not drive. This vehicle is temporarily disabled. Please try again later.”

There is no override, no “lost password” feature to bypass the lockdown. It won’t matter where you’re going or how urgently you need to get there. The AI-powered system has decided you are not fit to drive.

Yet another dystopian fantasy? Hardly. Congress and the President enacted Public Law 117–58 (Nov. 15, 2021), which directs federal regulators to issue national rules requiring passenger vehicles “to be equipped with advanced drunk and impaired driving prevention technology.” Say it that way, and who could speak against the idea? After all, AI systems would be saving lives.

Continued funding approved by Congress over strong opposition in November 2023 means the “advanced drunk and impaired driving prevention technology” could appear in cars as soon as 2026. The AI-powered systems would “(i) passively monitor the performance of a driver of a motor vehicle to accurately identify whether that driver may be impaired; and (ii) prevent or limit motor vehicle operation if an impairment is detected.”
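Taken at face value, those two statutory clauses describe a sense-and-block control loop. The toy Python sketch below reduces that logic to a few lines; every class, score, and threshold in it is hypothetical, invented here only to make the structure visible, since the law specifies none of them.

```python
"""Illustrative sketch only: the statute's two-part mandate --
(i) passively monitor, (ii) prevent or limit operation -- reduced
to a toy control loop. All names and numbers are hypothetical."""

import random
from dataclasses import dataclass

@dataclass
class Reading:
    impairment_score: float  # hypothetical 0.0-1.0 score from passive sensing

class ToySensor:
    def passive_monitor(self) -> Reading:
        # Stand-in for steering-submovement or camera analysis
        return Reading(impairment_score=random.random())

class ToyVehicle:
    def __init__(self) -> None:
        self.enabled = True

    def prevent_or_limit_operation(self) -> None:
        self.enabled = False  # the "kill switch": the driver has no override

THRESHOLD = 0.8  # chosen by designers, not by the driver or a court

def impairment_loop(sensor: ToySensor, vehicle: ToyVehicle) -> None:
    for _ in range(10):  # stands in for "while the vehicle is running"
        if sensor.passive_monitor().impairment_score > THRESHOLD:
            vehicle.prevent_or_limit_operation()
            break

car = ToyVehicle()
impairment_loop(ToySensor(), car)
print("vehicle enabled:", car.enabled)
```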

The Technology Is Here

How might the driver approval system work? CorrActions has built “a software-only, motion-based, driver monitoring product” that uses “unconscious, uncontrollable, muscular submovements to monitor brain activity.” Input comes from the steering wheel or the driver’s smartphone.

CorrActions says the system “can detect a wide range of cognitive states,” such as “fatigue, inattention, anxiety, alcohol/drugs, etc.” Reportedly, a driver’s interactions with a smartphone app give enough information to determine blood alcohol content (BAC) with 90% accuracy, while never mistakenly reporting a high BAC.
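CorrActions has not published its algorithms, so the following sketch is only a guess at the general shape of such a pipeline: derive jitter-like features from steering submovements, then map them to cognitive-state scores. The features, weights, and scoring below are invented for illustration; the company’s actual method is proprietary.

```python
"""Hedged sketch of a motion-based cognitive-state monitor in the spirit
of CorrActions' public description (steering/phone submovements mapped to
cognitive states). Features, weights, and scoring are invented here."""

import statistics

def submovement_features(steering_angles: list[float]) -> dict[str, float]:
    # Tiny involuntary corrections show up as high-frequency jitter; we
    # proxy that with the variance of successive angle changes.
    deltas = [b - a for a, b in zip(steering_angles, steering_angles[1:])]
    return {
        "jitter": statistics.pvariance(deltas) if len(deltas) > 1 else 0.0,
        "drift": abs(sum(deltas)),  # net wander off the original heading
    }

def classify(features: dict[str, float]) -> dict[str, float]:
    # A real system would use a trained model; these weights are made up.
    jitter, drift = features["jitter"], features["drift"]
    return {
        "fatigue": min(1.0, drift * 0.5),
        "inattention": min(1.0, drift * 0.3 + jitter * 0.2),
        "anxiety": min(1.0, jitter * 0.8),
        "alcohol/drugs": min(1.0, jitter * 0.5 + drift * 0.2),
    }

print(classify(submovement_features([0.0, 0.4, -0.3, 0.5, -0.6, 0.2])))
```

Note that every score here is just arithmetic over designer-chosen weights, a point that matters when we ask below who sets the cutoffs.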

The Solution Creates Problems

Now consider: A federal government agency uses AI to decide whether any of the over 330 million people in America may drive a car. AI designers build the driver approval systems. The drunk driver problem is solved. The federal agency and the AI bask in heroes’ glory.

Big-picture questions and everyday tradeoffs must be faced, however. Consider first the daily real-world problems that the driver approval system (DAS) creates. Your car will not operate at all when you’ve had a couple of beers and:

  • Your wife’s suddenly in labor
  • Your child’s sick, injured, or bleeding
  • A tornado is spotted tearing up your town
  • You need to move your car away from a brush fire
  • You need to move your car out of your garage where a water heater leak is gushing

Regardless of any beer drinking, no system runs perfectly forever. Batteries deplete and wires fray, or the DAS can simply break down and deny any vehicle use. It won’t know or care why you want to drive.

In all cases, the DAS cannot evaluate the urgency of your situation. The DAS can’t weigh the risks vs. benefits of your driving. It doesn’t know or care that your trip is on a rarely used roadway, or that the medical clinic is only two miles away. It’s a federally mandated robot, applying a national one-size-fits-all rule.

Claims, Lawsuits, Facts, Opinions, Truth, Justice

A DAS would be expected to record and save the data about each driver’s “cognitive state.” Naturally, police investigators and insurance companies would seek that data when considering who is “at fault” for an accident. Would the various authorities take the DAS records of a driver’s “impairment,” “fatigue,” “anxiety,” or “inattention” as unquestionable facts?

Not if fairness and justice still matter. Any device that supplies evidence in a legal case would need to be evaluated for validity and reliability. To use DAS reports about mental state, you would need solid evidence from the designers about the machine’s accuracy, how recently it was tested and calibrated, and whether internal or external harmful factors affected its reports. Could the DAS be affected by electrical or radio interference, jamming, data corruption, human hacking, or errors in software updates?

Digging deeper: If the DAS reports “impairment,” “anxiety” or “inattentiveness,” for example, are these answers only yes or no? Would the reports indicate any shades of gray, any degrees of severity? Perhaps the reports would indicate emotional states on a scale of 0 to 9. Where does that scale come from? And who decides that anxiety=5 with inattentiveness=6 should prevent driving or stop the car on the road?
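To make that arbitrariness concrete, here is a minimal sketch of the kind of cutoff table such a system would have to contain somewhere. Every number, state, and rule in it is invented; nothing in the mandate supplies them.

```python
"""A hypothetical DAS cutoff table. The 0-to-9 scores, the states listed,
and the "any score at its cutoff blocks driving" rule are all invented;
the mandate specifies none of them."""

# Scores a DAS might report for one driver (invented)
report = {"anxiety": 5, "inattentiveness": 6, "fatigue": 2}

# Who decides these cutoffs, and on what evidence?
CUTOFFS = {"anxiety": 5, "inattentiveness": 6, "fatigue": 7}

def may_drive(report: dict[str, int]) -> bool:
    # One designer's choice: block driving if ANY score reaches its cutoff.
    # Another designer might require two, or weight them; the statute is
    # silent either way.
    return all(report[state] < CUTOFFS[state] for state in CUTOFFS)

print(may_drive(report))  # False under these invented numbers
```

Change any one number in CUTOFFS and a different set of drivers is stranded, yet nothing in the statute says which table is the “correct” one.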

Nothing ensures that a DAS will be “correct” in any objective way. Just as polygraph machines are neither “lie detectors” nor “truth detectors,” a DAS will be limited to designers’ choices about (1) what data is relevant, (2) how to calculate results, and (3) how to present the computer’s conclusions about highly subjective human physiological and psychological factors. Putting a DAS into all vehicles gives rise either to absolute rule by AI or to lots of lawsuits seeking both truth and fair human outcomes in disputes.

DAS 2.0: Dangerous Servant, Eager Master

Consider again the big picture: society’s mindset about how government makes decisions and imposes power upon the citizens. The federal DAS mandate normalizes the idea that a government agency using AI can and should “solve” problems of health, safety, and finances. An agency and its bureaucrats can “just follow orders” from AI systems they consider uber-experts. No more “human error” – AI becomes the super intellect. And no citizen can reasonably challenge that intellect’s decisions. Rule by experts with computers becomes the government model.

The DAS concept itself practically begs to expand government power. Today, the DAS is called a “passive monitoring” system that interacts with the car driver. Once the DAS is mandatory and widely installed, bright ideas will abound, e.g., limiting or blocking car use by people who owe taxes, who are accused of infractions, who are deemed troublemakers, or whose “carbon footprint” exceeds some number.  

Technology will evolve to connect the DAS with centralized control systems designed to advance the bureaucratic policy du jour. Just as Alexa today can listen to people’s conversations and word choices, so could DAS 2.0 detect language and thoughts discussed in private.

Mandating a government monitoring and control technology in private vehicles ushers in top-down control of individuals’ movement, thoughts, speech, and action. Lord Acton might well have quipped: Power corrupts, AI power corrupts exponentially.  


Richard Stevens

Fellow, Walter Bradley Center on Natural and Artificial Intelligence
Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center on Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington D.C., taught legal research and writing at George Washington University and George Mason University law schools, and now specializes in writing dispositive motion and appellate briefs. He has authored or co-authored four books, and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.
