Front view, Capitol dome building at night, Washington DC, USA. Illuminated Home of Congress and Capitol Hill. Artificial Intelligence concept, hologram. AI, machine learning, neural network, robotics
Image licensed via Adobe Stock

The AI Bandwagon & Biden’s Executive Order

There are dangers, and mitigation of AI risks is needed. The question, though, is how.

On October 30, 2023, President Biden issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. A follow-up OMB policy for the management of AI was announced on March 28, 2024.

Swallowing the AI hype, the directive kills a fly with a grenade just in case there are other flies nearby. AI remains an exciting, often mind-blowing technology, but the hyped futuristic depictions of AI in The Terminator and The Matrix are unrealizable science fiction. Unlike humans, AI will never understand what it is doing, be creative, or experience qualia. AI is a tool. Like electricity or thermonuclear energy, it can be used for good or evil. It can also be the source of unforeseen accidents, ranging from frayed house wiring to Chernobyl.

There are dangers, and mitigation of AI risks is needed. The question, though, is how and to what degree. Several well-aimed rifle shots would work better than Biden's proposed shotgun blast.

There Are Dangers 

Many AI dangers needing attention are neither new nor confined to AI. Securing systems against hacking is of paramount importance, but cybersecurity has long been a challenge that demands ongoing vigilance. (That's one of many reasons you periodically get forced Microsoft Windows updates.) The back-and-forth volleys between system protectors and hackers continue as an arms race.

Unexpected outcomes can be dangerous. Early large language models (LLMs) slandered the innocent and offered vile advice to the young. Much of this has been fixed in ChatGPT. But the number of possible outcomes increases exponentially as system complexity increases linearly. Recently, to my amusement, I showed that ChatGPT-4 cannot be instructed not to do something. Tell it to draw a picture with NO ELEPHANTS and chances are you'll see an elephant in the generated picture. This can be fixed, but other unexpected outcomes lurk in the deep recesses of an LLM's trillions of parameters. The LLM developers will keep putting band-aids on cuts as they are discovered.
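For the curious, here is a minimal sketch of how one might reproduce the NO ELEPHANTS test, assuming the OpenAI Python SDK and access to a DALL·E-style image model (the model name and prompt are illustrative assumptions, not the exact setup I used):

```python
# Minimal sketch of the "NO ELEPHANTS" negation test.
# Assumes the OpenAI Python SDK (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable. The model name
# "dall-e-3" is an assumption; use whatever image model you have.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt="A grassy savanna at sunset, with NO ELEPHANTS anywhere.",
    n=1,
    size="1024x1024",
)

# Inspect the image by eye: despite the explicit negation, the word
# "elephants" tends to pull the generation toward elephant imagery.
print(response.data[0].url)
```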

A serious AI danger is the nefarious use of deep fakes. Faked photos date back at least to the 1920s and the Cottingley Fairies, where two young girls supposedly posed with fairies they had found fluttering in the woods. Sir Arthur Conan Doyle, the creator of Sherlock Holmes, believed the faked photos were real. The AI faking of images today is vastly more sophisticated, and, to date, there is no 100% foolproof method of detection. Deep fake videos of politicians and celebrities with matching deep fake voices are getting more realistic and more problematic. But solutions are being sought and, with proper standards, the deep fake problem will be successfully addressed.

One immediate stab at the problem is to extend the NIL (name, image, and likeness) rights given to college football players to every US citizen: outlaw deep fakes of any individual made without their permission. The resulting threat of litigation would dampen bad actors. Last month, US Representatives Eshoo and Dunn introduced legislation requiring all AI-generated content to be labeled with digital watermarks or metadata.

Spend and Grow

There are AI dangers, but Biden's executive order grenade unnecessarily dictates more government and more spending in every conceivable area touched by AI. OMB policy now requires all federal agencies to designate Chief AI Officers and AI Governance Boards by May 27. And even though the vast majority of advances in AI now come from the private sector (think Tesla, OpenAI, Midjourney, DeepMind, Bing, and Amazon), expanded government grants for AI research are called for.

Biden's edict also overlooks ongoing efforts to tackle deep fakes. Standards and monitoring methods are currently being developed by organizations such as the Coalition for Content Provenance and Authenticity. With Adobe, the BBC, Google, Intel, Microsoft, Publicis Groupe, Sony, and Truepic on its steering committee, the coalition is developing methods to authenticate documents using watermarking and a chain-of-custody history. Organizations with similar charters include the Content Authenticity Initiative and Project Origin. Once standards are developed, they can be adopted by existing organizations like NIST (the National Institute of Standards and Technology), which already deals with AI. Another is Underwriters Laboratories (UL), which already certifies products in the areas of AI, virtual reality, and cybersecurity. Biden's edict unnecessarily duplicates such efforts.
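To make the chain-of-custody idea concrete, here is a toy sketch of the cryptographic principle underneath such standards: sign a hash of the content so that any later edit is detectable. This is not the C2PA specification itself, only an illustration; the file name and key handling are placeholder assumptions:

```python
# Toy illustration of content authentication: sign a file's hash so that
# any later modification invalidates the signature. Real provenance
# standards (e.g., C2PA) embed signed manifests and edit history; this
# sketch shows only the core idea. Requires: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_content(path: str, key: Ed25519PrivateKey) -> bytes:
    """Hash the file and sign the digest with the creator's private key."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return key.sign(digest)

def verify_content(path: str, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Recompute the hash and check the signature; False if the file changed."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()    # stands in for a camera or publisher key
sig = sign_content("photo.jpg", key)  # "photo.jpg" is a placeholder file
print(verify_content("photo.jpg", sig, key.public_key()))  # True until the file is edited
```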

Some Specifics

Here are a few specific troubling edicts in the executive order. For example: “When at the airport, travelers will continue to have the ability to opt out from the use of TSA facial recognition.” Why? All the TSA does here is compare a photo of your face with the picture on your ID. AI does this better than a human, and the image taken at the airport does not need to be saved. Could this be a remnant of the push not to require ID when voting?

There is another requirement that AI decisions be reviewed by humans. This is generally a good call. AI makes mistakes; ChatGPT, for example, admits this when you log in: “ChatGPT can make mistakes. Consider checking important information.” But, as I argue in my book The Case for Killer Robots, there need to be exceptions to this rule. In life-or-death situations, the required response time can be shorter than is humanly possible, and autonomous AI may be necessary. If, for example, a swarm of a thousand hostile drones is attacking, there is not enough time for a human to aim at and shoot down each individual drone. The task must be turned over to an autonomous system. The Biden doctrine does allow appeal of its edict on final human approval of AI decisions. Let's hope the DOD does so.
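A back-of-the-envelope calculation shows why. All of the numbers below are illustrative assumptions, not real specifications, but they capture the arithmetic of the argument:

```python
# Back-of-the-envelope timing for the hostile drone-swarm example.
# Every number here is an illustrative assumption, not doctrine.
drones = 1_000
human_seconds_per_drone = 5.0     # assumed: acquire, aim, fire, confirm
machine_seconds_per_drone = 0.05  # assumed autonomous engagement rate
attack_window_seconds = 120.0     # assumed time before the swarm arrives

human_total = drones * human_seconds_per_drone      # 5,000 s, about 83 minutes
machine_total = drones * machine_seconds_per_drone  # 50 s

print(f"Human operator:    {human_total:,.0f} s")
print(f"Autonomous system: {machine_total:,.0f} s")
print("Human fast enough?  ", human_total <= attack_window_seconds)    # False
print("Machine fast enough?", machine_total <= attack_window_seconds)  # True
```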

And lastly, there is a call to end bias in AI. No one wants AI to spew racial slurs or laud xenophobia. But distinguishing between bias and legitimate social or political viewpoints can be challenging. What one person views as bias, such as opinions on climate change or Covid, others may consider conviction. Eliminating bias entirely from AI is a fantasy. AI devoid of bias is like water without wetness.

Takeaways

The Biden AI edict throws AI at everything. It mentions (take a deep breath) global warming, detecting opioid use, healthcare, financial services, education, decreasing drug costs, housing, law, transportation, supporting American workers, deconfliction of air traffic, assuring equal opportunity, curtailing discrimination and bias in the justice system, advancing racial equity, job displacement, labor standards, workplace equity, and health. AI will “expand agencies’ capacity to regulate, govern, and disburse benefits, and … cut costs and enhance the security of government systems.” Borrowing from the Pledge of Allegiance, we read that AI will “advance civil rights, civil liberties, equity, and justice for all.” I'm not an expert in most of these fields, and maybe AI will have an impact. But it sounds as if whoever wrote Biden's executive order jotted down everything that popped into their mind.

This is history repeating itself. In the mid-twentieth century, a Bell Labs employee named Claude Shannon wrote a landmark paper that underpins how your cell phone works today. He ushered in the digital age. So remarkable was Shannon's paper introducing information theory that everyone and their brother tried to apply it to whatever they were doing. This troubled Shannon. In a paper titled “The Bandwagon,” he expressed concerns that apply today to AI:

“Information theory has, in the last few years, become something of a scientific bandwagon. … [It] has received an extraordinary amount of publicity in the popular as well as the scientific press. … Although this wave of popularity is certainly pleasant and exciting for those of us working in the field, it carries at the same time an element of danger. … It will be all too easy for our somewhat artificial prosperity to collapse overnight when it is realized that the use of a few exciting words like information [theory] do not solve all our problems.”

To paraphrase Shannon, “the use of a few exciting words like artificial intelligence and machine learning do not solve all our problems.” Biden's executive order radiates the contrary view. He has jumped on an AI bandwagon that carries all the baggage of the media's AI disinformation and hype. Like Shannon's information theory, AI is not a cure-all for every problem. A more impactful executive order could be crafted with input from cutting-edge domain experts who know what they are talking about.

