Mind Matters Natural and Artificial Intelligence News and Analysis
Image Credit: Sardar - Adobe Stock

Preemptive Pardon in “Big Beautiful Bill” Protects Big AI

But is Big AI the new “military-industrial complex”?

A major flashpoint in President Trump’s sweeping legislative package, the One Big Beautiful Bill (BBB) Act, has been the proposal for a federal moratorium on state-level regulation of artificial intelligence (AI). Originally proposed as a 10-year freeze, the measure has since been revised in response to internal Republican opposition.

There is a parallel in the COVID crisis, where Big Pharma was exempted from liability for the side effects of its vaccines. For COVID, the motivation was to enable rapid distribution to save lives. For the BBB, a motivation is to shield Big AI from responsibility at the state level for the consequences of its products. Big AI loves this.


Is Big AI the new “military-industrial complex”?

During the Cold War, nations competed in an arms race to develop big beautiful weapons superior to their adversaries’. Is such an act needed to ensure the US wins the AI race? Dwight Eisenhower (1890–1969) warned against the power of the military-industrial complex. Today, the same warning could be sounded against Big AI.

Sam Altman, the OpenAI (ChatGPT) CEO, loves the protection he gets from the BBB. He says President Trump “really understands the importance of leadership in this technology, the potential for economic transformation, the geopolitical importance.”

Original moratorium meets Republican opposition

Sen. Ted Cruz (R-TX) introduced the initial version of the AI provision, which would have barred states from regulating AI for a decade in exchange for access to a $500 million federal fund for AI infrastructure and deployment. Cruz framed the moratorium as essential to maintaining U.S. dominance in the global AI race: “The country that leads in AI innovation will shape the future.” Cruz called the pause “a victory for American entrepreneurs, Little Tech, small businesses, and states like Texas.” Nothing is said about any harm done by AI to others.

Backlash came from within Cruz’s own party, especially from Sen. Marsha Blackburn (R-TN), who feared that Tennessee’s recently enacted Ensuring Likeness, Voice, and Image Security (ELVIS) Act would be nullified. The law protects artists against unauthorized AI deepfakes of their voices and likenesses. “To ensure we do not decimate the progress states like Tennessee have made to stand in the gap,” Blackburn said, “I am pleased Chairman Cruz has agreed to exempt state laws that protect kids, creators, and other vulnerable individuals.”

Seventeen GOP state governors, including Arkansas Governor Sarah Huckabee Sanders, disagree with the BBB provisions that outlaw their AI oversight. They sent a letter to Senate Republicans asking them to strip the provision from the bill, saying it’s “the antithesis of what our Founders envisioned.”

A five-year compromise with key exemptions

After negotiations, Blackburn and Cruz struck a compromise that reduced the moratorium from ten to five years and introduced significant carveouts. Under the revised provision, states may still regulate AI in areas involving:

  • Unfair or deceptive acts or practices
  • Child online safety and child sexual abuse material
  • Rights of publicity
  • Protections for a person’s name, image, voice, or likeness
  • Common law doctrines that do not place an “undue or disproportionate burden” on AI systems

Unfortunately, the phrase “undue or disproportionate burden” risks watering down the entire compromise. Holding Big AI accountable may itself be ruled as too burdensome for Big AI.

What to do?

Big AI company lawyers already require their chatbots to warn us not to trust the responses to our queries.

Nevertheless, common sense says AI companies need to be responsible for their product — just as if it were created directly and disseminated by a human. Many civil and criminal laws are already in place to cover such cases and should, at the discretion of the US or an individual state, be strengthened or relaxed.

Let’s hope Big AI doesn’t get preemptively excused from the consequences of the use of its product.
