California: The New “Deepfakes” Ban Violates the First Amendment!
Outrage over the AI-generated imitations of Kamala Harris’s voice distracts from the issues around the state’s power grab over media

Articles following the AP’s September 17, 2024 report headlined “California laws cracking down on election deepfakes by AI face legal challenges” topped search engine results. Most of them jumped on the artificial intelligence (AI) “deepfake” angle, scarcely touching upon the First Amendment freedom of expression issues the law raises.
Like a deceptive play in football, most of the coverage “faked” toward the AI issue while the California legislative “quarterback” threw a long pass to complete a blatant violation of fundamental rights.
The real story here scarcely involves AI deepfake technology at all.
California’s law, AB 2839 (Cal. Elec. Code § 20012), is titled “Elections: deceptive media in advertisements.” AB 2839 prohibits certain kinds of speech about government office candidates in an election cycle.
On its face, the law is suspect
The First and Fourteenth Amendments restrain governments from passing laws that abridge citizens’ rights to freedom of speech and press. The U.S. Supreme Court has long held:
[A]s a general matter, the First Amendment means that government has no power to restrict expression because of its message, its ideas, its subject matter, or its content. … As a result, the Constitution demands that content-based restrictions on speech be presumed invalid . . . and that the Government bear the burden of showing their constitutionality.
The First Amendment especially protects political speech during election cycles whether the government likes it or not. The Supreme Court in 2010 said so plainly:
Speech is an essential mechanism of democracy, for it is the means to hold officials accountable to the people. … The right of citizens to inquire, to hear, to speak, and to use information to reach consensus is a precondition to enlightened self-government and a necessary means to protect it. The First Amendment has its fullest and most urgent application to speech uttered during a campaign for political office.
The federal Ninth Circuit in 2020 restated the rule bluntly: “Content-based regulations—those that target speech based on its topic, idea, or message—are presumptively invalid.”
California’s rush to block election parody videos
Christopher Kohls creates fake political ads mimicking Democratic Party politicians but inserting self-mocking and exaggerated statements. In July 2024, Kohls uploaded to X (formerly Twitter) a video using pictures and footage of presidential nominee Kamala Harris, but in certain places he dubbed over Ms. Harris’s real voice with a computer-generated voice that sounds just like her. The video was clearly labeled a “PARODY.” The computer-generated Harris voice says things that make her seem ridiculous. Kohls used generative AI tools to produce the voiceover. That is the only AI involvement in the case.
Here’s the video.
California Governor Gavin Newsom personally weighed in, tweeting a reply to Kohls’ parody video: “Manipulating a voice in an ‘ad’ like this one should be illegal. I’ll be signing a bill in a matter of weeks to make sure it is.” An existing bill was hurriedly finalized by the legislature and enacted to take effect immediately on September 17. Newsom’s press release asserted the law would “remove deceptive content from large online platforms.”
Kohls’ subsequent lawsuit explains that AB 2839 extends beyond mere “advertising,” but “to all ‘Election communication,’ which covers all speech concerning a ‘candidate for office’— not just advertising or campaign-generated works—including speech distributed ‘through the internet.’”
The lawsuit probes how AB 2839 purportedly ensures “California elections are free and fair” by “prevent[ing] the use of deepfakes and disinformation meant to prevent voters from voting and to deceive voters based on fraudulent content … for a limited time before and after elections.” The supposed “limited time” amounts to many months during election years: it begins 120 days before “any election in California” and extends to 60 days afterward. The elections include those for every “candidate for any federal, state, or local elected office in California,” including “any person running for the office of [U.S. President or Vice President] … on a [California] ballot.”
By its express text, AB 2839 aims to prevent not just deepfakes generally but also “disinformation,” that code word for “any words and information I don’t like.” Sorry, but American electioneering too often includes insulting and offensive charges, claims, and language, and most of this is constitutionally protected expression.
The Supreme Court in 1988 unequivocally declared that ridicule and satire are protected political expression: “Despite their sometimes caustic nature, from the early cartoon portraying George Washington as an ass down to the present day, graphic depictions and satirical cartoons have played a prominent role in public and political debate.” A 2023 federal Fifth Circuit decision followed that precedent to confirm: “The First Amendment’s protections apply to jokes, parodies, satire, and the like, whether clever or in poor taste.”
AB 2839’s Own Text Proves It Isn’t About AI, Deepfakes, or Even Deception
AB 2839 says it aims to prevent using deepfakes and disinformation that would “deceive voters based upon fraudulent content.” Except it doesn’t do that. AB 2839 expressly allows candidates to use deepfakes to falsely portray themselves. See Cal. Elec. Code § 20012(b)(2). A deceptive deepfake on social media is a deceptive deepfake – regardless of who posts it. But this law allows a candidate to, say, distribute a video of himself wearing a general’s uniform and conducting surrender discussions with deepfake images of terrorist leaders to liberate hostages – when the candidate did no such thing.
Supreme Court precedent would not tolerate this double standard. The Court has declared that the First Amendment is “deeply skeptical of laws that ‘distinguish among different speakers, allowing speech by some but not others.’” Such laws “run the risk that ‘the State has left unburdened those speakers whose messages are in accord with its own views.’”
The Sharp Teeth of AB 2839’s Lopsided Enforcement Methods
AB 2839 authorizes lawsuits not only by the State but by “[any] recipient of materially deceptive content … [concerning a] candidate or committee participating in the election, or elections official.” That means any activist, pundit, or nutcase can sue a video maker to seek an injunction “prohibiting the distribution of the materially deceptive content,” and to receive “general or special [monetary] damages,” plus “reasonable attorney’s fees and costs.”
If a video maker like Kohls defeats such a lawsuit, however, the video maker is not entitled to damages or attorney’s fees from the losing attacker. For attackers, AB 2839 presents the “heads I win, tails you lose” power to conduct lawfare against their political opponents based solely upon their opponents’ speech and publications.
The courts will be busy with such lawsuits, running up attorneys’ fees in the process, because terms like “materially deceptive content” require expensive litigation through lawsuits and appeals to define. The lawfare costs imposed on people giving political opinions via online videos will chill political speech, and all because of California’s law. AB 2839 targets political speech, the most protected form of expression under the First Amendment.
Seven Meritorious Challenges to AB 2839
As of this writing, I have found no “mainstream” Internet news source that has shared with readers the substance of Kohls’ legal challenges. In a nutshell, the points are:
(1) The State has no constitutional interest in preventing the Kohls videos or any AI-generated content about elections. (The sole exception is to prevent outright political corruption.)
(2) AB 2839 is unconstitutional because it discriminates against political speech based on content.
(3) AB 2839’s text is overbroad and vague, making it invalid for regulating core political speech.
(4) AB 2839 demands labeling requirements on AI-generated content that would visually blot out the video or spoken content, while unlawfully mandating specific disclaimers.
(5) AB 2839 discriminates against some speakers based upon content while allowing other candidates to use deepfakes to portray themselves doing or saying things they never did.
(6) AB 2839 makes strategic lawsuits inexpensive to launch by activists against political speakers, imposing all the costs of attorneys and fees upon the speakers (lose or win).
(7) When a speaker’s online video is challenged in court, the online platforms’ terms of service would likely immediately ban the video regardless of how the lawsuit eventually ends – thereby censoring speech based solely upon an accusation under an unconstitutional law.
Kohls’ lawyers, Messrs. Frank and Schulman, have fully supported every challenge with facts and controlling law. Their judicial complaint and motion for a preliminary injunction, filed in federal court, are accessible online.
The Kohls “Deepfakes” Did Not Need AI at All
Missed by most reports to date is this: Generative AI was not needed at all to produce Kohls’ parody videos. The only AI fakery was used to generate Ms. Harris’s voice, and a talented voiceover actress could have done that.
Overemphasizing the AI aspect of the situation cleverly draws attention away from the egregious First Amendment violations baked into California’s election laws.
Full disclosure: I continue to advocate legal prohibition of human-impersonation deepfake video technology, regardless of its intended good uses by some people. No one I know of wants to ban human voiceover satires and parodies, however. Under cover of “controlling deepfake deception,” AB 2839 empowers government control of speech and publication affecting the mainspring of a functioning democratic republic. Based upon Supreme Court and other longstanding precedents, AB 2839 should be declared unconstitutional on its face.