
Section 230: What Is It and Why the Controversy?

Does Section 230 provide Big Tech too much power, or is it necessary for the moderation of misinformation and inappropriate content?

At the center of the controversy between free speech and the rights of private companies lies Section 230, the controversial U.S. code dating back to 1996.

Toward the end of his term in 2020, former President Donald Trump famously tweeted that Section 230 should be “completely terminated.” Sen. Josh Hawley, Sen. Lindsey Graham, and Rep. Tulsi Gabbard voiced their support, but by and large, the sentiment was met with fierce resistance.

Advocates for the reform (or complete repeal) of Section 230 argue that it shields Big Tech companies from accountability when they engage in politically motivated censorship and content moderation. Supporters of Section 230 argue that it is essential to keep the internet free of misinformation and vile or obscene material.

Section 230

The Communications Decency Act was crafted in 1996. Internet platforms were just beginning to boom, which led to many legal questions needing answers, especially when it came to what should be allowed online and who would have the power to police such content. Specific concern focused on sexual and pornographic material online.

Senator Ron Wyden (D-OR) and Representative Chris Cox (R-CA) co-sponsored the Communications Decency Act to address the problems arising online.

Section 230(c) reads:

(1) No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) No provider or user of an interactive computer service shall be held liable on account of –

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

In other words, social media companies enjoy editorial power without the avenues for accountability imposed on other editorial institutions. In theory, this protects social media companies from expensive lawsuits when they remove truly terrible content.

It is true that this gives social media platforms the ability to remove “obscene, lewd, lascivious, filthy, excessively violent,” and “harassing” content that has no place being aired before the public. Such content would be opposed in public parks as well. But over the past year, Big Tech companies like Facebook and Twitter have wielded their editorial powers by engaging in political discrimination – censoring content based on political viewpoint rather than substantive harm. 

Before embarking further, however, let us turn to the beginning and evolution of Section 230.

The History of Section 230

Senator Josh Hawley (R-MO) has perhaps been the most outspoken critic of Section 230 in Congress. Earlier this year, Hawley published The Tyranny of Big Tech, a critical analysis of the power that Big Tech has accrued and wielded over the years. He argues for a reform of Section 230 that would allow social media users to sue social media platforms when their content is unjustly censored.

“Section 230 is the giant government subsidy on which Big Tech feeds and has built its empire,” writes Hawley. “It’s what Big Tech believes it cannot live without.” 

According to Hawley, the original intent of Section 230 was good. It was the reforms that followed quickly thereafter that turned over too much power to social media.

Congress’s principal concern in those days was to keep the internet from being overrun with pornography – and child predators, child exploitation, and smut in general. Enter the Communications Decency Act. The point of the law, as its lead sponsor in the Senate said at the time, was ‘to provide much-needed protection for children.’ It imposed liability on internet companies that displayed ‘obscene’ or ‘indecent’ material to minors. 

Josh Hawley, The Tyranny of Big Tech, page 127

A previous judicial ruling, however, had already held that any internet company engaging in content moderation was liable as a publisher for the content posted to its platform. But with the rise of social media platforms, this standard became almost impossible to meet.

So, reforms quickly followed, providing social media companies with greater and greater immunities not afforded to other media companies.

“At Big Tech’s behest, the courts soon changed this entire framework. They dramatically narrowed what behavior counted as publishing, granting internet companies broad discretion to make editorial decisions, including altering content, without becoming liable for the content they altered. Then courts nullified the ‘good faith’ requirement in the law for taking down content. Section 230 had required that internet companies act in ‘good faith,’ evenhandedly, with justifiable and non-discriminatory reasons, when they removed content from their platforms. But now the courts said they could take down content without needing to show good faith in the least. Finally, courts eliminated the requirement that distributors refrain from displaying material they know or should know is illegal.”

Josh Hawley, The Tyranny of Big Tech, page 128

Hawley continues:

[W]hen all was said and done, when the dust had cleared from this strenuous bout of judicial renovation, Section 230 had been completely rewritten. Under the new and improved statute, tech companies could shape or edit content without liability, could take down content without any show of good faith or fair dealing, and could display content they knew to be illegal – and no one could challenge any of it in court. No other media concerns – no newspaper, no television network, no entertainment or film company – enjoyed this kind of immunity.

Josh Hawley, The Tyranny of Big Tech, page 128

In its original form, the Communications Decency Act placed the same level of responsibility on an online forum engaging in content moderation that would be placed on any newspaper. The privileges afforded the online companies came with responsibilities.

But Big Tech has now received a “sweetheart deal” offered to no other publisher or editor. 

What About Talk of the First Amendment?

The First Amendment of the U.S. Constitution states that: “Congress shall make no law…abridging the freedom of speech, or of the press…”

The First Amendment specifically binds government from interfering with free speech, not private companies.

So argued Judge Robert L. Hinkle when he struck down the Florida law banning social media companies from political discrimination. 

The First Amendment says “Congress” shall make no law abridging the freedom of speech or of the press. The Fourteenth Amendment extended this prohibition to state and local governments. The First Amendment does not restrict the rights of private entities not performing traditional, exclusive public functions.

Judge Robert L. Hinkle, NetChoice, LLC et al., v. Ashley Brooke Moody et al., June 30, 2021

Of course, this argument becomes difficult after White House press secretary Jen Psaki told reporters in July that the Biden administration is “flagging posts for Facebook that spread disinformation” in an effort to promote “accurate information and boost trusted content.” Such an admission blurs the line between the rights of private companies and government coercion.

In their Wall Street Journal op-ed, Philip Hamburger and Clare Morell contend that social media companies like Facebook and Twitter have effectively become common carriers, which federal and state governments have the power to regulate. Where free speech used to occur in the public square, it now occurs on social media platforms, largely dominated by Facebook and Twitter.

A statute limiting the ability of a Big Tech company to express its own views would almost certainly be unconstitutional. What about a law limiting viewpoint discrimination where the companies serve as a publicly accessible conduit for the speech of others?

Philip Hamburger and Clare Morell, “The First Amendment Doesn’t Protect Big Tech’s Censorship” at the Wall Street Journal

They continue:

The problem with Section 230 is that it privileges the companies for serving the function of a common carrier without imposing the corresponding duties. This makes them uncommon carriers. They are so powerful as to avoid the burdens of common-carrier status while obtaining benefits for their role as conduits. State anti-discrimination statutes would merely impose a small portion of the common-carrier duties that Big Tech has thus far evaded.

Common-carrier status is particularly consistent with the First Amendment because the companies aren’t merely private actors. Yes, they are private. And they might protest that their dominance is simply a product of their private enterprise and superior product. But they have had profound government support, which helped secure their dominance.

Philip Hamburger and Clare Morell, “The First Amendment Doesn’t Protect Big Tech’s Censorship” at the Wall Street Journal

The pro-reform argument is that social media companies have grown so large that users need protection from unfair censorship. As it stands now, ordinary Americans have no legal recourse if their content is unfairly censored or moderated.

This is what Josh Hawley calls for in The Tyranny of Big Tech: “[T]reat the tech companies like the publishers they truly are, and let individuals sue them for acts of censorship or other breaches of good faith.”

This is also what Florida was seeking to do when Governor Ron DeSantis signed Senate Bill 7072 into law. It was halted by an emergency injunction hours before it was set to go into effect after NetChoice (representing the biggest names in social media) filed a lawsuit. 

The Fight Over Section 230 Continues

Still, there are fierce supporters of Section 230, opposed to any reform efforts. Sen. Wyden, one of the original sponsors of Section 230 back in 1996, writes at CNN:

Republican Congressman Chris Cox and I wrote Section 230 in 1996 to give up-and-coming tech companies a sword and a shield, and to foster free speech and innovation online. 

Essentially, 230 says that users, not the website that hosts their content, are the ones responsible for what they post, whether on Facebook or in the comments section of a news article. That’s what I call the shield. But it also gave companies a sword so that they can take down offensive content, lies and slime — the stuff that may be protected by the First Amendment but that most people do not want to experience online. And so they are free to take down white supremacist content or flag tweets that glorify violence (as Twitter did with President Trump’s recent tweet) without fear of being sued for bias or even of having their site shut down. Section 230 gives the executive branch no leeway to do either.

Ron Wyden, “I wrote this law to protect free speech. Now Trump wants to revoke it,” at CNN

In conclusion, fair concerns exist on both sides of this dialogue. No one wants an internet rife with pornography, violence, and general hatred. Some content moderation is needed in order to keep a safe and respectable platform. At the same time, accountability is needed so that content moderation does not become political viewpoint discrimination.

As it stands now, the battle is still contentious. Governor DeSantis plans to appeal Judge Hinkle’s emergency injunction, and other states like Texas are considering similar legislation to hold Big Tech accountable for its immense power. Both of these efforts are heavily opposed.

Perhaps repealing Section 230 entirely and keeping it entirely intact are both ineffective options. Perhaps an adjustment of Section 230 that fits the intended purpose of the larger Communications Decency Act would be a good start. Social media spaces have changed dramatically in the twenty-five years since the legislation was written and adopted – plenty of time for the evolution of a situation that now requires a rethinking of policy.


Caitlin Cory

Communications Coordinator, Discovery Institute
Caitlin Cory is the Communications Coordinator for Discovery Institute. She has previously written for Discovery on the topics of homelessness and mental illness, as well as on Big Tech and its impact on human freedom. Caitlin grew up in the Pacific Northwest, graduated from Liberty University in 2017 with her Bachelor's in Politics and Policy, and now lives in Maryland with her husband.
