Mind Matters Natural and Artificial Intelligence News and Analysis

New York Senate Tries To Shut Down Misinformation on Social Media

The introduction of the bill raises concerns about government intrusion into freedom of speech online

In December, a bill was introduced in the New York State Senate that would attempt to hold social media companies responsible for their role in circulating incitements to violence, incitements to self-harm, and misinformation.

Introduced by Democratic/Working Families Party Senator Brad Hoylman of Manhattan, the legislation would empower the state’s Attorney General to bring an action against any social media company that allows for the circulation of content that incites violence or that “includes a false statement of fact or fraudulent medical theory…”.

No person, by conduct either unlawful in itself or unreasonable under all the circumstances, shall knowingly or recklessly create, maintain or contribute to a condition in New York state that endangers the safety or health of the public through the promotion of content, including through the use of algorithms or other automated systems that prioritize content by a method other than solely by time and date such content was created, that the person knows or reasonably should know:

1. advocates for the use of force, is directed to inciting or producing imminent lawless action, and is likely to incite or produce such action;
2. advocates for self-harm, is directed to inciting or producing imminent self-harm, and is likely to incite or produce such action; or
3. includes a false statement of fact or fraudulent medical theory that is likely to endanger the safety or health of the public.

New York Senate Bill S7568

A violation would be categorized as a “public nuisance” and punishable by fines.

This bill specifically targets social media algorithms and their potential to promote dangerous content. As was reported by The Wall Street Journal last summer, Facebook’s algorithms promote the most popular content, which tends to be content that provokes “outrage and sensationalism.”

They concluded that the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder. “Misinformation, toxicity, and violent content are inordinately prevalent among reshares,” researchers noted in internal memos.

Keach Hagey and Jeff Horwitz, “Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead” at The Wall Street Journal

Hoylman’s bill would force social media companies to rework those algorithms, an effort that could gain wide support. What is more controversial is the bill’s inclusion of “a false statement of fact or fraudulent medical theory” as content that is dangerous and worthy of the intervention of the state.

The Free Speech Concern

“The fact is, it’s tantamount to removing an idea from the collective consciousness,” Bradley Martin told Mind Matters News. “That’s the insidious part of it.”

Bradley Martin is the Executive Director for the Near East Center for Strategic Studies. Last summer, he wrote about the dangers of social media censorship at The Washington Times. And that’s exactly what this bill would result in: the censorship of whatever is deemed “misinformation” by the technological and governmental powers-that-be.

“It’s part of a much larger trend that we’re seeing, whether it’s President Biden or on a more local level like this bill,” Martin explained. “It highlights the need for the protection of free speech, or maybe new legislation, like an Internet Bill of Rights, where people can speak their mind in politics and it’s protected.”

After U.S. Surgeon General Vivek Murthy suggested censoring misinformation last week, National Review senior writer David Harsanyi wrote:

Government officials have no role in dictating speech. In fact, they have a duty not to. Murthy’s comments wouldn’t be as grating if it weren’t so obvious that the Biden administration has been pressuring Big Tech companies, who oversee huge swaths of our daily digital interactions, to limit speech. Last summer, Jen Psaki casually informed the press that the White House was “flagging problematic posts for Facebook that spread disinformation.”

…[A]llowing the state to participate in rooting out ‘misinformation’ sets a terrible precedent that can be easily abused. As we’ve learned during the coronavirus pandemic, health officials often pass on bad information – and treat as ‘misinformation’ theories such as the now plausible possibility that our own government helped hide and suppress news about Chinese origins of Covid. Alleged ‘misinformation’ can often turn out to be perfectly reasonable inquiry or theorizing. Certainly, the state should not be final adjudicator of fact.

David Harsanyi, “It’s Not Government’s Job to ‘Root Out’ Misinformation” at National Review

That is the concern over this New York State Senate bill: not that social media companies have the power to choose what is posted on their own platforms, but that the state, through legislation such as the New York Senate bill, would be directing what speech is allowed on those platforms and what speech is not. The state’s involvement in such a task would be a direct violation of the First Amendment.

The Solution to Misinformation

Martin explained that the best solution to misinformation is not censorship, but more information:

It’s my view that the solution to misinformation online is more information, which can only happen through a free exchange of ideas and debate and discussion until a sort of solution or concordance can emerge. It would entail a very mature conversation among all parties concerned. Whether that’s going to happen is another story.

This opinion is also shared by the Royal Society, Britain’s foremost scientific society, which recently came out with a statement condemning the use of censorship to combat misinformation.

Society benefits from honest and open discussions on the veracity of scientific claims. These discussions are an important part of the scientific process and should be protected. When these discussions risk causing harm to individuals or wider society, it is right to seek measures which can mitigate against this. This has often led to calls for online platforms to remove content and ban accounts. However, whilst this approach may be effective and essential for illegal content (eg hate speech, terrorist content, child sexual abuse material) there is little evidence to support the effectiveness of this approach for scientific misinformation…

The Royal Society, “The online information environment”

Additionally:

…there is a risk that content removal may cause more harm than good by driving misinformation content (and people who may act upon it) towards harder-to-address corners of the internet.

The Royal Society, “The online information environment”

Senator Brad Hoylman’s office did not return requests for comment.


You may also wish to read: Royal Society: Don’t censor misinformation; it makes things worse. While others demand crackdowns on “fake news,” the Society reminds us that the history of science is one of error correction. It’s a fact that much COVID news later thought to need correction was in fact purveyed by official sources, not blogs or Facebook or Twitter accounts. (Denyse O’Leary)


Caitlin Cory

Communications Coordinator, Discovery Institute
Caitlin Cory is the Communications Coordinator for Discovery Institute. She has previously written for Discovery on the topics of homelessness and mental illness, as well as on Big Tech and its impact on human freedom. Caitlin grew up in the Pacific Northwest, graduated from Liberty University in 2017 with her Bachelor's in Politics and Policy, and now lives in Maryland with her husband.
