Hoylman seeks to hold social media accountable for violent hate speech, vaccine misinfo

Originally published in The Village Sun

Brad Hoylman wants to turn Big Tech’s algorithms against it to stop the spread of violent hate speech, anti-vaxx untruths and content encouraging self-harm.

The state senator is seething, way beyond an angry emoji, over how Facebook and other platforms have been able to dodge consequences, all the while feeding the madness.

In the week before the first anniversary of the Jan. 6 riot at the U.S. Capitol, and as vaccine hesitancy helps fuel the spread of the Omicron variant, Hoylman announced new legislation (S.7568) to hold social media platforms accountable for “knowingly promoting disinformation, violent hate speech and other unlawful content that could harm others.”

Section 230 of the federal Communications Decency Act protects social media platforms from being treated as publishers or speakers of content shared by users on their apps and Web sites. However, Hoylman’s proposed legislation instead focuses on the “active choices” these tech companies make when implementing algorithms designed to promote the most controversial and harmful content — content that, according to Hoylman, “creates a general threat to public health and safety.”

“Social media algorithms are specially programmed to spread disinformation and hate speech at the expense of the public good,” Hoylman said. “The prioritization of this type of content has real-life costs to public health and safety. So when social media push anti-vaccine falsehoods and help domestic terrorists plan a riot at the U.S. Capitol, they must be held accountable. Our new legislation will force social media companies to be held accountable for the dangers they promote.”

For years, social media companies have cited Section 230 to claim protection from legal consequences for how they handle content on their Web sites. However, Hoylman argues, social media Web sites are no longer simply “impassive” hosts for their users’ content.

On the contrary, many social media companies employ complex algorithms designed to put the most controversial and provocative content in front of users as much as possible, Hoylman and others charge. These algorithms drive engagement with their platforms, keep users hooked and increase profits. In other words, Hoylman argues, social media companies employing these algorithms “are active participants in the conversation.”

As the bill states, “No person…shall knowingly or recklessly create, maintain or contribute to a condition in New York State that endangers the safety or health of the public through the promotion of content, including through the use of algorithms or other automated systems that prioritize content by a method other than solely by [the] time and date such content was created… .”

This past October, Frances Haugen, a former Facebook employee, testified before U.S. senators that the tech behemoth knew of internal research showing that its product was harmful to teenagers but purposefully hid those findings from the public. The whistleblower also said that Facebook was willing to use hateful content to keep users glued to the site.

According to Hoylman, this kind of “social media amplification” has been linked to many societal ills, including vaccine disinformation, encouragement of self-harm, bullying and body-image issues among youth, and extremist radicalization leading to terrorist attacks like the Jan. 6 riot at the U.S. Capitol.

According to a press release by Hoylman about the bill, “When a Web site knowingly or recklessly promotes hateful or violent content, they create a threat to public health and safety. The conscious decision to elevate certain content is a separate, affirmative act from the mere hosting of information and therefore not contemplated by the protections of Section 230 of the Communications Decency Act.”

The state senator’s measure would allow the New York State attorney general, the New York City Law Department and private citizens to hold social media companies and others accountable when they promote content that they know or “reasonably should know”: “advocates for the use of force, is directed to inciting or producing imminent lawless action, and is likely to produce such action; advocates for self-harm, is directed to inciting or producing imminent self-harm, and is likely to incite or produce such action; or includes a false statement of fact or fraudulent medical theory that is likely to endanger the safety or health of the public.”

Co-sponsoring the bill with Hoylman in the state Senate is Anna Kaplan, who represents part of Long Island.