New York Law Targets ‘Hateful Conduct’ on Social Media in Wake of Buffalo Massacre
All social media platforms that operate in New York State will now be required to provide and maintain mechanisms for users to report “hateful conduct,” as part of a 10-bill legislative package designed to tighten gun laws that was signed into law by Governor Hochul.
The bill on social media monitoring — Senate Bill S4511A, in legislative parlance — is part of a package containing the most significant changes to New York gun laws in a generation. It seeks to limit the proliferation of online hate of the kind espoused by the Buffalo gunman, who shared a racist post online before the mass shooting in May.
“My legislation will empower social media users to keep virtual spaces safer for all by providing clear and consistent reporting mechanisms to flag hate speech,” the Democratic state senator who introduced the bill in Albany, Anna Kaplan, said after it was signed.
The bill received bipartisan support in both the senate and the assembly, passing the senate by a vote of 60 to 3.
“The mass shooting in Buffalo was a tragic reminder of how powerful social media can be in spreading hateful, dangerous ideologies online,” Congressman Sean Maloney told the Sun. “It is important that we have better methods for addressing threats on social media.”
The bill was in part an attempt to address gaps in Facebook’s policy against hate speech: when the bill was drafted last year, Facebook allowed users to report hate speech only from its website, not its app. Indeed, a 2021 report by the Wall Street Journal declared Facebook’s application of its own community standards opaque and inconsistent. Facebook now appears to allow users to report hate speech from its app as well.
Ms. Kaplan’s bill aims to resolve these discrepancies by requiring social media networks to establish an easily accessible procedure for users to report hateful posts. It seeks to apply the expression, “If you see something, say something,” to the digital world.
“This is a new concept bill,” Ms. Kaplan’s communication director, Sean Ross Collins-Sweeney, told the Sun. “I don’t know that anyone else has done it before.”
States have faced constitutional challenges in their attempts to moderate speech online, which frequently collide with First Amendment concerns. The Supreme Court, in a 5-4 vote, recently blocked a Texas law that banned social media companies from censoring users based on their political views.
In Florida, a federal appeals court blocked a law that would punish social media platforms like Facebook, Twitter, and TikTok for allegedly suppressing conservative expression. The court held that it was a governmental overreach to dictate how social media companies conduct their work, and declared content moderation and curation protected under the First Amendment.
“Our bill is different,” Mr. Collins-Sweeney said in reference to the Texas and Florida laws. “It does not infringe on First Amendment rights because it does not say what can or cannot be on a platform.”
Although the bill includes a definition of hate speech, Mr. Collins-Sweeney explained that it is solely a guide for social media companies. “Ultimately, what happens with the speech is 100 percent the determination of the social platform,” he maintained.
Not everyone is so sanguine. A senior fellow at the Harvard Kennedy School’s Carr Center for Human Rights Policy, John Shattuck, warned that the law could carry “pretty severe implications from a freedom of speech standpoint.”
Mr. Shattuck described a “balancing act” between protecting freedom of speech and combating disinformation. He observed that “people are picking up things on social media that are full of disinformation and venom, often motivating them to be involved in and undertake mass shootings.”
Every social media network that operates in New York State will be required to abide by this law, which will ultimately be enforced by the state’s attorney general, Letitia James. However, its efficacy remains in question as social media platforms have historically struggled with speech moderation.
The executive director of the New York City-focused Surveillance Technology Oversight Project, Albert Fox Cahn, expressed skepticism. He argued that social media networks are invasive in their collection of user data and engage in “discriminatory and error-prone” content moderation, targeting users based on their political affiliation. Both Democrats and Republicans have raised concerns about government surveillance.
“We don’t see evidence that these sorts of bans on hate speech are easily scaled up by these mass social media sites,” Mr. Cahn said. “Social media platforms aren’t actually able to engage in the sort of pre-crime policing that politicians sometimes want.”
Mr. Cahn argues that this law, like other online speech regulations, is a well-intentioned but ultimately ineffective method of preventing gun violence. He speculated that “this may prove to be more of a publicity stunt than a projection.”
Others are looking elsewhere for a template. Mr. Shattuck recommends requiring algorithmic transparency of networks, along the lines of the Digital Services Act recently passed in the European Union. Such a mandate would make the algorithms that determine the circulation of social media content transparent to users and would likely face fewer First Amendment hurdles than existing content moderation laws.
“That’s, I think, a better method than picking and choosing among particular types of information that might be prohibited or allowed,” Mr. Shattuck explained.
This type of mandate might also gather more bipartisan support. Americans broadly favor some methods of controlling disinformation on social media, though opinion splits by party, with Democrats showing greater approval of such efforts than Republicans.
“The problem of disinformation is one of the biggest problems in our society today, particularly involving democracy, the principles of democracy, and the way in which polarization has resulted from a tremendous amount of disinformation,” Mr. Shattuck said.