Some state legislators are ready to fight what they term vaccine misinformation being posted on social media platforms.
A.7581, introduced recently by Assemblywoman Patricia Fahy, D-Albany, and its companion bill S.4512, introduced by Sen. Anna Kaplan, D-Carle Place, would define vaccine misinformation as a public expression, whether verbal, written or through images, that is intentionally misleading about vaccines, vaccine side effects, vaccine components, vaccine efficacy, the relationship between illness and vaccines, or any other substantive information as it relates to vaccines.
“This legislation is part of a package aimed at addressing concerns about misinformation that is spread on social media networks. New Yorkers are all familiar with the expression if you ‘see something, say something,’ but unfortunately many virtual social media platforms make the process of ‘saying something’ confusing at best, and impossible at worst,” Fahy and Kaplan wrote in their legislative justification. “This legislation seeks to empower users of social media to keep virtual spaces safer for all by providing clear and consistent reporting mechanisms for instances of vaccine misinformation.”
Social media companies would be required to maintain a mechanism for individual users to report and make complaints of vaccine misinformation. These mechanisms shall be clearly accessible to users and must be easily accessed from both apps and websites.
Each social media network would be required to create a clear and concise policy which includes how a social media network will respond and address incidents of vaccine misinformation which have been reported. Such policy shall include a mechanism to provide a direct response to an individual who has reported possible vaccine misinformation and how the matter is being handled.
According to a March analysis by the Associated Press, more than a dozen Facebook pages and Instagram accounts, collectively boasting millions of followers, have made false claims about the COVID-19 vaccine or discouraged people from taking it. Some of the pages have existed for years.
Of more than 15 pages identified by NewsGuard, a technology company that analyzes the credibility of websites, roughly half remain active on Facebook, the AP found.
Facebook also banned ads that discourage vaccines and said it has added warning labels to more than 167 million pieces of additional COVID-19 content, crediting its network of fact-checking partners. (The Associated Press is one of Facebook’s fact-checking partners.)
YouTube, which has generally avoided the same type of scrutiny as its social media peers despite being a source of misinformation, said it has removed more than 30,000 videos since October, when it started banning false claims about COVID-19 vaccinations. Since February 2020, it has removed over 800,000 videos related to dangerous or misleading coronavirus information, said YouTube spokeswoman Elena Hernandez.
Twitter officials said in March the company would remove dangerous falsehoods about vaccines, much the same way it has done for other COVID-related conspiracy theories and misinformation. But since April 2020, Twitter had removed a total of just 8,400 tweets spreading COVID-related misinformation — a tiny fraction of the avalanche of pandemic-related falsehoods tweeted out daily by popular users with millions of followers, critics say.
“It’s a hard situation because we have let this go for so long,” Jeanine Guidry, an assistant professor at Virginia Commonwealth University who studies social media and health information, told The Associated Press. “People using social media have really been able to share what they want for nearly a decade.”