Facebook users around the globe began to notice something strange happening on their feeds Tuesday night. Links to legitimate news outlets and websites, including The Atlantic, USA Today, the Times of Israel, and BuzzFeed, among many others, were being taken down en masse for reportedly violating Facebook’s spam rules. The problem impacted many people’s ability to share news articles and information about the developing coronavirus pandemic. Canadian pundit and podcast host Andrew Lawton said he was shocked to find that Facebook had wiped his episode archive and was barring him from sharing updates about Covid-19. “This is unreal,” he wrote in a since-deleted tweet.
Facebook attributed the problem to a mundane bug in the platform’s automated spam filter, but some researchers and former Facebook employees worry it’s also a harbinger of what’s to come. With a health crisis sweeping the globe, millions are confined to their homes, and social media platforms have become one of the most vital ways for people to share information and socialize with one another. But in order to protect the health of its staff and contractors, Facebook and other tech companies have also sent home their content moderators, who serve as their first line of defense against the horrors of the internet. Their work is often difficult, if not impossible, to do from home. Without their labor, the internet might become a less free and more frightening place.
“We will start to see the traces, which are so often hidden, of human intervention,” says Sarah T. Roberts, an information studies professor at UCLA and the author of Behind the Screen: Content Moderation in the Shadows of Social Media. “We’ll see what is typically unseen—that’s possible, for sure.”
After the 2016 US presidential election, Facebook significantly ramped up its moderation capabilities. By the end of 2018, it had more than 30,000 people working on safety and security, about half of whom were content reviewers. Most of these moderators are contract workers, employed by firms like Accenture or Cognizant in offices around the world. They work to keep Facebook free of violence, child exploitation, spam, and other unseemly content. Their jobs can be stressful, if not outright traumatizing.
On Monday night, Facebook announced thousands of contract content moderators would be sent home “until further notice.” The workers would still be paid—although they wouldn’t receive the $1,000 bonus Facebook is giving to full-time staff. To fill the gap, Facebook is shifting more of the work to artificial intelligence, which CEO Mark Zuckerberg has been heralding as the future of content moderation for years. Some of the most sensitive content, though, will be handled by full-time staff, who will continue working at the company’s offices, Zuckerberg told reporters on a call Wednesday.
Speaking about Facebook users, Zuckerberg said, “I’m personally quite worried that the isolation from being at home could potentially lead to more depression or mental health issues.” To prepare for the potential onslaught, Facebook is ramping up the number of people moderating content about things like suicide and self-harm, he added. Another concern is the spread of misinformation—always an issue online, but particularly during a public health crisis. As part of its wider response to Covid-19, Facebook also announced it’s rolling out a Coronavirus Information Center to the newsfeed, where people can get updated information about the pandemic from authoritative sources.
As complaints over the spam glitch grew on Tuesday, those affected as well as some former Facebook employees wondered if it could be connected to the company’s recent workflow changes. “It looks like an anti-spam rule at FB is going haywire,” Facebook’s former security chief Alex Stamos said on Twitter. “Facebook sent home content moderators yesterday, who generally can’t [work from home] due to privacy commitments the company has made. We might be seeing the start of the [machine learning] going nuts with less human oversight.”
Facebook’s vice president of integrity Guy Rosen quickly swooped in to clarify: “We’re on this—this is a bug in an anti-spam system, unrelated to any changes in our content moderator workforce. We’re in the process of fixing and bringing all these posts back,” he wrote in a reply to Stamos on Twitter. (When asked for more detail as to what happened Tuesday evening, Facebook policy communications manager Andrew Pusateri directed WIRED to Rosen’s tweet.)
But researchers say problems like Tuesday night’s could become more common in the absence of a robust team of human moderators. YouTube and Twitter announced Monday that their contractors would be sent home as well, and that they too would be relying more heavily on automated flagging tools and AI-powered review systems. Leigh Ann Benicewicz, a spokesperson for Reddit, told WIRED on Tuesday that the company had “enacted mandatory work-from-home for all of its employees,” which also applies to contractors. She declined to elaborate about how the policy was impacting content moderation specifically. Twitch did not immediately return a request for comment.