Robert Mardini, the director general of the International Committee of the Red Cross (ICRC), says that the organization has its own trends analysis unit that uses software to monitor Twitter and other online sources in places where the organization operates. That can help keep workers safe in conflict zones, for example.
Of course, you can’t believe everything you read on Twitter. During a crisis, emergency responders using social media must figure out which posts are false or unreliable, and when to call out dangerous rumors. This is where Twitter’s own moderation capacity can be crucial, experts say, and an area for concern as the downsized company changes. In conflict zones, military campaigns sometimes include online operations that try to use the platform for weaponized falsehoods.
“Misinformation and disinformation can inflict harm on humanitarian organizations,” Mardini says. “When the ICRC or our Red Cross Red Crescent Movement partners face false rumors about our work or behavior, it can put our staff’s safety in jeopardy.”
In May, Twitter introduced a special moderation policy for Ukraine aimed at curbing misinformation about its conflict with Russia. Nathaniel Raymond, coleader of the Humanitarian Research Lab at Yale’s School of Public Health, says that though Twitter has not made any recent announcements about that policy, he and his team have seen evidence that it is being enforced less consistently since Musk took over as CEO and fired many staff working on moderation. “Without a doubt we are seeing more bots,” he says. “This is anecdotal, but it appears that that information space has regressed.” Musk’s takeover has also put into doubt Twitter’s ability to preserve evidence of potential war crimes posted to the platform. “Before, we knew who to talk to to get that evidence preserved,” Raymond says. “Now we don’t know what’s going to happen.”
Other emergency responders worry about the effects of Twitter’s new verification plan, which is on hold after some users who paid for a verification check mark used their new status to imitate major brands, including Coca-Cola and drug company Eli Lilly. Emergency responders and people on the front lines of a disaster both need to be able to determine quickly whether an account is the legitimate Twitter presence of an official organization, says R. Clayton Wukich, a professor at Cleveland State University who studies how local governments use social media. “They’re literally making life and death decisions,” he says.
WIRED asked Twitter whether the company’s special moderation policy for Ukraine remains in place, but did not receive a response as the company recently fired its communications team. A company blog post published Wednesday says that “none of our policies have changed” but also that the platform will rely more on automation to moderate abuse. Yet automated moderation systems are far from perfect and require constant upkeep from human workers to keep up with changes in problematic content over time.
Don’t expect emergency managers to leave Twitter immediately. They are, by nature, conservative, and unlikely to rip up their best practices overnight. FEMA’s public affairs director Jaclyn Rothenberg did not respond to questions about whether the agency is contemplating changing its approach to Twitter. She said only that “social media plays a crucial role in the field of emergency management for rapidly communicating during disasters and will continue to for our agency.” On a practical level, people have been primed to expect emergency updates on Twitter, and it could be dangerous for agencies to abandon the platform.
For people who work in emergency management, the upheaval at Twitter has raised larger questions about what role the internet should play in crisis response. If Twitter becomes unreliable, can any other service fill the same role as a source not only of distraction and entertainment but also of dependable information on an ongoing disaster?
“With the absence of this kind of public square, it’s not clear where public communication goes,” says Leysia Palen, a professor at the University of Colorado Boulder who has studied crisis response. Twitter wasn’t perfect, and her research suggests the platform’s community has become less adept at organically amplifying high-quality information. “But it was better than having nothing at all, and I don’t know we can say that anymore,” she says.
Some emergency managers are making contingency plans. If Twitter becomes too toxic or spammy, they could turn their accounts into one-way communication tools, simply a way to hand out directions rather than gather information and quell worried people’s fears directly. Eventually, they could leave the platform altogether. “This is emergency management,” says Joseph Riser, a public information officer with Los Angeles’ Emergency Management Department. “We always have a plan B.”