On Biases in Tech: Facebook’s Evolving Safety Check

Safety Check—the crisis response tool that lets users mark themselves as 'safe' on Facebook—still leaves some of its critics wanting more.

Facebook’s “Safety Check” feature was activated last Friday for the sixth time in only six weeks. The tool was triggered in Orlando after the Pulse attack, during the explosion at Istanbul’s airport, after a truck bombing in Baghdad, for the Dallas police shooting, again in Nice, France, and most recently in Munich, Germany, after a deadly attack at a shopping mall. Since Facebook initiated Safety Check during the terrorist attack in Paris last year—the first time it was used for a non-natural disaster—use of the feature has skyrocketed. Safety Check has been activated at least 30 times since January alone, more than twice the number of activations in 2014 and 2015 combined.

This is partly because Safety Check can now be “community-generated.” Rather than relying on Facebook employees to manually activate the feature, last month’s update allows the platform’s algorithms to initiate the check if a critical mass of users is posting about a crisis in a specific location. While the new community-generated feature distances the company from the responsibility of choosing when to activate the tool, Facebook engineers still determine the critical mass of users that will trigger it.
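Facebook has not published how its activation algorithm works; a minimal sketch of the general idea—counting crisis-related posts per location and firing once an engineer-chosen threshold is crossed—might look like the following. The keyword list, threshold value, and function names here are all illustrative assumptions, not Facebook's actual implementation.

```python
# Hypothetical sketch of threshold-based Safety Check activation.
# Facebook's real algorithm, signals, and thresholds are not public;
# the keywords and threshold below are illustrative assumptions only.

from collections import defaultdict

CRISIS_KEYWORDS = {"explosion", "shooting", "attack", "earthquake"}
ACTIVATION_THRESHOLD = 1000  # the engineer-chosen "critical mass" (assumed value)

def count_crisis_posts(posts):
    """Count posts per region that mention a crisis keyword.

    `posts` is an iterable of (region, text) pairs.
    """
    counts = defaultdict(int)
    for region, text in posts:
        if any(word in text.lower() for word in CRISIS_KEYWORDS):
            counts[region] += 1
    return counts

def regions_to_activate(posts, threshold=ACTIVATION_THRESHOLD):
    """Return the regions whose crisis-post volume crosses the threshold."""
    counts = count_crisis_posts(posts)
    return [region for region, n in counts.items() if n >= threshold]
```

The key design point the article raises lives in `ACTIVATION_THRESHOLD`: even in a "community-generated" system, someone at the company still picks that number, which is exactly the vetting-of-communities question Trewinnard raises below.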

These changes “allow for greater community ownership of activating Safety Check,” Tom Trewinnard, a business development manager in collaborative and multilingual web applications at Meedan Labs, told Civicist on Wednesday.

When asked about the critical mass of users necessary to activate Facebook’s algorithms, Trewinnard said, “This is, of course, the next issue—who decides which communities can activate Safety Check and what will be the impact of implicit bias in that process? Facebook has moved from vetting crises to vetting communities who vet crises: that seems like a step in the right direction, though I can certainly imagine further delegation of that responsibility being possible and desirable.”

Safety Check has undergone multiple iterations since its launch in 2014. The tool was originally meant to connect Facebook users in the aftermath of natural disasters, “a simple and easy way to say you’re safe and check on others.” When Safety Check is activated, users in the affected area receive a notification asking whether they are safe and whether they have friends who were also affected by the crisis. Their responses are automatically shared within their networks.

When Facebook first activated the feature during last year’s shooting in Paris, many critics asked why the company activated it then but not for similar attacks in Lebanon, Turkey, or Nigeria in the preceding months. Safety Check’s absence outside of Paris “play[ed] into a broader narrative of a perceived lack of media attention and care for disasters and crises that happen outside of the US and Western Europe,” said Trewinnard. Critics pointed out that Facebook developers are ill-equipped to distinguish crises worthy of Safety Check from those that are not.

Laurenellen McCann, director of New America DC and one of Time’s “30 People Under 30 Changing the World,” echoed Trewinnard’s concerns on her blog. “When Facebook lets you mark yourself safe in Paris but not in Beirut, Facebook indicates to us how THEY think about the world and reveals the bias in the experience they bring to us. They reveal what they think about tragedy and—I think it’s fair to say—they reveal their biases about whose lives count.”

In 2015, Facebook’s vice president of growth, Alex Schultz, defended Facebook’s decision to activate the check in Paris and not for the other attacks. “During an ongoing crisis, like war or epidemic, Safety Check in its current form is not that useful for people: because there isn’t a clear start or end point and, unfortunately, it’s impossible to know when someone is truly ‘safe,’” Schultz wrote.

Now, after two years of such criticism, the company appears to be deferring more to its users when it comes to crisis identification and response. “Over the past few months, we have improved the launch process to make it easier for our team to activate more frequently and faster, while testing ways to empower people to identify and elevate local crises as well,” a Facebook spokesperson told Fast Company in early July. By making Safety Check community-activated, Facebook is doing more to empower people on the ground, but it is also appeasing critics and offloading responsibility.

Wayan Vota, editor of ICTworks.org, an online community for people using emerging technologies for international development, was very critical of Facebook’s crisis identification process early on. “There is a very slippery moral slope in determining what is a disaster, especially from the safe confines of Silicon Valley. I don’t feel comfortable leaving it up to Facebook to decide which disasters are worthy of social media support or not,” he wrote on his organization’s site.

In an email to Civicist this week, Vota said he approved of Safety Check’s newest update, but wants to see coordination with local emergency response groups.

“The Red Cross considers community members themselves as the true first-responders to emergencies. Facebook’s new community-centric Safety Check update is a great step in that direction,” he wrote. “A next step could be integrating alerts to the Red Cross and other local responders when Facebook’s algorithms show a Safety Check launch.”

“I believe we should be building digital tools that allow users to self-identify what is an issue for them, and self-organize their response,” Vota said to Civicist.

What remains crucial, according to Tom Trewinnard, is Facebook’s commitment to the tool, especially if Safety Check becomes integrated into crisis response practices as Vota suggested. Trewinnard asked, “Is Facebook committed to maintaining availability of this feature for the long term? Is Facebook—a corporation subject to the whims of global markets and shareholders, accompanied by a corporate lack of transparency—the best owner of this kind of tool?”