Facebook announced in a blog post this week that it is taking steps to fight the youth suicide epidemic, including sharing data about how its users talk about suicide and self-harm and hiring a safety policy manager focused on health and well-being.
Among the noteworthy changes in policy is Facebook’s decision to “no longer allow graphic cutting images.” The company, which owns Instagram, said it would also “[make] it harder to search for this type of content and [keep] it from being recommended in Explore.” That’s in addition to an announcement made in February that Instagram would start blurring images that depicted graphic self-harm. (Facebook did not return a request for comment.)
But while researchers studying the rise in suicide among young people applaud that Facebook is making an effort, they say it’s unclear how the company’s very public announcement will translate into tangible results.
The link between social media and a worrisome spate of youth suicides in the last few years is increasingly well documented. While not limited to social sites, the phenomenon of suicide contagion—where suicides reported in the media lead to an increase in suicides or suicide attempts—is particularly dangerous when mixed with digital platforms designed for viral sharing.
Part of that uncertainty, says Jeanine Guidry, a professor in the School of Media and Culture at Virginia Commonwealth University, stems from the fact that research on the relationship between social media and suicide is nascent. In April, Guidry and her colleagues published a study in the Journal of Communication in Healthcare on the nature of conversations about suicide on Instagram.
They found that posts that mentioned and graphically showed ideas about suicide elicited higher engagement than other posts. Public health organizations, meanwhile, were not using Instagram to help fight suicide—a potentially lost opportunity, particularly given how strong the platform’s popularity is among teens and young adults, says Dan Romer, the research director at the University of Pennsylvania’s Adolescent Communication Institute.
“At a minimum, those behaviors can seem popular or normative,” Romer says. “At a serious level, it could encourage someone who is thinking of doing those things to imitate them.”
In May, he and his co-authors published a study that looked at how Instagram images of self-harm affected more than 700 young adults between the ages of 18 and 29. They found that within a month of being exposed to such images, 60% of participants had thought about what it would be like to inflict self-harm.
The work showed something else, too: Only 20% of the people in the study intentionally searched for content related to self-harm on Instagram, which meant a large number of people who proved susceptible to these images were accidentally exposed to self-harm content. In those cases, Facebook’s pledge to make it harder to search for such posts would be ineffective.
It’s also unclear how Facebook intends to achieve its goal of not allowing graphic cutting images. “They are going to need AI to identify these images because there are too many of them,” Romer says. But that presents a challenge. “How do you train artificial intelligence to consider the varying degrees of what people think is problematic?” he asks.
Another part of Facebook’s pledge—to engage its users with friends to help fight suicide contagion—shows some promise. In her research, Guidry found that the tone of comments defied expectations: “You hear about social media and negative comments and cyberbullying [leading to suicide], but we found almost exclusively supportive messages—‘I am here for you,’ ‘You’re not alone,’” she says. One recent study found that people readily used Instagram’s self-harm reporting tool when they learned about its existence and function.
So people care, and want to help. Romer, though, still isn’t sure how friends could come into play: “How do you know who a friend is? And how do you identify people [who could help]? There’s a privacy issue.”
Facebook’s plan to hire a safety expert was met with more questions than cheers. The position is wide in scope: it involves developing global policy to “thwart activity and individuals that undermine the safety, health and well-being of our global community,” and addressing (in addition to suicide and self-harm) “eating disorders, depression, anxiety, addiction, nutrition, healthy habits, vaccinations, and more.”
That, Romer says, is a “big, big, big job for whoever has that and that job will have a big, big challenge. I’ll be interested to see who gets it. The only person who would be qualified would be those who served on an injury prevention position at a major health organization like at the WHO or CDC, because that’s the level that they are looking for.”
“You could argue public relations is what they’re doing, that they’re sending the message they’re concerned,” Romer added. “Whether one person can organize efforts across all those things is a good question.”