Facebook Can Now Detect If A User Is At Risk For Suicide Just Through Their Posts

People share everything from major life announcements to what they're eating for dinner on social media, but what should happen if someone posts a status about feeling suicidal? One social media giant is trying to prevent tragedy through new technology. On Nov. 27, Facebook announced that its artificial intelligence technology will be used to detect posts that indicate a user is at risk for suicide. The software will scan posts for concerning language and report them to trained moderators, who will call first responders or send helpful resources to the person in crisis. This isn't Facebook's first suicide prevention effort — according to a Facebook spokesperson who spoke to Bustle on background, the company has been working on suicide prevention tools for more than 10 years. Last year, it created a reporting center for people concerned that their Facebook friends could be in danger. Earlier this year, Facebook added suicide prevention tools to Facebook Live after several disturbing cases of users live-streaming their suicide attempts.

This new initiative appears to be yet another effort to support people experiencing mental distress, as well as those who want to help their friends. The Facebook spokesperson tells Bustle that the social media network is in a unique position to connect people with friends and organizations who can provide support. Guy Rosen, VP of product management at Facebook, said in a press release that the technology is part of an "ongoing effort to help build a safe community on and off Facebook." According to Rosen, the AI will use pattern recognition to pick up on suicidal ideation in posts and videos. Facebook is also adding more moderators to its Community Reports team to review reports of suicidal posts. "This ensures we can get the right resources to people in distress and, where appropriate, we can more quickly alert first responders," Rosen said in the release.
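
Facebook hasn't shared the technical details of its system, but as a purely hypothetical sketch of the scan-review-escalate flow Rosen describes, a simple text classifier could score posts for concerning language and route high-scoring ones to a human review queue. The training examples, threshold, and triage function below are invented for illustration and are not Facebook's actual model.

# Hypothetical sketch only: not Facebook's system. A toy classifier scores
# posts for concerning language and flags high-scoring ones for human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples for illustration.
posts = [
    "I can't do this anymore, I want it all to end",
    "Does anyone else feel like there's no way out?",
    "Had a great dinner with friends tonight",
    "So excited for the concert this weekend!",
]
labels = [1, 1, 0, 0]  # 1 = concerning language, 0 = not concerning

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def triage(post, threshold=0.5):
    """Score a post and decide whether a human moderator should review it."""
    risk = model.predict_proba([post])[0][1]
    if risk >= threshold:
        return "flag for moderator review (score=%.2f)" % risk
    return "no action (score=%.2f)" % risk

print(triage("Is everything okay? You haven't seemed like yourself lately"))

In the real system, a flagged post goes to trained human reviewers rather than triggering any automatic response, and only they decide whether to send resources or alert first responders.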

According to Rosen, the tool was developed with help from crisis hotlines and mental health organizations like the National Suicide Prevention Lifeline.

John Draper, director of the National Suicide Prevention Lifeline, tells Bustle that research has shown AI can help identify people who might be in crisis. "We've been advising Facebook on how to create a more supportive environment on their social platform so that people who need help can get it faster, which, ultimately, can save lives," he says.

Facebook founder and CEO Mark Zuckerberg released a statement about the new initiative. "With all the fear about how AI may be harmful in the future, it's good to remind ourselves how AI is actually helping save people's lives today," the statement reads.

He wrote that suicide is a leading cause of death for young adults and that tools like this one can help prevent unnecessary deaths. "There's a lot more we can do to improve this further. Today, these AI tools mostly use pattern recognition to identify signals — like comments asking if someone is okay — and then quickly report them to our teams working 24/7 around the world to get people help within minutes," Zuckerberg said. "In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate."

Even though Facebook's technology could save lives — according to Rosen, first responders have conducted more than 100 wellness checks this month thanks to Facebook — some people are still worried about privacy. Facebook told TechCrunch that users can't opt out of having their posts scanned, and people began to voice concerns. One Twitter user compared the new technology to something out of the dystopian TV show Black Mirror. Additionally, the AI won't be used in the European Union because of a law that forbids personal data analysis without a user's permission.

I can understand the hesitancy. I've dealt with suicidal ideation before, but I've never asked for help publicly. How would I have felt if Facebook had automatically sent a moderator to message me? It's hard to say, but I can understand how this could feel like an invasion of privacy. It's also worth noting that many police officers aren't properly trained to help people with mental illness, and encounters between authorities and suicidal people can end with the person who needs help being killed. There's no easy solution, but Facebook seems to be trying to save lives, and this feels like a step in the right direction.