Facebook has developed algorithms that spot warning signs in users’ posts and in the comments their friends leave in response. The company is using a combination of pattern recognition, live chat support from crisis support organizations and other tools to prevent suicide, with a focus on its Live service.
There is one death by suicide every 40 seconds, and more than 800,000 people die by suicide every year, according to the World Health Organization. “Facebook is in a unique position—through friendships on the site—to help connect a person in distress with people who can support them,” the company said Wednesday.
The tool is being tested only in the US at present.
It marks the first use of AI to review messages on the network since founder Mark Zuckerberg announced last month that he also hoped to use algorithms to identify posts by terrorists, among other concerning content.
Facebook also announced new ways to tackle suicidal behavior on its Facebook Live broadcast tool and has partnered with several US mental health organizations to let vulnerable users contact them via its Messenger platform.
“Suicide prevention is one way we’re working to build a safer community on Facebook. With the help of our partners and people’s friends and family members on Facebook, we’re hopeful we can support more people over time,” the company said on its blog.