Facebook is doing its part to help users who might be struggling with self-harm. The company today announced that it is adding real-time self-harm prevention tools to Facebook Live and Facebook Messenger. It also revealed that it is testing new artificial intelligence technology designed to spot at-risk users by detecting whether a post they've shared "is very likely to include thoughts of suicide." If the AI flags such a post, it is then reviewed by Facebook's Community Operations team and might end up saving someone's life.
Facebook pointed out today that this is a serious matter: a suicide takes place every 40 seconds, and suicide is the second leading cause of death among young people. Alongside the AI detection, the new tools will also let users report their friends' concerning posts.
The AI uses pattern recognition to make suicide and self-injury reporting options more prominent, so that a user's friends can report a worrying post easily and take action quickly. Facebook does note that the AI detection and reporting options are only a test at this point, running in the United States, so it remains to be seen if and when the company rolls them out globally.
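Facebook has not published how its classifier actually works, but the general idea it describes, pattern-based flagging that routes posts to human reviewers rather than taking automated action, can be sketched in a few lines of Python. Everything below (the patterns, the threshold, and the function names) is a hypothetical illustration, not Facebook's system:

```python
import re

# Hypothetical phrase patterns for illustration only; a production
# system would learn signals from labeled data, not a hard-coded list.
RISK_PATTERNS = [
    re.compile(r"\bwant(?:s)? to die\b", re.IGNORECASE),
    re.compile(r"\bend it all\b", re.IGNORECASE),
    re.compile(r"\bno reason to live\b", re.IGNORECASE),
]

RISK_THRESHOLD = 1  # in this toy sketch, any single match flags the post


def flag_for_review(post_text: str) -> bool:
    """Return True if the post should be routed to human reviewers."""
    score = sum(1 for pattern in RISK_PATTERNS if pattern.search(post_text))
    return score >= RISK_THRESHOLD


review_queue: list[str] = []


def process_post(post_text: str) -> None:
    # Flagged posts go to a human review queue (in Facebook's case,
    # its Community Operations team) rather than triggering any
    # automated response.
    if flag_for_review(post_text):
        review_queue.append(post_text)
```

The key design point the article describes is the human in the loop: the model only surfaces candidates, and trained reviewers decide what happens next.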
Facebook has also developed new Messenger tools in partnership with the National Eating Disorders Association, the National Suicide Prevention Lifeline, the Crisis Text Line, and similar organizations. These let at-risk users, or friends concerned about them, chat with these groups directly from each group's Facebook page or through Facebook's suicide prevention tools.
This feature is also only in testing right now, but Facebook says it will be expanded in the coming months.
Source: newsroom.fb