Facebook has previously used artificial intelligence to surface suicide-reporting options to friends of a user in distress. The technology helps identify users who might need support and connects them with organizations and services that can provide it. Facebook has now said it will use its artificial intelligence to detect suicidal posts before they are even reported.
Facebook’s “proactive detection” artificial intelligence technology is going to scan all posts on the world’s largest social network for patterns of suicidal thoughts.
When warranted, the AI will send mental health resources to the user flagged as at risk. The information may also be sent to their friends, and Facebook may even contact local first responders.
Relying on artificial intelligence to flag posts with suicidal thoughts to human moderators, rather than waiting for user reports, should help Facebook cut the time it takes to get help to those users.
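Conceptually, the detection-and-flagging step looks something like the Python sketch below. This is only an illustration of the idea: the keyword patterns, threshold, and function names are assumptions for the sake of example, and Facebook's actual system relies on trained machine-learning models rather than simple pattern matching.

```python
import re

# Illustrative phrases only; a real system would use a trained classifier,
# not a hand-written keyword list.
RISK_PATTERNS = [r"\bwant to die\b", r"\bend it all\b", r"\bno reason to live\b"]


def risk_score(post_text: str) -> float:
    """Return a crude 0-1 score based on how many risk patterns match."""
    matches = sum(bool(re.search(p, post_text, re.IGNORECASE)) for p in RISK_PATTERNS)
    return min(1.0, matches / len(RISK_PATTERNS))


def flag_for_review(post_text: str, threshold: float = 0.3) -> bool:
    """Flag a post to human moderators when its score crosses the threshold,
    instead of waiting for another user to report it."""
    return risk_score(post_text) >= threshold


if __name__ == "__main__":
    sample = "I feel like there's no reason to live anymore."
    print(flag_for_review(sample))  # True -> route to a moderator queue
```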
Facebook is going to roll out this technology worldwide to detect posts with patterns of suicidal thoughts. The European Union is the exception, as the region's privacy laws make it difficult to deploy the technology without running afoul of regulators.
The AI will also be used to prioritize particularly risky or urgent user reports so that moderators address them first, and Facebook is dedicating more moderators to suicide prevention.
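The prioritization step can be pictured as a simple priority queue that always hands moderators the most urgent outstanding report. The sketch below is a minimal, hypothetical illustration of that idea and makes no claim about how Facebook's moderation tools are actually built.

```python
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Report:
    # heapq is a min-heap, so the negated risk score makes the
    # highest-risk report pop first.
    neg_score: float
    post_id: str = field(compare=False)


queue: list[Report] = []


def enqueue(post_id: str, score: float) -> None:
    """Add a report; higher-risk posts surface ahead of lower-risk ones."""
    heapq.heappush(queue, Report(neg_score=-score, post_id=post_id))


def next_report() -> str:
    """Give moderators the most urgent report still waiting for review."""
    return heapq.heappop(queue).post_id


enqueue("post-101", 0.42)
enqueue("post-102", 0.91)  # more urgent, reviewed first
print(next_report())       # -> post-102
```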