Facebook’s AI suicide prevention tool is a ‘black box.’ That should worry you.

Image: Vicky Leta / Mashable

For people who have dedicated their lives to preventing suicide, social media posts can be a valuable dataset containing hints about what people say and do before they attempt suicide.

In recent years, researchers have built algorithms to learn which words and emoji are associated with suicidal thoughts. They’ve even used social media posts to retrospectively predict the suicide deaths of certain Facebook users.

Now Facebook itself has rolled out new artificial intelligence that can proactively identify elevated suicide risk and alert a team of human reviewers who are trained to reach out to users contemplating fatal self-harm.

An example of what someone may see if Facebook determines they need help.

Image: Facebook

The technology, which Facebook recently announced, represents an unparalleled opportunity. Before the AI tool was even publicly announced, Facebook used it to help dispatch first responders in 100 “wellness checks” to ensure users’ safety. The program’s life-saving potential is enormous, but the company won’t share many details about how it works or whether it will widely discuss its findings.

That is bound to leave some specialists in the field confused and worried.  

Munmun De Choudhury, a researcher who studies social media and mental wellbeing, commends the social network for focusing on suicide prevention, but she’d like Facebook to be more transparent about its algorithms.

“This isn’t just another AI tool — it tackles a really sensitive issue,” she said. “It’s a matter of someone’s life and death.”

Facebook knows the stakes, which is why its VP of product management, Guy Rosen, emphasized in an interview how AI significantly speeds up the process of identifying distressed users and getting them resources or help.

But he declined to talk in depth about the algorithm’s variables beyond a few general examples, such as worried comments from friends and family, the time of day, and the text of a user’s post. Rosen said the company, which has partnerships with suicide prevention organizations, wants to learn from researchers, but he wouldn’t discuss how or whether Facebook might publish or share insights from its own use of AI.
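
To make those general examples concrete, here is a minimal sketch of how signals like a post’s text, concerned comments, and time of day could be combined into a single risk score. Facebook has not disclosed its actual features, model, weights, or thresholds, so every name and number below is a hypothetical placeholder, not a description of its system.

```python
# Hypothetical sketch only: Facebook has not published its features, model,
# weights, or thresholds. Everything below is an illustrative placeholder.
from dataclasses import dataclass


@dataclass
class PostSignals:
    text_risk: float        # score from a text classifier run on the post, 0..1
    comment_concern: float  # score over friends' comments ("are you ok?"), 0..1
    hour_of_day: int        # local hour the post was made, 0..23


def risk_score(s: PostSignals) -> float:
    """Combine the signals into one score in [0, 1] using made-up weights."""
    late_night = 1.0 if (s.hour_of_day >= 23 or s.hour_of_day < 5) else 0.0
    score = 0.6 * s.text_risk + 0.3 * s.comment_concern + 0.1 * late_night
    return max(0.0, min(1.0, score))


def should_flag_for_review(s: PostSignals, threshold: float = 0.7) -> bool:
    """Route the post to trained human reviewers once the score crosses a threshold."""
    return risk_score(s) >= threshold


if __name__ == "__main__":
    post = PostSignals(text_risk=0.8, comment_concern=0.6, hour_of_day=2)
    print(risk_score(post), should_flag_for_review(post))  # 0.76 True
```

In practice the text and comment scores would come from learned models rather than hand-set weights, and, as the article notes, anything the system flags still goes to trained human reviewers rather than triggering an automatic response.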

“We want to be very open about this,” he said.  

While transparency may not be Facebook’s strong suit, in a field like suicide prevention it could help other experts save lives by revealing the behavioral or linguistic patterns that emerge before suicidal thinking or a suicide attempt. With more than two billion users, Facebook arguably has the largest database of such material on earth.

De Choudhury says transparency is vital when it comes to AI because it builds trust, a sentiment in short supply as people fret about technology’s potential to fundamentally disrupt their professional and private lives. Without enough trust in the tool, De Choudhury says, users may decide against sharing emotionally vulnerable or suicidal posts.

When users receive a message from Facebook, it does not indicate that AI flagged them as higher risk. Instead, they’re told that “someone thinks you may need extra support right now and asked us to help.” That “someone,” in these cases, is Facebook’s AI.

It’s also currently impossible to know how the AI determines that someone is at imminent risk, how accurate the algorithm is when looking for clues of suicidal thinking, or how often it makes mistakes. And because users won’t know they were flagged by AI, they have no way of telling Facebook that it mistakenly identified them as suicidal.

De Choudhury’s research involves analyzing social media to glean insights about people’s mental and emotional wellbeing, so she understands the challenges of both building such an algorithm and deciding which findings to publish.

She admits that a delicate balance needs to be struck. Disclosing certain aspects of the algorithm, for example, could lead users to oversimplify suicide risk by focusing on keywords or other signals of distress. And it might hand people with bad intentions data points they could use to analyze social media posts, identify users with perceived mental health issues, and target them.

Facebook also faces a different set of expectations and anxieties. It may consider its suicide prevention AI tool intellectual property developed for the public good. It may also want to use aspects of that property to improve its offerings to advertisers and marketers; after all, pinpointing a user’s emotional state is something that could be valuable to Facebook’s market competitiveness. The company has previously voiced interest in developing this capability.

Whatever the case, De Choudhury argues that Facebook can still contribute to efforts to use social media to understand suicide.

“I believe academically sharing how the algorithm works, even though they don’t disclose every excruciating detail, could be really valuable,” she says, “…because right now it’s really a black box.”    

Crisis Text Line, which partnered with Facebook to provide suicide prevention tools and support to users, shares its findings with the public and researchers — and it does use AI to determine people’s suicide risk.

“With the scale of data and number of people Facebook has in its system, it could be an incredibly valuable dataset for academics and researchers to understand suicide risk,” said Bob Filbin, chief data scientist for Crisis Text Line.

Filbin didn’t know Facebook was developing AI to predict suicide risk, but he said that Crisis Text Line is a proud partner and eager to work with the company to prevent suicide.

Crisis Text Line trains counselors to de-escalate texters from “hot to cool” and uses first responders only as a last resort. Facebook’s human reviewers, by contrast, confirm the AI’s detection of risk by analyzing the user’s posts. They provide resources and contact emergency services when needed, but don’t engage the user further.

Filbin expects Facebook’s AI to pick up on different signals than what surfaces in Crisis Text Line’s data. People who text the line do so looking for help, and for that reason may be more explicit in how they communicate their thoughts and feelings.

One simple example is how texters at greater risk of suicide say that they “need” to speak to a counselor. That urgency — compared to “want” — is just one variable the line’s AI uses to make a determination about risk. Another is the word “ibuprofen,” which Crisis Text Line found is 16 times more likely to predict that the person texting will need emergency services than the word “suicide.”
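
As a rough illustration of how word-level signals like these can feed a risk model, consider a toy scorer that weights terms by how strongly they predict an emergency outcome. Apart from the reported 16x figure for “ibuprofen,” every ratio below is invented, and Crisis Text Line’s actual model is certainly more sophisticated than this.

```python
# Toy illustration only: apart from the reported 16x figure for "ibuprofen,"
# these ratios are invented, and this is not Crisis Text Line's actual model.

# How much more likely a conversation containing the term is to end in an
# emergency response, relative to a baseline term.
TERM_RISK_RATIOS = {
    "ibuprofen": 16.0,  # reported as 16x more predictive than the word "suicide"
    "need": 3.0,        # assumed weight for urgency ("need" vs. "want")
    "want": 1.0,
    "suicide": 1.0,     # baseline term in the article's comparison
}


def conversation_risk(text: str) -> float:
    """Multiply the ratios of any risk terms present; higher means riskier."""
    words = set(text.lower().split())
    score = 1.0
    for term, ratio in TERM_RISK_RATIOS.items():
        if term in words:
            score *= ratio
    return score


print(conversation_risk("i need to talk to someone and i took ibuprofen"))  # 48.0
print(conversation_risk("i want to talk to someone"))                       # 1.0
```

A real system would learn such weights from data over far more features, but the intuition matches what Filbin describes: individual words can carry a surprising amount of measurable risk information.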

Filbin said that Crisis Text Line’s algorithm can identify 80 percent of text conversations that end up requiring an emergency response.
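
That 80 percent figure reads like recall, the share of conversations that truly required an emergency response which the model also flagged. A minimal illustration with invented counts:

```python
def recall(true_positives: int, false_negatives: int) -> float:
    """Fraction of actual emergency conversations the model caught."""
    return true_positives / (true_positives + false_negatives)


# Hypothetical counts: of 100 conversations that required an emergency
# response, the model flagged 80 and missed 20.
print(recall(80, 20))  # 0.8
```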

That is the kind of insight that therapists, counselors, and doctors hope to one day have. Facebook, by virtue of its massive size and commitment to suicide prevention, is arguably in the best position to lead the effort to put that knowledge into their hands.

Whether or not Facebook takes on that role is a question the company would rather not answer. At some point, though, it won’t have any other choice.

If you want to talk to someone or are experiencing suicidal thoughts, text the Crisis Text Line at 741-741 or call the National Suicide Prevention Lifeline at 1-800-273-8255. Here’s a list of international resources.

Read more: http://mashable.com/
