AI Is Turning Social Media Into the Next Frontier for Suicide Prevention

“We stumbled upon your post…and it looks like you are going through some challenging times,” the message begins. “We are here to share with you materials and resources that might bring you some comfort.” Links to suicide helplines, a 24/7 chat service, and stories of people who overcame mental-health crises follow. “Sending you a virtual hug,” the message concludes.

This note, sent as a private message on Reddit by the artificial-intelligence (AI) company Samurai Labs, represents what some researchers say is a promising tool to fight the suicide epidemic in the U.S., which claims almost 50,000 lives a year. Companies like Samurai are using AI to analyze social media posts for signs of suicidal intent, then intervening through strategies like the direct message.

There is a certain irony to harnessing social media for suicide prevention, since it’s often blamed for the mental-health and suicide crisis in the U.S., particularly among children and teenagers. But some researchers believe there is real promise in going straight to the source to “detect those in distress in real-time and break through millions of pieces of content,” says Samurai co-founder Patrycja Tempska.

Samurai is not the only company using AI to find and reach at-risk people. The company Sentinet says its AI model each day flags more than 400 social media posts that imply suicidal intent. And Meta, the parent company of Facebook and Instagram, uses its technology to flag posts or browsing behaviors that suggest someone is thinking about suicide. If someone shares or searches for suicide-related content, the platform pushes a message with information about how to reach support services like the Suicide and Crisis Lifeline—or, if Meta’s team deems it necessary, emergency responders are called in.

Underpinning these efforts is the idea that algorithms may be able to do something that has traditionally stumped humans: determine who is at risk of self-harm so they can get help before it’s too late. But some experts say this approach—while promising—isn’t ready for primetime.

“We’re very grateful that suicide prevention has come into the consciousness of society in general. That’s really important,” says Dr. Christine Moutier, chief medical officer at the American Foundation for Suicide Prevention (AFSP). “But a lot of tools have been put out there without studying the actual results.”


Predicting who is likely to attempt suicide is difficult even for the most highly trained human experts, says Dr. Jordan Smoller, co-director of Mass General Brigham and Harvard University’s Center for Suicide Research and Prevention. There are risk factors that clinicians know to look for in their patients—certain psychiatric diagnoses, going through a traumatic event, losing a loved one to suicide—but suicide is “very complex and heterogeneous,” Smoller says. “There’s a lot of variability in what leads up to self-harm,” and there’s almost never a single trigger.

The hope is that AI, with its ability to sift through massive amounts of data, could pick up on trends in speech and writing that humans would never notice, Smoller says. And there is science to back up that hope.

More than a decade ago, John Pestian, director of the Computational Medicine Center at Cincinnati Children’s Hospital, demonstrated that machine-learning algorithms can distinguish between real and fake suicide notes with better accuracy than human clinicians—a finding that highlighted AI’s potential to pick up on suicidal intent in text. Since then, studies have also shown that AI can detect suicidal intent in social-media posts across various platforms.
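
To give a sense of what such a system involves, below is a minimal, purely illustrative sketch of a text classifier of the kind these studies describe. It is not Samurai’s, Meta’s, or Pestian’s actual model; the example posts, labels, and threshold are invented placeholders, and a real system would be trained on far larger, clinically validated datasets.

```python
# Illustrative only: a minimal text classifier of the kind described above.
# The example posts, labels, and threshold are invented placeholders, not
# real data or any company's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: short texts paired with labels
# (1 = language a reviewer judged concerning, 0 = not concerning).
texts = [
    "I can't see a way forward anymore",
    "Does anyone know a good pizza place downtown?",
    "Nobody would notice if I was gone",
    "Excited to start my new job on Monday!",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a common baseline for text classification.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new post; anything above a chosen threshold would be routed
# to a human reviewer rather than acted on automatically.
new_post = "I don't think I can keep going"
risk_score = model.predict_proba([new_post])[0][1]
if risk_score > 0.5:  # placeholder threshold
    print(f"Flag for human review (score={risk_score:.2f})")
```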

Companies like Samurai Labs are putting those findings to the test. From January to November 2023, Samurai’s model detected more than 25,000 potentially suicidal posts on Reddit, according to company data shared with TIME. A human supervising the process then decides whether to message the user with instructions about how to get help. About 10% of people who received these messages contacted a suicide helpline, and the company’s representatives worked with first responders to complete four in-person rescues. (Samurai does not have an official partnership with Reddit, but rather uses its technology to independently analyze posts on the platform. Reddit employs other suicide-prevention features, such as one that lets users manually report worrisome posts.)
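
To make that human-in-the-loop step concrete, here is a hypothetical sketch of how a review gate like the one described above could work. Every name in it—the FlaggedPost structure, the review prompt, the message text—is assumed for illustration and does not describe Samurai’s (or any platform’s) real system or APIs.

```python
# Hypothetical illustration of a human-in-the-loop review gate of the kind
# described above. Function names, the message text, and the review step are
# all invented for this sketch; they do not describe any company's real system.
from dataclasses import dataclass

@dataclass
class FlaggedPost:
    post_id: str
    author: str
    text: str
    model_score: float  # risk score produced upstream by a classifier

SUPPORT_MESSAGE = (
    "We came across your post and it sounds like you may be going through "
    "a hard time. If you are in the U.S., you can call or text 988 to reach "
    "the Suicide and Crisis Lifeline, any time, for free."
)

def human_approves(post: FlaggedPost) -> bool:
    """Placeholder for a trained human reviewer confirming the flag."""
    answer = input(f"Send support message for post {post.post_id}? [y/N] ")
    return answer.strip().lower() == "y"

def send_direct_message(author: str, body: str) -> None:
    """Placeholder for a platform-specific DM call (not a real API)."""
    print(f"[DM to {author}] {body}")

def process_flagged(posts: list[FlaggedPost]) -> None:
    # The model only surfaces candidates; a human decides whether to reach out.
    for post in posts:
        if human_approves(post):
            send_direct_message(post.author, SUPPORT_MESSAGE)

if __name__ == "__main__":
    process_flagged([FlaggedPost("abc123", "example_user", "sample text", 0.91)])
```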

Co-founder Michal Wroczynski adds that Samurai’s intervention may have had additional benefits that are harder to track. Some people may have called a helpline later, for example, or simply benefited from feeling like someone cares about them. “This brought tears to my eyes,” wrote one person in a message shared with TIME. “Someone cares enough to worry about me?”

When someone is in an acute mental-health crisis, a distraction—like reading a message popping up on their screen—can be lifesaving because it snaps them out of a harmful thought loop, Moutier says. But, Pestian says, it’s crucial for companies to know what AI can and can’t do in a moment of distress.

Services that connect social media users with human support can be effective, Pestian says. “If you had a friend, they might say, ‘Let me drive you to the hospital,’” he says. “The AI could be the car that drives the person to care.” What’s riskier, in his opinion, is “let[ting] the AI do the care” by training it to duplicate aspects of therapy, as some AI chatbots do. A man in Belgium reportedly died by suicide after talking to a chatbot that encouraged him to do so—one tragic example of technology’s limitations.

It’s also not clear whether algorithms are sophisticated enough to pick out people at risk of suicide with precision, when even the humans who created the models don’t have that ability, Smoller says. “The models are only as good as the data on which they are trained,” he says. “That creates a lot of technical issues.”

As it stands, algorithms may cast too wide a net, which introduces the possibility of people becoming immune to their warning messages, says Jill Harkavy-Friedman, senior vice president of research at AFSP. “If it’s too frequent, you could be turning people off to listening,” she says.

That’s a real possibility, Pestian agrees. But as long as there’s not a huge number of false positives, he says he’s generally more concerned about false negatives. “It’s better to say, ‘I’m sorry, I [flagged you as at-risk when you weren’t],’ than to say to a parent, ‘I’m sorry, your child has died by suicide, and we missed it,’” Pestian says.

In addition to potential inaccuracy, there are also ethical and privacy issues at play. Social-media users may not know that their posts are being analyzed or want them to be, Smoller says. That may be particularly relevant for members of communities known to be at increased risk of suicide, including LGBTQ+ youth, who are disproportionately flagged by these AI surveillance systems, as a team of researchers recently wrote for TIME.

And the possibility that suicide concerns could be escalated to police or other emergency personnel means users “may be detained, searched, hospitalized, and treated against their will,” health-law expert Mason Marks wrote in 2019.

Moutier, from the AFSP, says there’s enough promise in AI for suicide prevention to keep studying it. But in the meantime, she says she’d like to see social media platforms get serious about protecting users’ mental health before it gets to a crisis point. Platforms could do more to prevent people from being exposed to disturbing images, developing poor body image, and comparing themselves to others, she says. They could also promote hopeful stories from people who have recovered from mental-health crises and support resources for people who are (or have a loved one who is) struggling, she adds.

Some of that work is underway. Meta removed or added warnings to more than 12 million self-harm-related posts from July to September of last year and hides harmful search results. TikTok has also taken steps to ban posts that depict or glorify suicide and to block users who search for self-harm-related posts from seeing them. But, as a recent Senate hearing with the CEOs of Meta, TikTok, X, Snap, and Discord revealed, there is still plenty of disturbing content on the internet.

Algorithms that intervene when they detect someone in distress focus “on the most downstream moment of acute risk,” Moutier says. “In suicide prevention, that’s a part of it, but that’s not the whole of it.” In an ideal world, no one would get to that moment at all.

If you or someone you know may be experiencing a mental-health crisis or contemplating suicide, call or text 988. In emergencies, call 911, or seek care from a local hospital or mental health provider.
