The Samaritans, a well-known suicide-prevention group in Britain, recently introduced a free web app that would alert users whenever someone they followed on Twitter posted worrisome phrases like “tired of being alone” or “hate myself.”

A week after the app was introduced on its website, more than 4,000 people had activated it, the Samaritans said, and those users were following nearly 1.9 million Twitter accounts, with no notification to those being monitored. But just about as quickly, the group faced an outcry from people who said the app, called Samaritans Radar, could identify and prey on the emotionally vulnerable — the very people the app was created to protect.

“A tool that ‘lets you know when your friends need support’ also lets you know when your stalking victim is vulnerable #SamaritansRadar,” a Briton named Sarah Brown posted on Twitter.

A week and a half after the app’s introduction, the Samaritans announced it was reconsidering the outreach program and disabled the app.
Social media posts offer a vast array of information — things as diverse as clues about the prevalence of flu, attitudes toward smoking and patterns of prescription drug abuse. Academic researchers, often in partnership with social media platforms, have mined this data in the hopes of gaining more timely insights into population-scale health trends. The National Institutes of Health, for instance, recently committed more than $11 million to support studies into using sites like Twitter and Facebook to better understand, prevent and treat substance abuse.
A handful of research and nonprofit groups are analyzing social media postings with the aim of detecting and predicting patterns in mental health conditions. The experience of the Samaritans highlights the perils involved.

“Social media and discussion websites are producing data sources that are revolutionizing behavioral health research,” said Mark Dredze, an assistant research professor of computer science at Johns Hopkins University who studies social media and health. “You can expect to see tremendous results.”
Translating this population-level data into health predictions and interventions for individuals is fraught. To some leading psychiatrists, the notion of consumer apps like Samaritans Radar that would let untrained people parse the posts of individual friends and strangers for possible mental health disorders amounts to medical quackery.

For one thing, said Dr. Allen J. Frances, a psychiatrist who is a professor emeritus at Duke University School of Medicine, crude predictive health algorithms would be likely to mistake someone’s articulation of distress for clinical depression, unfairly labeling swaths of people as having mental health disorders.

For another thing, he said, if consumers felt free to use unvalidated diagnostic apps on one another, it could potentially pave the way for insurers and employers to use such techniques covertly as well — with an attendant risk of stigmatization and discrimination.

“You would be mislabeling millions of people,” Dr. Frances said. “There would be all sorts of negative consequences.” He added, “And then you can have sophisticated employment consultants who will do the vetting on people’s psychiatric states, derived from some cockamamie algorithm, on your Twitter account.”