Saturday, December 27, 2014

Social Media and Telediagnoses

Risks in Using Social Media to Spot Signs of Mental Distress:

The Samaritans, a well-known suicide-prevention group in Britain, recently introduced a free web app that would alert users whenever someone they followed on Twitter posted worrisome phrases like “tired of being alone” or “hate myself.”

A week after the app was introduced on its website, more than 4,000 people had activated it, the Samaritans said, and those users were following nearly 1.9 million Twitter accounts, with no notification to those being monitored. But just about as quickly, the group faced an outcry from people who said the app, called Samaritans Radar, could identify and prey on the emotionally vulnerable — the very people the app was created to protect.

“A tool that ‘lets you know when your friends need support’ also lets you know when your stalking victim is vulnerable #SamaritansRadar,” a Briton named Sarah Brown posted on Twitter. A week and a half after the app’s introduction, the Samaritans announced it was reconsidering the outreach program and disabled the app.
I'm surprised the Samaritans, who have a stellar track record, didn't see that coming. An app using untested algorithms to try to spot suicidal ideation is beyond scary.
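The Samaritans haven't published Radar's detection logic, but the reported trigger phrases suggest simple keyword matching. A minimal sketch of that kind of naive flagging (the phrase list and function name below are my illustrations, not the app's actual code) shows just how blunt the instrument is:

    # Hypothetical sketch of naive keyword-based "distress" flagging.
    # Only the first two phrases were reported as examples of what
    # Samaritans Radar watched for; the third is an assumed addition.
    WORRISOME_PHRASES = [
        "tired of being alone",
        "hate myself",
        "depressed",  # illustrative assumption
    ]

    def flag_tweet(text: str) -> list[str]:
        """Return any 'worrisome' phrases found in a tweet."""
        lowered = text.lower()
        return [p for p in WORRISOME_PHRASES if p in lowered]

    # The obvious failure mode: no context, no irony, no clinical judgment.
    print(flag_tweet("I hate myself for missing that penalty kick"))
    # -> ['hate myself'] -- flagged, though it's everyday grumbling

A substring hit knows nothing about sarcasm, song lyrics, or a bad day at work, yet under Radar every match became an alert to whoever was watching.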

Yes, some academic research using social media to study opinions or issues of the day is valid:
Social media posts offer a vast array of information — things as diverse as clues about the prevalence of flu, attitudes toward smoking and patterns of prescription drug abuse. Academic researchers, often in partnership with social media platforms, have mined this data in the hopes of gaining more timely insights into population-scale health trends. The National Institutes of Health, for instance, recently committed more than $11 million to support studies into using sites like Twitter and Facebook to better understand, prevent and treat substance abuse.

Facebook and OkCupid, a popular dating site, have also conducted experiments in which the companies manipulated content presented to their own members to study the impact on their behavior.
But the desire to start diagnosing people's psychological state based on what they may or may not post on Twitter or Facebook or Instagram? That's the same kind of snake oil peddled when "telediagnostics" was first coined in the 1970s, and modern psychiatry promised it could diagnose people from what it saw on TV or videotape.

No doubt, watching someone on TV and saying "that person is crazy" makes for fun, snarky entertainment (and today, we "live tweet" the snark). But having a psychologist or psychiatrist do the same thing (or read your FB page or Twitter feed) and declare, "yes, clearly this person has OCD from what I've read"? And have that diagnosis reported to your insurance company or employer, and follow you around for the rest of your life?
A handful of research and nonprofit groups are analyzing social media postings with the aim of detecting and predicting patterns in mental health conditions. The experience of the Samaritans highlights the perils involved.

“Social media and discussion websites are producing data sources that are revolutionizing behavioral health research,” said Mark Dredze, an assistant research professor of computer science at Johns Hopkins University who studies social media and health. “You can expect to see tremendous results.”
Yeah, if by "tremendous" you mean amateur diagnoses, quackery, snake oil, stigmatization, shaming, and, more sinisterly, the social control of unpopular thoughts or feelings.
Translating this population-level data into health predictions and interventions for individuals is fraught. To some leading psychiatrists, the notion of consumer apps like Samaritans Radar that would let untrained people parse the posts of individual friends and strangers for possible mental health disorders amounts to medical quackery.

For one thing, said Dr. Allen J. Frances, a psychiatrist who is a professor emeritus at Duke University School of Medicine, crude predictive health algorithms would be likely to mistake someone’s articulation of distress for clinical depression, unfairly labeling swaths of people as having mental health disorders.

For another thing, he said, if consumers felt free to use unvalidated diagnostic apps on one another, it could potentially pave the way for insurers and employers to use such techniques covertly as well — with an attendant risk of stigmatization and discrimination.

“You would be mislabeling millions of people,” Dr. Frances said. “There would be all sorts of negative consequences.” He added, “And then you can have sophisticated employment consultants who will do the vetting on people’s psychiatric states, derived from some cockamamie algorithm, on your Twitter account.”
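He's not exaggerating about the millions. Run the base-rate arithmetic yourself; the numbers below are my illustrative assumptions, not anything Dr. Frances cited:

    # Illustrative base-rate arithmetic (all numbers are assumptions).
    users = 100_000_000          # accounts being screened
    prevalence = 0.03            # fraction truly in clinical distress
    sensitivity = 0.80           # P(flagged | truly distressed)
    false_positive_rate = 0.10   # P(flagged | not distressed)

    true_hits = users * prevalence * sensitivity                   # 2,400,000
    false_alarms = users * (1 - prevalence) * false_positive_rate  # 9,700,000

    precision = true_hits / (true_hits + false_alarms)
    print(f"Mislabeled: {false_alarms:,.0f}")  # ~9.7 million false alarms
    print(f"Precision: {precision:.0%}")       # ~20%

Even granting the classifier generous accuracy, roughly four out of five people flagged would be mislabeled, and those are the labels that end up in front of insurers and employers.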
Precisely. There is a lot of bad on Twitter and on all social media, believe me. I've unfollowed, blocked, rejected friend requests, and deleted people because I was appalled by what I read.

But I'll take the crazy and bad and simply disgusting stuff I sometimes see on there over "I'd better not say or post or retweet this, because some insurance company hack shrink might think I'm crazy, cancel my coverage, and alert my employer."

That's a Big Brother, social-control slippery slope you don't want to start down.
