
Big Question: Can Big Data Read the Minds of Others?

And should Facebook scan your posts for suicidal thoughts? (It does.)

Neurologist Robert Burton, writing at Aeon, reflects on the fact that mind reading does not really work. Most fashionable theories of mind, such as the mirror neuron theory, have not proven to be of much use:

This is not to say that we have no idea of what goes on in another’s mind. The brain is a superb pattern-recogniser; we routinely correctly anticipate that others will feel grief at a funeral, joy at a child’s first birthday party, and anger when cut off on the freeway. We are right often enough to trust our belief that others generally will feel as we do. More.

True, but the problem isn’t with recognizing what most people probably think; it’s with recognizing unusual but important patterns. How good, for example, are our theories at predicting whether a person will become violent?

In 1984, The American Journal of Psychiatry reported that psychiatrists and psychologists were vastly overrated as predictors of violence. Even in the best of circumstances – with lengthy multidisciplinary evaluations of persons who had already manifested their violent proclivities on several occasions – psychiatrists and psychologists seemed to be wrong at least twice as often as they were right when they predicted violence. Nevertheless, the article suggested that new methodologies might improve prediction rates.

No such luck. Thirty years later, a review article in The British Medical Journal concluded that: ‘Even after 30 years of development, the view that violence, sexual or criminal risk can be predicted in most cases is not evidence-based.’ Despite being the co-developer of a widely used evaluation tool for violence risk-assessment, the psychologist Stephen D Hart at Simon Fraser University in Canada is equally pessimistic. ‘There is no instrument that is specifically useful or validated for identifying potential school shooters or mass murderers. There are many things in life where we have an inadequate evidence base, and this is one of them.’

However, an interesting thing happened with suicide risk, where professionals’ predictions were equally poor. Researchers decided to use Big Data instead, says Burton:

Scientists at Vanderbilt University Medical Center in Tennessee gathered data on more than 5,000 patients with physical signs of self-harm or suicidal ideation. By gathering up readily available impersonal healthcare data such as age, gender, zip codes, medications and prior diagnoses, but without directly interviewing the patients, there was 80-90 per cent accuracy when predicting whether someone would attempt suicide within the next two years, and 92 per cent accuracy in predicting whether someone would attempt suicide within the next week. When assessing the likelihood of suicide of 12,695 randomly selected hospitalised patients with no documented history of suicide attempts, the group was able to achieve even higher rates of prediction. With such results, we shouldn’t be surprised that Facebook has introduced its own proprietary AI system to detect those at increased risk of suicide.

The reason researchers had better luck with Big Data is probably that people can say whatever they want in an interview, whereas mental health issues and social and economic factors are critical influences on whether a person might be tempted by self-harm. Thus, one need not develop a mind-reading machine to assess the risk; knowing that a person has a number of such risk factors may be all that’s needed.
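For readers curious about the mechanics, here is a minimal sketch of that kind of structured-risk-factor prediction. It is not the Vanderbilt team’s actual model; the feature names and data below are synthetic stand-ins for the “readily available impersonal healthcare data” the article describes, and the numbers are invented purely so the example runs.

# A minimal, hypothetical sketch: train a standard classifier on impersonal
# structured records rather than interview responses. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Stand-in features: age, gender flag, prior-diagnosis count, medication count.
X = np.column_stack([
    rng.integers(18, 90, n),   # age
    rng.integers(0, 2, n),     # gender flag
    rng.poisson(1.5, n),       # prior diagnoses
    rng.poisson(2.0, n),       # current medications
])

# Synthetic outcome loosely tied to the risk-factor counts, for illustration only.
risk = 0.4 * X[:, 2] + 0.3 * X[:, 3] - 2.5
y = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model ranks patients by risk from dry administrative facts alone --
# no "mind reading" is involved anywhere.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

The point, as Burton’s example suggests, is that the model never sees anyone’s thoughts; it only ever sees ages, diagnoses, and medication counts, and ranks accordingly.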

And now Facebook? From Jordan Novet at CNBC,

About a year ago, Facebook added technology that automatically flags posts with expressions of suicidal thoughts for the company’s human reviewers to analyze. And in November, Facebook showed proof that the new system had made an impact.

“Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts,” the company said in a blog post at the time.

Facebook now says the enhanced program is flagging 20 times more cases of suicidal thoughts for content reviewers, and twice as many people are receiving Facebook’s suicide prevention support materials. The company has been deploying the updated system in more languages and improving suicide prevention in Instagram, though tools there are at an earlier stage of development. More.

Well, it’s taken for granted that if a person is actually talking about suicide, the risk should be taken seriously. But consider the precedent set by Facebook reading users’ mail. From Srini Pillay at Forbes,

In response to this global epidemic, Facebook recently announced that it is coming out with artificial intelligence to detect suicidal posts. The AI looks for words that have been associated with suicide risk, and comments such as “Are you OK?” or “Do you need help?” It then sends resources and friends to the user if it finds it necessary. This might seem like a promising advancement in suicide prevention, but without asking the right questions and vetting all of the stakeholders, AI could do more harm than good. … For example, some mental health experts voice that hearing from family and loved ones that they care about you can help with suicide prevention. But this can backfire if friends and family are the cause of the distress. More.
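What such keyword-based screening might look like, in the crudest terms, is easy to sketch. This is emphatically not Facebook’s proprietary system; the phrase lists and threshold logic below are invented purely to illustrate Pillay’s description of an AI that watches both a user’s posts and the concerned comments beneath them.

# A hypothetical sketch of keyword-based flagging, per Pillay's description.
# Phrase lists are illustrative only, not any real system's vocabulary.
RISK_PHRASES = {"want to die", "end it all", "can't go on"}
CONCERNED_REPLIES = {"are you ok", "do you need help"}

def flag_for_review(post, comments):
    """Return True if the post should be routed to a human reviewer."""
    text = post.lower()
    post_hit = any(phrase in text for phrase in RISK_PHRASES)
    comment_hit = any(
        reply in c.lower() for c in comments for reply in CONCERNED_REPLIES
    )
    return post_hit or comment_hit

# A worried reply alone can trigger review, matching the description above.
print(flag_for_review("Feeling really low tonight.", ["Are you OK?"]))  # True

Note that in this toy version a single friend’s “Are you OK?” is enough to flag a post, which is exactly the kind of blunt trigger that makes the stakeholder questions Pillay raises worth asking.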

True, and there’s also the question of whether a social media company should have more reach in these matters than any government. One thing we can be sure of: to the extent that Facebook’s suicide prevention program succeeds, we can expect the technology, seen as laudable, to be aimed at other issues, however those issues come to be defined, and by whomever gets to define them.

Twitter’s CEO, Jack Dorsey, has just announced that

A.I. may soon be used to help determine which users are “credible voices” that should be regarded highly.

So we have gone from preventing suicides to making claims about credibility in a few short months?

How surprised will anyone be if it turns out that Twitterbot’s assessment is similar to Jack Dorsey’s?

One thing most of these social media firms need very badly is competition. Then we can crowdsource the question of who is credible.

See also: AI can mean ultimate Big Surveillance: That’s what we should really worry about. And the personalities behind these surveillance efforts are not advanced artificial entities but the usual suspects, armed with the usual good intentions.
