François Chollet, a deep learning researcher at Google (which owns DeepMind, whose AI program AlphaGo beat a world-ranked Go champion a couple of years ago), was featured here recently because he is skeptical of general superhuman artificial intelligence.
But that’s not the only problem he has been thinking about. He has also given some thought to “the highly effective, highly scalable manipulation of human behavior that AI enables, and its malicious use by corporations and governments.” For example, social media companies, which have been recording everything you do, can show you mainly content that promotes ideas the medium’s owner wants you to have. If you express approved views, you will get likes that could be from bots. If you deviate, you could be shown mainly negative responses “(maybe acquaintances, maybe strangers, maybe bots)” on the theory that you will shut up or change your mind. Inside the social bubble, you may come to believe that the medium owner’s preferred views are far more prevalent than they really are. Then there is argument personalization:
… the algorithm may observe that exposure to certain pieces of content, among people with a psychological profile close to yours, has resulted in the sort of view shift it seeks. It may then serve you with content that is expected to be maximally effective for someone with your particular views and life experience. In the long run, the algorithm may even be able to generate such maximally-effective content from scratch, specifically for you. François Chollet, “What worries me about AI” at Medium
Readers familiar with The Two Towers will recall that that’s precisely what Wormtongue did to King Théoden: The king heard nothing but what was conveyed to him from his would-be overlord by Wormtongue, until someone separated him, rather forcibly as it happens, from his “social medium of choice.”
But now, here’s the part that really concerns Chollet:
Remarkably, mass population manipulation — in particular political control — arising from placing AI algorithms in charge of our information diet does not necessarily require very advanced AI. You don’t need self-aware, superintelligent AI for this to be a dire threat — current technology may well suffice. Social network companies have been working on it for a few years, with significant results. François Chollet, “What worries me about AI” at Medium
He doesn’t want to give up on AI but urges the building of an “anti-Facebook” that gives the user more control.
That sounds like a promising idea. But, as so often, the new technology just puts a different face on an older problem. Most people who have shopped at grocery superstores already have some experience of total immersion in food consumerism. Every detail of the environment is manipulated to engineer that light tap on the screen, preferably one featuring three digits before the decimal place.
The details include theatrical lighting, a counterclockwise floor plan (because you are more likely to grab things with your right hand), and disorienting layouts that cause you to spend more time, and thus probably more money, there. You see? They thought of everything. It’s their business.
Yet we are not in a panic about supermarkets. We recognize, from long experience, that we are only as manipulated as we choose to be. No one forces us to buy into long-term health problems or even to shop at supermarkets at all. We could buy most of our food at local farmers’ markets.
While Chollet is doubtless right to be concerned, we mustn’t lose sight of the fact that we are just as free to limit and choose our contact with social media as we are to limit and choose our contact with the supermarket. As Dr. Michael Egnor says, free will is not a dangerous myth. It’s just what we need for the digital age.
See also: Software pioneer says general superhuman artificial intelligence is very unlikely
Is free will a dangerous myth? The denial of free will is a much more dangerous myth.
The “superintelligent AI myth” The problem that even the skeptical Deep Learning researcher left out (Brendan Dixon)