Retro robots (photo by Craig Sybert on Unsplash)

Does Democracy Demand a War on Twitterbots?

A key concern is that citizens could be induced to vote for a demagogue by Twitterbots spreading fake news.

A Twitterbot is a software program that sends out automated posts on Twitter. Until quite recently, Twitter freely provided instructions on how to create these bots, which could be used to reply to messages or to promote a brand or a politician. Reasonable estimates suggest that about 15% of Twitter accounts (some 48 million) are not people.
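The core of such a bot is simple: authenticate, read some input, post a reply. Here is a minimal sketch using the third-party tweepy library; the credentials, the `compose_reply` helper, and the reply text are all placeholder assumptions for illustration, not the code of any bot mentioned in this article.

```python
def compose_reply(author_handle, limit=280):
    """Build an automated reply, truncated to Twitter's character limit."""
    reply = f"@{author_handle} Thanks for the mention!"
    return reply[:limit]

def run_bot(api_key, api_secret, access_token, access_secret):
    """Reply automatically to the account's most recent mentions."""
    # Third-party client; imported here so compose_reply stays usable without it.
    import tweepy

    auth = tweepy.OAuth1UserHandler(api_key, api_secret, access_token, access_secret)
    api = tweepy.API(auth)
    for mention in api.mentions_timeline(count=5):
        api.update_status(
            status=compose_reply(mention.user.screen_name),
            in_reply_to_status_id=mention.id,
        )
```

Everything beyond this loop (scheduling, rate limits, content logic) is what distinguishes a parcel tracker from a propaganda account: the posting mechanics are identical.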

The bots have become very unpopular, emblemizing everything people hate about Twitter. So, the Washington Post tells us, the company is now cracking down: “Twitter suspended more than 70 million accounts in May and June, and the pace has continued in July”:

The rate of account suspensions, which Twitter confirmed to The Post, has more than doubled since October, when the company revealed under congressional pressure how Russia used fake accounts to interfere in the U.S. presidential election…

“One of the biggest shifts is in how we think about balancing free expression versus the potential for free expression to chill someone else’s speech,” [Twitter’s vice president for trust and safety Del] Harvey said. “Free expression doesn’t really mean much if people don’t feel safe.” Craig Timberg and Elizabeth Dwoskin, “Twitter is sweeping out fake accounts like never before, putting user growth at risk” at Washington Post

But there is a whole other side to the bots that is at least worth knowing about, a side that one hopes will live on. Here are some bots:

@NiemanLabFuego finds and retweets the stories that most journalists are discussing.

@PDcutup creates collages out of two public domain images from two different institutions, based on similarity in their titles.

@trackthis tracks parcels for followers and sends them direct messages each time a package’s location changes.

@EarthquakesSF tweets about earthquakes in the San Francisco area in real time, using seismographic information from the USGS.
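An earthquake bot like the one above boils down to polling a public data feed and filtering by region. A rough sketch, using the real USGS GeoJSON summary feed; the bounding-box coordinates are an assumed approximation of the San Francisco area, not the actual bot's configuration.

```python
import json
import urllib.request

# Rough Bay Area bounding box (assumed for illustration).
SF_BBOX = {"min_lat": 36.5, "max_lat": 39.0, "min_lon": -123.5, "max_lon": -120.5}

USGS_FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"

def fetch_feed(url=USGS_FEED):
    """Download the USGS past-hour earthquake feed as parsed GeoJSON."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def quakes_in_region(feed, bbox):
    """Return titles of events inside the bounding box.

    USGS GeoJSON stores coordinates as [longitude, latitude, depth].
    """
    hits = []
    for feature in feed.get("features", []):
        lon, lat = feature["geometry"]["coordinates"][:2]
        if (bbox["min_lat"] <= lat <= bbox["max_lat"]
                and bbox["min_lon"] <= lon <= bbox["max_lon"]):
            hits.append(feature["properties"]["title"])
    return hits
```

A bot would run this on a timer and tweet each new title, the same posting step as any other bot.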

Bishop peaches, by Deborah Griscom Passmore, 1910 (@pomological)

One bot sends out graphics of old pictures of fruit, and another sends out old pictures from the New York Public Library, probably all in the public domain.

Many of the people who learned to create Twitter bots seem fairly mainstream: “I made a Twitter bot!”, one announces proudly, recounting how he used a coding course he was taking to build a bot that tweets old pictures contributed by the Sunshine State Digital Network (SSDN) to the Digital Public Library of America.

Developer Scott Spence explains “In the case of my @ScottDevTweets bot, it’s usually an opener for a conversation with another person who follows me. So the bot can initiate the conversation, then I can carry on from where the bot left off.”  Rabbi turned coder Ben Greenberg created a bot “that responds to the @realdonaldtrump Twitter account by posting a mass shooting statistic from this year with data from the Gun Violence Archive.”

There is even a definitive guide, “How to Make a Twitter Bot: The Definitive Guide,” as well as an account that enables users to report bots for research purposes.

Some worry that the crackdown will make Twitter a duller place:

I’ve been following creative bot accounts for years. They make my Twitter feed weirder and funnier, a place of ontological ambiguity where tweets from journalists and politicians are interspersed with moments of random, computational beauty.

Many of these delightful and creative accounts will disappear in the coming months due to a company-wide attempt to eradicate malicious bots from the platform. Though this is a well-intentioned effort to curb computational propaganda, it will likely sweep up art bots in its wake. Oscar Schwartz, “Your favorite Twitter bots are about to die, thanks to upcoming rule changes” at Quartz

As the Washington Post reporters noted above, a key concern is that citizens could be induced to vote for a demagogue by Twitter bots spreading fake news:

One strategy is to heavily promote a low-credibility article immediately after it’s published, which creates the illusion of popular support and encourages human users to trust and share the post. The researchers found that in the first few seconds after a viral story appeared on Twitter, at least half the accounts sharing that article were likely bots; once a story had been around for at least 10 seconds, most accounts spreading it were maintained by real people.

“What these bots are doing is enabling low-credibility stories to gain enough momentum that they can later go viral. They’re giving that first big push,” says V.S. Subrahmanian, a computer scientist at Dartmouth College not involved in the work.

The bots’ second strategy involves targeting people with many followers, either by mentioning those people specifically or replying to their tweets with posts that include links to low-credibility content. If a single popular account retweets a bot’s story, “it becomes kind of mainstream, and it can get a lot of visibility,” Menczer says. Maria Temming, “How Twitter bots get people to spread fake news” at Science News
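The “first big push” pattern described above can be made concrete with a toy calculation (this is an illustration of the finding, not the researchers' actual code or data): given a list of shares with timestamps, compare the bot fraction in the first seconds after publication with the fraction afterward.

```python
def bot_fraction(shares, cutoff_seconds=10):
    """Return (early, late) bot fractions for a story's share timeline.

    `shares` is a list of (seconds_since_publication, is_bot) tuples;
    the 10-second cutoff mirrors the threshold quoted in the study.
    """
    early = [is_bot for t, is_bot in shares if t < cutoff_seconds]
    late = [is_bot for t, is_bot in shares if t >= cutoff_seconds]

    def frac(xs):
        return sum(xs) / len(xs) if xs else 0.0

    return frac(early), frac(late)
```

On the pattern the researchers describe, the early fraction would be at least one half while the late fraction falls off, because real people take over once the story has gained apparent momentum.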

A recent study in Nature Communications raised the alarm:

Abstract: The massive spread of digital misinformation has been identified as a major threat to democracies. Communication, cognitive, social, and computer scientists are studying the complex causes for the viral diffusion of misinformation, while online platforms are beginning to deploy countermeasures. Little systematic, data-based evidence has been published to guide these efforts. Here we analyze 14 million messages spreading 400 thousand articles on Twitter during ten months in 2016 and 2017. We find evidence that social bots played a disproportionate role in spreading articles from low-credibility sources. Bots amplify such content in the early spreading moments, before an article goes viral. They also target users with many followers through replies and mentions. Humans are vulnerable to this manipulation, resharing content posted by bots. Successful low-credibility sources are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation. (open access)

Chengcheng Shao, Giovanni Luca Ciampaglia, Onur Varol, Kai-Cheng Yang, Alessandro Flammini & Filippo Menczer, “The spread of low-credibility content by social bots,” Nature Communications 9, Article number: 4787 (2018)

In July, Twitter announced a “comprehensive vetting process” that requires a prospective bot artist to apply for a formal developer account in order to gain access to the back-end interfaces (APIs) through which bots send and receive data. One outcome of this new professionalization approach may be that the Cute Cat of the Day is history while the Troll Genius of St. Petersburg still unleashes bots at will, stumping his opposite numbers at Twitter. See, for example, How the KGB Found CIA Agents.

Underlying much of the angst about the political impact of bots is a basic premise: Most of us need help thinking for ourselves and protection from the many bad influences that we are not able to recognize, the way our betters can. If that sort of paternalism bothers a social media user, it might be best to limit the time spent on Twitter. It’s bound to be reflected in other decisions as well.

See also: AI Social Media Could Totally Manipulate You. Deep learning specialist: And the scary thing is, the AI needed is not especially advanced.

Twitter doesn’t just seem out of control. It actually is.

