Okay, we have no BotLords--yet--but are the bots killing democracy softly with their code? That is the question. Still, I had to get you in here somehow, and BotLords are sexier.
Carnival barking is necessary for media in any age to draw in readers and viewers, so invoking dystopia or the apocalypse in a story about the emerging network of bots greasing the wheels of social media is the common-sense choice. Still, as Anne Applebaum demonstrates this week in her Washington Post piece, "Maybe the A.I. Dystopia Is Already Here," that frame foregrounds the wrong question and buries the lede, obscuring the real story contained in the two magnificent links included in her article.
“Is THIS really the coming Robot Apocalypse?” is the wrong question to ask when discussing the dangers that the practice of “astroturfing” poses to democracy, and the research presented here by Nimmo and by Bradshaw and Howard is more exciting than the frame she places on it.
In fact, I can see how, reading this story, once you find out the answer to that question is “No,” or at least “Not now,” it’s easy to miss the real story here. It’s easy to check out (call me when the real Botocalypse begins) and miss better questions, such as:
“Is it already eroding the authenticity of our democracy?” and “WTF can we be doing about it now?” are just two examples of questions that would have been more in the ballpark.
Manipulation of Social Media and Public Opinion By Bots
Bots play an increasingly integral role in online communication, be it for marketing, political, or foreign influence purposes, and they do pose a danger of taking on a life of their own.
… a bot really is just a piece of computer code that can do things that humans can do. Wikipedia uses bots to correct spelling and grammar on its articles; bots can also play computer games or place gambling bets on behalf of human controllers. Notoriously, bots are now a major force on social media, where they can “like” people and causes, post comments, react to others. Bots can be programmed to tweet out insults in response to particular words, to share Facebook pages, to repeat slogans, to sow distrust.
Anne Applebaum, “Maybe the A.I. Dystopia Is Already Here,” The Washington Post
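Applebaum's description above can be made concrete with a tiny sketch. The trigger words, slogans, and function below are invented for illustration; a real bot would sit behind a social platform's API, but the core logic really is this simple:

```python
# A minimal sketch of the kind of bot Applebaum describes: code that
# watches a stream of posts and fires a canned reply whenever a
# trigger word appears. All names and data here are hypothetical.

TRIGGERS = {
    "reform": "I am against the use of the mechanism",   # canned slogan
    "protest": "Paid provocateurs! #StopAstroTurfing",
}

def bot_replies(posts):
    """Return the canned reply for every post containing a trigger word."""
    replies = []
    for post in posts:
        lowered = post.lower()
        for word, slogan in TRIGGERS.items():
            if word in lowered:
                replies.append(slogan)
                break  # one reply per post
    return replies

stream = [
    "Thousands march against the court reform",
    "Nice weather in Warsaw today",
    "Another protest planned for Saturday",
]
print(bot_replies(stream))
# Two of the three posts contain a trigger word, so two canned replies fire.
```

A few dozen lines like this, duplicated across hundreds of accounts, is all the "artificial intelligence" an astroturfing campaign needs.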
In one of the articles Applebaum links to, Ben Nimmo, Senior Fellow for Information Defense at the Atlantic Council’s Digital Forensic Research Lab, provides a comprehensive analysis of a recent astroturfing campaign in Poland.
On July 22, a crop of hashtags suddenly began trending on Twitter in Poland, all attacking the thousands of demonstrators who have been protesting against a proposed reform to the Supreme Court.
Ben Nimmo, Atlantic Council’s Digital Forensic Research Lab
I just had round two of three dental surgeries yesterday, with more to follow (so do cry for me, Argentina), and I’m on painkillers, so I glazed over at some of it: the charts of tweet timelines for the attack, the graphs of Twitter user statistics, the gripping analysis of timestamps showing five users posting the same words in the same second (only one of them a retweet), proceeding to a cascading, proliferating multitude of automated accounts flooding the Polish Twitterverse:
Perhaps most strikingly of all, out of the more than 15,000 tweets which used the two hashtags, over 11,000 used the exact phrase “Sprzeciwiam się wykorzystaniu mechanizmu” (“I am against the use of the mechanism”).
“Astroturfing” is a term used to describe fake grassroots movements that may, in fact, be directed by a single coordinating actor.
Ben Nimmo, Atlantic Council’s Digital Forensic Research Lab
Suffice it to say, I am against the use of the mechanism, too, so you can see how infectious this is.
Nimmo’s analysis breaks down the entire chain. It would be a fascinating read were I not feeling like Rocky on the ropes from the pain and the medication.
The hashtags #AstroTurfing, #StopAstroTurfing, and #StopNGOSoros appear to have been driven by a very simple recipe. A number of activist accounts tweeted them repeatedly, most often using exactly the same turn of phrase. Those accounts then retweeted one another, and a shell of apparently automated accounts amplified all of them, creating explosive growth.
Ben Nimmo, Atlantic Council’s Digital Forensic Research Lab
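The tells in that recipe are exactly what Nimmo's charts surface: one exact phrase repeated thousands of times, and distinct accounts posting identical text in the same second. A toy sketch of that kind of detection, using invented data rather than the real Polish tweet set:

```python
from collections import Counter, defaultdict

# A toy sketch of the tells Nimmo's analysis relies on: a flood of tweets
# sharing one exact phrase, and several distinct users posting identical
# text in the same second. The accounts and tweets below are invented.

def astroturf_signals(tweets):
    """tweets: list of (user, unix_second, text) tuples.
    Returns the most-duplicated text (with its count) and every
    (second, text) pair posted by three or more distinct users."""
    phrase_counts = Counter(text for _, _, text in tweets)
    by_moment = defaultdict(set)
    for user, second, text in tweets:
        by_moment[(second, text)].add(user)
    synchronized = [key for key, users in by_moment.items() if len(users) >= 3]
    return phrase_counts.most_common(1)[0], synchronized

tweets = [
    ("bot_a", 100, "I am against the use of the mechanism"),
    ("bot_b", 100, "I am against the use of the mechanism"),
    ("bot_c", 100, "I am against the use of the mechanism"),
    ("human1", 130, "What is going on with these hashtags?"),
    ("bot_d", 145, "I am against the use of the mechanism"),
]

top_phrase, bursts = astroturf_signals(tweets)
print(top_phrase)   # ('I am against the use of the mechanism', 4)
print(bursts)       # [(100, 'I am against the use of the mechanism')]
```

No single tweet here is suspicious on its own; it is the duplication and the synchrony, visible only in aggregate, that give the game away.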
The fact that this particular assault was not successful is cold comfort. Like discovering you live atop a rattlesnake breeding ground and whistling cheerful tunes just because you haven’t been bitten yet:
However, the hashtags failed to create a self-sustaining trend. By 22:00, traffic had dropped back to almost zero. The hashtags were not picked up by a wider pool of users; they did not survive once the initial blast of amplification was over.
Ben Nimmo, Atlantic Council’s Digital Forensic Research Lab
So, this particular rogue bot-wave sputtered out before reaching the shorelines of potentially gullible humans, who may have carried that momentum forward across the political landscape.
(Rest assured Death will come for me someday, so don’t be too mad about that sentence.)
Aren’t We Just Killing Ourselves Softly With Our Memes?
The Death of Irony
The irony of Nimmo’s ultimate finding is summed up by Applebaum:
An investigation by the Atlantic Council’s Digital Forensic Research Lab pointed out the irony: An artificial Twitter campaign (in Poland) had been programmed to smear a genuine social movement by calling it . . . artificial.
Anne Applebaum, “Maybe the A.I. Dystopia Is Already Here,” The Washington Post
Would You Please Kill Us All Softly With Your Song?
The meatiest link, though, seems to be the one to Samantha Bradshaw and Philip N. Howard, researchers at the University of Oxford, and their global inventory, “Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation.” I have NOT yet read it, but it appears packed with information, including a breakdown of the types of bots used by different nations around the world.
EDIT: My apologies; the link seems to be blocked here. If that’s what you are experiencing, find the link inside Applebaum’s Washington Post article, which I link to at the top of this piece.
Some governments employ only positive bots, if you will, pushing positive messages; others get nasty. (My wording, not the two Oxford researchers’.)
Now, I’m not a Luddite, but no one has ever accused me of being exactly “fluent in the cyber,” either, so anything I read in this report will merely feed the fever swamp of my own mind. I share it here in hopes that one or more of you may make use of it, and perhaps produce a diary if it turns out to be of interest.
It seems to have 11 pages of references for 24 pages of text, so yummy reading.
Peanut Butter In The Chocolate, Chocolate In The Peanut Butter
Burying the lede on technical (and other science) stories beneath myth is nothing new. Doing so persistently runs the danger of hiding the information, or its relevance, under a mountain of mythical ideas and the emotional memory associated with the apocalyptic fears that make apocalyptic tales so neurologically appealing in the first place.
Injecting the wrong myth into a story a priori attaches the emotional associations we hold about that myth, inadvertently distorting the very stories Applebaum gets so close to sharing with her readers.
This meme of dystopian expectations about technology is ONE option, and it’s partly relevant here, but it’s currently applied to nearly all things tech, so it gets into everything. It “lives” in all of us (statistically speaking; individual responses may vary), and it serves a useful purpose, of course, but we use it too much, and it obscures more than it clarifies.
The AI-takeover narrative of science fiction and horror, which forms a repository for both our fears and our catharsis of those fears, happens to reflect a completely valid source of concern.
The “Titanic Tech Superstars Debate AI” tour traipsing across our digital commons this very week, the story of Elon Musk and Mark Zuckerberg and their current debate on the subject, is an example of where you could attach this meme appropriately right now (Link to The Atlantic), if you just need to get your apocalyptic jones on, journalists.
The meme may do harm there too, of course, which is a problem with applying narratives anywhere, but at least it is emotionally aligned with the stakes of the real scientific debate going on there, and with the stakes of that debate’s eventual outcome.
I acknowledge that in this case, some of the immediate implications sound dystopian:
But no one is really able to explain the way [bots] all interact, or what the impact of both real and artificial online campaigns might be on the way people think or form opinions.
Anne Applebaum, “Maybe the A.I. Dystopia Is Already Here,” The Washington Post
But I think that by its very nature, “dystopian” comes to mean “the future,” even if we may be living inside what feels dystopian now, and I think it also denotes a certain helplessness in the face of something so large, so “apocalyptic.” As myths go, it feels like the Singularity, something beyond which none of our current calculations make sense, and facing something so incalculably impenetrable robs us of our agency.
Or it puts some in the mood to dismiss the story as silliness. Either way, it misdirects, and it fails to focus on the story at hand.
In the end, I wish “we” could find myriad better myths, better stories, to filter current-day technology and science stories through: in this case, one that doesn’t afflict the reader with despair and “learned helplessness” from the start, and instead gives us a sense that this is a problem we can solve, rather than automatically one that is going to lead to the end of the world.
The right myth for the right science or tech story. We need a MythScienceMatch.com.