Late last week, I finally had a long-awaited phone conversation with two senior policy-team members at Twitter to discuss my suspension from their platform, then well into its second week, for using the image from the cover of my book on the radical Right, Alt-America, featuring stars from the American flag converted into KKK hoods.
The whole point of the suspension was my refusal, as a matter of principle, to change the profile picture: I have seen too many of my fellow journalists suspended from the platform simply for doing their jobs. I chose not to budge as a way of calling attention to the stupidity of Twitter’s algorithms, and their inability to distinguish genuine hate speech and hate-group recruitment from the journalism and activism that exposes and reports on them.
By the end of my conversation, I was partially satisfied that at least Twitter’s policymakers are indeed aware of the problem and are working to overcome it—which included their plea to me to participate in their efforts to do so. At the end of the day, I’m focused on actually making a difference in the real world as opposed to symbolic victories, so I compromised a bit, since it was also clear they were going to firmly support the decision to suspend me.
As they explained it to me, the problem with the image in my profile was that it was ambiguous enough in its intent that they couldn’t be assured that people using the site wouldn’t flee in fear at the sight of my little Klan-hooded stars. I reiterated to them my view that this concern was mostly ludicrous, though I really couldn’t disagree that a right-wing extremist might be able to hijack it and use it for his own purposes. The problem for them wasn’t the image itself but rather its isolation: If it appeared in a way that provided better context, then they were fine with it.
I knew all along that a simple adjustment to the image would solve the short-term problem of my suspension; my abiding concern was and is that working journalists should not have to contend with limitations on their work because of short-sighted algorithms that could punish them accidentally for simply doing their jobs. And as most of you know, I am fully supportive of the efforts of social-media platforms to remove voices of hate—indeed, I believe it’s an urgent and vital matter. In the end, it felt to me that I was more likely to achieve that objective by working with Twitter rather than against them.
Part of that feeling came from the general tenor of the conversation, which was cooperative and genuinely interested in my input. These Twitter officials were able to persuade me, at least, that they very much share my concerns, particularly when it comes to the ability of white nationalists to game the system to their advantage, as well as with the importance of supporting the work of legitimate journalists in this field. I think Twitter is perfectly aware that a lot of their traffic is driven by the work of journalists who use the platform not just to promote their own work and that of others in their field, but also to communicate with one another. They really don’t want to screw that pooch.
They mostly assuaged my top concern—namely, that my actual on-scene reportage from far-right events (I’m both a reporter and a photographer) often features extremely hateful and fear-inducing images, precisely because I am doing my best to accurately report on what happens at them. My investigations of hate groups and reportage on their operations often feature both hateful images and words because that is what these groups use, and the public has to be informed accurately about these activities and ideologies.
My concerns about the dominance of algorithms in their process were somewhat assuaged: According to both these officials, they simply use the algorithms to flag content, but the decision to issue a suspension is always made by a human being who ostensibly has been trained in how to handle these matters.
I was assured that profile photos actually hold a different, more elevated status at Twitter because they are so often people’s gateways to content on mobile devices, and so the standards leading to suspension with them are tougher and stricter. I was told that it was extremely unlikely that I would be suspended for tweets containing such content, because the context is usually more self-evident there.
Not that I was fully satisfied—not really even close to it. Their protestations still did not explain why a reporter like Michael Edison Hayden would face even a momentary suspension, let alone one that lasted two days, for his reportage for the SPLC.
Most of all, they couldn’t really explain why and how an account featuring a well-known (but ambiguous) hate symbol—namely, Pepe the Frog—in its profile and its avatar could report me and have me suspended, while that account remained untouched. I was told that this account’s posts weren’t notably hateful (truthfully, they were more in the vein of trolling than of overt white nationalist hate-mongering), so given that context, they chose not to act. That, however, fails to explain why—given that a quick perusal of my timeline would reveal that not only does my work not promote hate speech, it actively opposes it—I wasn’t afforded the same benefit of context.
This leads us to what I see as really the abiding problem for Twitter, and for all the social-media platforms, particularly YouTube and Facebook: wildly inconsistent and frequently wrongheaded enforcement of their rules. Twitter’s reassurances notwithstanding, it is painfully clear that many of the people in charge of making these decisions are either horribly trained or are ideologically ill-suited for the task. Let’s be clear: Hiring libertarians, particularly those inclined to equate hate speech with speech opposing it, for these tasks is a recipe for disaster. So is hiring people who have no idea that fascists are not liberals, or that neo-Nazis are no different from militia “Patriots.” Proper, thorough, and effective training is absolutely essential for these platforms to achieve their goals of ensuring community safety and maintaining a welcoming platform reflective of an open democratic society and its accompanying marketplace of ideas.
So I have accepted their invitation to be part of these future conversations. In a gesture of compromise, I adapted the image in my profile picture (one I had been hankering to make, to be perfectly honest, ever since the release of Alt-America in a paperback with a somewhat different cover) so that, while the Klan-hooded stars are still perfectly visible, they’re accompanied by the context of the book’s title—and I added a portion of the paperback cover as well. Context, context, context. I’m good with that.
But yes, I’m back on Twitter now. (Ironically, it’s happening just before I take a long-planned vacation overseas to celebrate my 30th anniversary with my amazing wife, so I will not be very active either there or here for a while.)
However, I remain deeply concerned that the good intentions of the Twitter policy team may not be able to rise to this occasion, which is fraught with all kinds of dangers, particularly that of the unchecked spread of hateful ideologies.
Here’s what I can promise: My future reportage will remain unaffected by Twitter’s rules, and I will keep doing what I do well. If that results once again in a suspension—especially one that runs counter to the assurances I’ve been given—then we will have this fight all over again. For now, I’m hopeful that we can proceed in good faith.