So why are tech platforms like Twitter, YouTube and Facebook so vigilant about silencing Islamic terrorists, while they let right-wing white nationalist terrorists slide on through?
Twitter Suspends Only 7 White Terrorists for Every 1,100 ISIS Accounts
According to a George Washington University study comparing the Twitter activity of white nationalists, Nazis, and ISIS, white terrorist groups find a relative safe haven on the social media platform.
“White nationalist accounts suffered relatively little suspension pressure,” the study reads. “Three white nationalist accounts and four Nazi accounts were observed to be suspended during the course of data collection, and a handful of additional accounts were seen to be suspended in the days that followed. Around 1,100 ISIS accounts were suspended during and immediately after collection.”
And that Twitter activity is more than just obnoxious trolling. It’s attracting an expanding base of new extremists. “Major American white nationalist movements on Twitter added about 22,000 followers since 2012, an increase of about 600%,” the study says. “Followers of the Twitter handle @anp14, the American Nazi Party, increased more than 400% from 2012 to 2016.”
White Terrorism Follows in the Social Media Footsteps of ISIS
J.M. Berger, a fellow with George Washington University’s Program on Extremism and the author of the study, notes that ISIS’s use of social media paved the way for white nationalist groups.
“ISIS was the first [extremist] group to enjoy notable success promoting its cause and message on social media. Other extremist groups and movements are poised to follow in its footsteps… White nationalists are part of the second generation of extremist social media activism, and they have already enjoyed substantial gains as a result.”
But Twitter isn’t the only platform harboring and empowering white nationalism. White supremacists on 4chan deliberately attached Nazi ideology to the image of Pepe the Frog. Even a simple Google search can nudge a user toward white nationalism, depending on what results come back.
“When we talk about the white supremacist radicalization, we have to talk about the most basic pathways: Recommendations. Autofill. Search results,” says Lois Beckett of Guardian US.
NPR found that in December 2016 and January 2017, typing “b-l-a-c-k o-n” into a Google search box elicited “black on white crime” as the top autofill suggestion. This, they theorized, could have helped radicalize the Charleston church shooter at the very moment he was being drawn toward white nationalism. (I’ve just tried this on Google and it’s no longer true, which suggests the company has since adjusted its secret algorithm.)
Why Aren’t Tech Companies Doing More to Combat White Terrorists?
“It’s no secret where people are becoming radicalized,” says policy analyst and Black activist Samuel Sinyangwe, specifically calling out Google, YouTube, Facebook, Reddit, and Microsoft’s Xbox Live. These companies “could conduct a systematic audit to identify all of the pathways to white supremacist content online and then replace these results/recommendations with information designed to debunk and deprogram white supremacist beliefs.”
We know they could, because they already do it for Muslim extremist groups. So why don’t they? Sinyangwe cites a corporate “lack of courage or willingness” as the main obstacle.
Others suggest tech companies are kowtowing to governments by favoring white American terrorist groups. Hannah Bloch-Wehba, a professor at Drexel University’s Kline School of Law, and a fellow at Yale Law School’s Information Society Project, says “when platforms promise to be more ‘proactive’ in addressing violent threats, they mean only the violent threats that have already proven to be a priority for governments that are leaning on them.”
Big Tech Offers Milquetoast Responses
Twitter CEO Jack Dorsey is in no hurry to solve his company’s Nazi problem, even as it drags down share prices. His rambling responses are too long to print here, and they don’t really say anything. Other platforms have offered excuses for why they crack down on Muslim extremists more swiftly than on others. YouTube said this week, “Many violent extremist groups, like ISIS, use common footage and imagery. These can also be signals for our detection systems that help us to remove content at scale.”
As Wired points out, that’s a self-perpetuating cycle. These companies can target Muslim extremists more quickly partly because they have amassed so much data on what Muslim extremist content looks like. Because they never bothered gathering equivalent data on white nationalist content, they’re now egregiously behind.
Whatever their reasons, it’s plain as day that social media in general, and Twitter in particular, has a serious terrorist problem. And the solution doesn’t seem to be coming from company leaders anytime soon.