
Candace Rondeaux and Heather Hurlburt: How Parler reveals the alarming trajectory of political violence

Biden administration needs a game plan to deal with an escalating threat from platform migration.

(Kevin Van Aelst | The New York Times)

Since the Jan. 6 siege of the Capitol in Washington, right-wing extremists on social media have continued to glorify violence, draw new adherents and forge fresh plans for mayhem. This ominous activity presents an urgent threat to the security and social cohesion of the United States.

But there is another, less obvious takeaway: Experts know — or can know — an enormous amount about the nature and evolution of the threat.

Data sleuths have combed through a 70 terabyte cache of data from Parler, the now-defunct social media platform popular among the far right. Researchers have archived and mapped millions of those ethically hacked posts, which were wrangled by an anonymous, purportedly Austria-based hacker. The haul — potentially bigger than the WikiLeaks data dump of the Afghan War logs and the Democratic National Committee leak, combined — includes valuable evidence, including plans for further attacks, mixed in with the private data of individuals who committed no crimes (along with quite a bit of pornography). The early takeaways are terrifying: According to at least one preliminary analysis, the frequency of hashtags on Parler referencing hanging or killing duly elected members of Congress more than doubled after the November elections.

Until the nation reckons with the self-inflicted wounds stemming from an under-regulated, unreformed social media information architecture, President Biden’s calls for healing and national unity won’t produce substantial, lasting results. The new administration needs a long-term plan to confront the escalating threat, as far-right insurgents migrate from one platform to the next.

The Parler hack is the place to start. It indicates that moderation of violent, racist, anti-democratic content will increasingly lead to migration of that same hateful content. For instance, the deplatforming of Parler triggered a virtual stampede to similar forums like Gab and Rumble. Analysts have already documented Parler groups re-forming and spreading ever more hateful content on Telegram and a host of smaller platforms.

As the Parler case study shows, deplatforming also makes valuable data disappear. But extremists don’t just vanish — they tumble into “smaller and smaller rabbit holes,” in the words of researcher Peter Singer. Those rabbit holes make up a large, growing and uncontrollable far-right media universe.

After a number of large tech companies stopped supporting Parler, it was resurrected in a new form — a landing page that promised a full return — on the same web hosting service that provides a platform for The Daily Stormer, a neo-Nazi message board. The move suggests that even if the tech giants get rid of toxic content, smaller companies will step in as safe havens.

That could change. There are mounting calls in Congress for investigations into the role played by Parler and other social media platforms in the siege in Washington. Inaction risks the very real possibility that countries like Russia may offer themselves as a web hosting alternative for violent anti-democratic factions inside the United States: Parler has reportedly engaged in business dealings with Russian-owned tech firms.

Deplatforming, then, must be coupled with better, faster and more comprehensive data collection and analysis. Facebook, Twitter and others must also be more transparent about preserving evidence of account takedowns, so disinformation researchers can put the pieces together for a public thirsty for accountability from Silicon Valley.

While the tech industry must take more assertive action on moderation, policymakers must also acknowledge that the self-policing model adopted by Facebook, Twitter, Google, Amazon and others is broken. All sides would be better served by the adoption — and vigorous enforcement — of legal norms for online content moderation, incitement and expectations of privacy.

In the long run, this shift will also help Silicon Valley firms manage competing expectations from major global markets, which have often instituted much more aggressive government oversight. The question for the United States is whether the future of the internet runs toward Europe’s community-oriented version or Beijing’s authoritarian-empowerment model.

The threats to national security posed by this information disorder demand more open collaboration between policymakers, the tech industry and the research community. Together, they must accept the fact that the internet ecosystem of right-wing extremism is vast, and that the risk of its exponential expansion on the dark web is substantial.

Nearly a decade ago, former CIA director Leon Panetta’s warning of a “cyber-Pearl Harbor” evoked images of a Russian, Chinese or Al Qaeda-led attack on American infrastructure. But few imagined that American democracy could be taken out by what is effectively a virtual suicide bomb, driven by millions of U.S. citizens exercising their First Amendment rights.

The future of American democracy depends on our defusing that bomb — together.

Candace Rondeaux is a senior fellow with the Center on the Future of War, a joint initiative of New America and Arizona State University. Heather Hurlburt directs New Models of Policy Change at New America.