
Philip Bump: U.S. political conversation is not and probably never was driven by Russian social-media bots

Hundreds of thousands of dollars in your bank account is a lot. Hundreds of thousands of tweets in a month is not.

FILE - In this May 16, 2012, file photo, the Facebook logo is displayed on an iPad in Philadelphia. Senators are moving to boost transparency for online political ads, unveiling on Oct. 19, 2017, what could be the first of several pieces of legislation to try to lessen influence from Russia or other foreign actors on U.S. elections. (AP Photo/Matt Rourke, File)

Over the past several weeks, automated social-media accounts powered by Russian actors have been blamed for 1) driving the push to release Rep. Devin Nunes’ (R-Calif.) memo about the FBI; 2) forcing former Minnesota senator Al Franken from office; and 3) flooding the public conversation in the wake of the mass shooting in Florida. Oh, and No. 4: Special counsel Robert S. Mueller III unveiled a number of federal charges targeting Russian individuals who engaged in deceptive online activity during and after the 2016 campaign with the apparent goal of bolstering President Donald Trump’s candidacy and sowing division in the United States.

It’s easy to read that litany and uncritically assume that our politics are guided by an insidious foreign hand. So many things of such significance tainted by Russian actors? What, after all, can we actually trust?

The problem with that list is that it is alarmist both as presented and, to a lesser extent, as reported. There simply isn’t any evidence that bots linked to the Russian government had or have a significant effect on the U.S. political conversation.

Consider this tweet from former Bush administration press secretary Ari Fleischer. He links to a comment made by a vice president at Facebook who notes that the Russian activity on the social network was more about divisiveness than about Trump, with more than half of the ad spending coming after the election.

Fleischer tweeted, “Informative thread: From Facebook’s VP of sales. He says most of the ads bought by Russia were purchased after the election — for the purpose of dividing us. It worked.”

The two words that undercut Fleischer’s tweet are the last two: “It worked.”

We looked at the indictment from Mueller’s team Friday. It included efforts to launch real-world events as well as social-media posts and ads. The ads included taglines like “Donald wants to defeat terrorism … Hillary wants to sponsor it,” and “#NeverHillary #HillaryForPrison #Hillary4Prison #HillaryForPrison2016 #Trump2016 #Trump #Trump4President.” Hillary Clinton at the time was expected to win the presidential election; a central Russian goal was apparently to make that victory as close as possible and to have a cloud of skepticism follow Clinton into the White House.

But reading the ads included in the indictment and looking at other ads released publicly by Facebook, it’s hard to come away with the sense that these were decision-makers for many voters. It’s often hard to measure the effectiveness of political advertising, but these seem particularly mediocre. If your goal is to mix things up and frustrate people, the bar for success is lower.

What Fleischer’s tweet implies is that Russian actors successfully sowed division in the United States. That’s incorrect. Russia’s efforts reflected and tried to leverage existing divisions.

Pew Research measured political animosity in June 2016, finding that more than 4 in 10 partisans viewed members of the other party as a threat to the United States. That survey came before many of the more provocative actions taken by the Russians and reflects a long-term trend of growing partisan frustration. America was already divided; the Russians appear to have tried to make that gap wider.

There’s not really much reason to think those efforts were very successful. The scale of Russia’s ad buys — and the other apparent Russian efforts before and after the election — is tiny. Data released by the House Intelligence Committee show that Russian ads were viewed only 340,000 times in the last month of the campaign. There were 231 million monthly active Facebook users in North America in the fourth quarter of 2016, meaning that perhaps one-tenth of 1 percent of users saw Russian ads that month — assuming that all of those views were in North America (not a safe assumption) and that no one saw one of the ads more than once (also not a safe assumption).
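That back-of-the-envelope figure is easy to check. A minimal sketch, using the 340,000-view and 231 million-user numbers cited above and the same generous assumption (flagged in the text as unsafe) that every view was a unique North American user:

```python
# Upper bound on the share of North American Facebook users who could
# have seen a Russian ad in the campaign's final month.
ad_views = 340_000            # Russian ad views, last month of the campaign
monthly_users = 231_000_000   # North American monthly active users, Q4 2016

# Assumes every view was a distinct North American user -- an overestimate,
# since views can repeat and need not all be North American.
share = ad_views / monthly_users
print(f"{share:.2%}")  # 0.15%, i.e. on the order of one-tenth of 1 percent
```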

When we talk about “flooding” social media, the same scale applies. The New York Times wrote that in the hour after news broke of the shooting in Parkland, Fla., last week, “Twitter accounts suspected of having links to Russia released hundreds of posts taking up the gun control debate.” The Times didn’t use the term “flooding,” but Wired did.

Hundreds of tweets is not a “flood.” It’s unclear how many tweets there are each day in the United States, but it’s safe to say that there are more than 100 million. Assuming those tweets are distributed evenly over the course of a day (which they aren’t), 1,000 tweets is 0.02 percent of an hour’s total.

If anyone saw those tweets. Russian bots (and accounts powered by real Russian trolls) tweet a lot but don’t often have a lot of real followers. Getting real followers is a lot harder than getting fake ones. So a bot throwing a tweet out into the void doesn’t mean it will ever be seen. Twitter has provided an answer to the koan: If you tweet and no one is around to see it, it’s not heard.

As evidence, we can look at that report about how bots tried to push for Franken’s resignation, a story picked up by Newsweek and the liberal site Raw Story. The evidence that bots affected what happened to Franken? The bots started tweeting an article that undercut Franken politically but pointed back to two newly created websites.

“The bot accounts normally tweeted about celebrities, bitcoin and sports, but on that day, they were mobilized against Franken,” Nina Burleigh wrote for Newsweek. “Researchers have found that each bot account had 30 to 60 followers, all Japanese. The first follower for each account was either Japanese or Russian.”

The author of that article weighed in on Twitter. Precisely none of the traffic the article saw came from the sites the bots were pushing. Why were the bots pushing it? One possible reason is that the Franken story was big news and the newly created sites were hoping that people searching Twitter for Franken news would come to their sites — and see the dozens of ads littering them.

Bots do clearly try to game Twitter’s built-in tools for attention. At Politico, Molly McKew of New Media Frontier argues that bots pushing the “#ReleaseTheMemo” hashtag helped propel that hashtag into national prominence, shifting the debate over Nunes’ memo alleging misbehavior by the FBI. McKew notes how prominent conservatives like Sean Hannity eventually picked up the hashtag (which, we’ll note, was originally tweeted by a real person), as did the president’s son.

McKew doesn’t demonstrate unequivocally that the bot activity led directly to the hashtag’s usage among high-profile conservatives, that key accounts using it were actually bots or that the hashtag wouldn’t have been as successful had the bots not weighed in. Does anyone really think, though, that it was the existence of the hashtag that spurred Trump to actually release the memo? Hannity and other prominent members of the conservative media were also arguing on television that Nunes’ memo showed bias against Trump. The hashtag may have been an organizing tool, but the nature of the issue itself is clearly what spurred Trump’s enthusiasm.

The pattern is similar to 2016: There were already a lot of people who supported Trump and releasing the memo, and the Russians may (may) have tried to reinforce that position. Whether those efforts were successful is another question entirely.

Data from a project called Hamilton68 are often cited as evidence of the Russian bots’ pervasive presence. On Feb. 14, the day of the shooting in Parkland, the bots tracked by Hamilton68 tweeted 18,000 times — 0.02 percent of the estimated 100 million daily tweets. The most discussed subject over the past 48 hours, as of this writing, was Syria. The bots tweeted about it 209 times in that window. That’s about 0.0001 percent of all tweets over that period.
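Those Hamilton68 percentages follow from the same 100-million-a-day estimate; a quick check:

```python
daily_tweets = 100_000_000  # estimated daily tweet volume, per the text

parkland_share = 18_000 / daily_tweets  # bot tweets on Feb. 14, one day's volume
syria_share = 209 / (daily_tweets * 2)  # Syria tweets over a 48-hour window

print(f"{parkland_share:.2%}")  # 0.02%
print(f"{syria_share:.4%}")     # 0.0001%
```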

People are bad at scale when it involves figures in the millions. We didn’t evolve to need to comprehend how 100,000 compares to 1,000,000 and, to us, “hundreds” seems like “a lot” almost regardless of context. Hundreds of thousands of dollars in your bank account is a lot. Hundreds of thousands of tweets in a month is not.

The reason Coke runs ads is to remind people of a message they’ve heard hundreds of times in the past. No single Coke ad is likely the reason you may have bought a Coke recently. What Russia seems to be trying to do on social media is, figuratively speaking, to throw out the occasional Coke ad. It’s not a flood of Coke ads, especially compared with all of Coke’s ads, much less with advertising in general.

If those ads prompted anyone to buy a Coke, that purchase was probably not a big part of Coca-Cola’s annual sales, either.

Philip Bump | The Washington Post

Philip Bump is a correspondent for The Washington Post based in New York.