Fake news: Why the West is blind to Russia's propaganda today
Chris Zappone
Russia has skilfully exploited social media to divide the West and increase Moscow's power in Europe, the US and eventually Asia.
The use of social media as a platform to divide democracies works, in part, because the strategy preys on a fundamental blind spot in open societies: the origin and volume of voices taking part in an online discussion.
Western countries, inventors of the internet and social media platforms like Facebook and Twitter, tend to see discussion on social media as an open reflection of the public's views.
That very openness means outside voices can weigh into debates - not to broaden the discussion, but to co-opt arguments and redirect them toward conclusions that undermine Western society and government.
Propaganda can be crystallised into hashtags, and the meaning can be warped to cloud the understanding of a subject or trash the reputation of ideas, parties or figures.
In this world, the opposite of free trade is not protectionism but "anti-globalism".
Liberals are all "neo-liberals". A vote for Trump will "avoid war with Russia".
Hillary becomes "Killary".
These messages can be delivered directly to voters or, indirectly, to the media.
On social media, bots (short for robots) are small programs that automate the posting of and reply to messages. Political campaigns in democracies have used bots in recent years, not always achieving the effects they desired.
More recently, though, bots have also been directed toward Western election campaigns.
For example, before Britain's referendum on European Union membership in June, about 1 per cent of Twitter profiles generated one-third of traffic on the issue, according to the Computational Propaganda project at the Oxford Internet Institute.
"Most of those were bot accounts," says Samuel Woolley, director of research of the project. "They were tweeting automatically and they generated tons of content."
The increased use of bots can elevate awareness of an idea, cause distrust, or just as easily confuse or muddle a political debate.
The impact of interaction between bots and humans is not always clear, Woolley says. But it is clear that bots can be used to get key words to trend, representing a huge back door to Western media for outside forces.
Facebook, Google and Twitter trending algorithms are pretty much based on numbers, Woolley says, "and bots massively drive up the algorithms".
"The algorithms end up getting gamed," he said.
Trending terms, says Woolley, help journalists figure out "what to report on, what are the most important things surrounding the debate, which hashtags are being used, and why and by who". And those who control what's trending - through bots, through a co-ordinated social media presence - can help steer the larger conversation in media.
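Why is a purely count-based trending score so easy to game? A toy example (not any platform's actual algorithm, with invented account names and hashtags) shows how a handful of automated accounts can outweigh a much larger pool of genuine users:

```python
# Toy illustration of gaming a count-based trending score.
# Account names, hashtags and numbers are invented for the example.
from collections import Counter

posts = []

# 1,000 genuine users each mention a real story once.
for i in range(1000):
    posts.append((f"user_{i}", "#budget"))

# 20 bot accounts each push a propaganda hashtag 100 times.
for i in range(20):
    for _ in range(100):
        posts.append((f"bot_{i}", "#globalism"))

# A naive trending score: count mentions, ignoring who posted them.
trending = Counter(tag for _account, tag in posts)
print(trending.most_common(2))
# [('#globalism', 2000), ('#budget', 1000)]
# 20 accounts out of 1,020 generate two-thirds of the traffic - roughly the
# kind of concentration the Oxford researchers observed around the EU vote.
```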
Technology used in a certain way, in other words, can dramatically amplify the volume of an argument, view, voice or ideology.
This was especially true of groups like the "alt-right", Woolley said, which were effective in making themselves appear much more popular than they were in reality.
Alt-right figures, in addition to promoting a racist world view, tend to parrot Russian views on global affairs - itself a feature of an internationalised movement.
Woolley says his research showed bots located in the US and Russia, but also in Japan, were active in trying to influence the US election.
Woolley says the increasing sophistication of bots makes it harder to determine whether accounts are human, fully automated, or a hybrid of scripts and human interaction.
Invisible
Another way the West is blind to manipulation is the dissemination of real news, through real journalists, for strategic effect - in ways the journalists themselves cannot see.
After the Democrats were hacked by Russian outfits, the stolen material was fed back to eager reporters as "exclusives", with hacker "fronts" such as DCLeaks and Guccifer 2.0 being used as "PR agencies".
Within the context of Western media, journalists snapping up "exclusive leaks" is normal. But as with so much related to technology, the volume of these "exclusives" can be scaled up dramatically.
Over time, the persistence of the same theme in these news stories - in this case, Hillary Clinton's emails - on networked media creates a surrounding effect for users. The stream of stories contains different details but conveys the same theme - that Clinton was wildly morally corrupt.
US intelligence agencies see Russia behind this effort to denigrate Clinton and the Democrats. But such efforts aren't new.
Hacks plus spin
In 2009, for example, just before a crucial global conference on climate change, emails hacked from the University of East Anglia in Britain were posted on a server in Russia, with links sent to right-wing climate change-sceptic sites.
The bloggers misconstrued the emails in publicising them, setting off a controversy that helped fuel the false perception that climate scientists had hidden important information from the public.
One of the same Russian hacking outfits blamed for stealing the Democrats' emails in 2016 also stole and published medical information on tennis stars Serena and Venus Williams and gymnast Simone Biles, which suggested, erroneously, that the athletes had received exceptions to the rules of the World Anti-Doping Agency.
In the few hours that the story went unchallenged, Russia's media pounced, pointing to Western "hypocrisy" after Russia's teams had been accused of serious doping offences.
In fact, coordination between those stealing the documents and the sometimes-fringe media that publicise them is an important element of such propaganda. It is in the public domain first - and contextualised afterward. And who gives it context in this environment largely determines how the information is seen. This is how the mostly quotidian correspondence of the US Democrats could be shared as proof of Hillary Clinton's supposedly nefarious activities.
And then, in the too-quick-to-verify pace of online news, there is also the possibility of inauthentic documents being mixed in with the real ones.
When Russian hackers broke into pro-democracy billionaire George Soros' Open Society Foundations and published the documents they stole, they included doctored versions of papers that purported to show a connection between Soros and Russian anti-corruption activist Alexei Navalny. It wasn't true.
The ease of recycling hacked material into the media has "exposed a frightening vulnerability in our society", wrote Joshua Foust, an intelligence analyst and journalist.
"The worst gossip-chasing tendencies in the media and the lackadaisical security of many legacy email systems have created a perfect storm.
"It isn't clear how to characterise these attacks (Are they cyber? Propaganda? Something new?), so it isn't clear which agency should be in charge of coordinating a response β or even if a response is possible," Foust wrote in
War on the Rocks.
In this area, free speech and the public's right to know give cover to crippling attacks on the legitimacy of public figures and institutions.
Top hashtags related to 'globalism'. Notice the overlap with NWO, or 'New World Order', a term favoured by conspiracy theorists. Source: Hashtagify