Monday, 4 September 2017

The explosion of online propaganda and the use of our own data to manipulate us

Big data is watching you:
Futures Forum: Big data and big lies...
Futures Forum: A simple guide to algorithms

Cambridge Analytica has been at work in Kenya of late:
After Trump, "big data" firm Cambridge Analytica is now working in Kenya - BBC News
Kenya: The Election & the Cover-Up | by Helen Epstein | NYR Daily | The New York Review of Books

With two very substantial pieces from the Guardian/Observer earlier this year:
Did Cambridge Analytica influence the Brexit vote and the US election? | Politics | The Guardian
The great British Brexit robbery: how our democracy was hijacked | Technology | The Guardian

And another in the latest New European:

Mike Hind: Battling the bots in the war of the web

PUBLISHED: 13:46 29 August 2017



Mike Hind goes on the trail of the army of ‘propaganda bots’ manipulating political debate online, to see how we can fight back

Brexit and Trump may look like unmitigated disasters right now, but there is a silver lining: both lifted the lid on two alarming issues – the explosion of online propaganda and the use of our own data to manipulate us.
At first glance, you may not see the connection between doing a personality quiz on Facebook or using a particular app on your phone, and noticing certain stuff in your news feed or in your Google search results. But the two are interwoven, and we are now starting to understand how. Academics, lawyers, journalists and citizen investigators are fighting back. We all have a part to play in this, and it begins with understanding what is going on.

If you are active on social platforms like Facebook, Instagram and Twitter you’ll be familiar with those fanatical accounts that look like ordinary people peddling hyper-partisan messages on the evils of the EU, impending migrant-inflicted doom for the West and so on. The first thing to understand about this phenomenon is that most of those ‘people’ aren’t real.

At the Oxford Internet Institute, the Computational Propaganda Research Project recently published a series of case studies on the use of ‘bots’ in the spread of political misinformation worldwide. One of the most remarkable nuggets unearthed in the Polish study was an interview with a marketing company which is paid to promote material using networks of fake social media users – or ‘bots’. Learning to spot bots is now widely seen as an essential first step in reducing their effectiveness. But what is a ‘bot’?

“Bot is a term that is thrown around a lot and all it means is an automated computer script working on social media, mimicking human behaviour and engaging with other users,” explains Lisa-Maria Neudert, a researcher at Oxford.
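To make that definition concrete, here is a minimal sketch of what such a script can look like, written in Python against Tweepy, a widely used client library for the Twitter API. The credentials, messages and timings are placeholders for illustration, not anything taken from a real operation:

```python
# A minimal illustration of an automated posting script ("bot").
# Credentials and messages are placeholders, for illustration only.
import random
import time

import tweepy  # third-party Twitter API client

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# The human behind the account supplies the talking points.
messages = [
    "Placeholder talking point 1",
    "Placeholder talking point 2",
]

while True:
    api.update_status(random.choice(messages))  # post a canned message
    time.sleep(random.uniform(600, 3600))       # irregular gaps, to look human
```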

But it’s a mistake to see them as pure ‘robots’, she says, because there is always a human somewhere behind the account, feeding it with the information to publish and the subjects on which to engage with real users. The mix of automation and human involvement also varies between accounts, leading the researchers to dub those with more human intervention ‘cyborg’ accounts. And we are all fair game for these bots.

“They target anyone on social media, also focusing on opinion leaders, journalists and people we call ‘multipliers’ – those who can be influenced to carry their message further out into the public and political sphere.”

The aims of bots today are the same as they so famously were in the lead-up to the EU referendum and the election of Donald Trump – to manufacture the appearance of social consensus around an issue. That’s why there are so many of them and why they post content so often. And they affect your experience of the online world, whether you interact with them or not.

“It’s effective,” says Neudert. “It shuts people up and makes them less inclined to speak up on social media. And there are indirect effects too, like encouraging distrust in the media, which is now manifesting itself everywhere. You’re affected whether or not you even see a bot account, because the impact shows in the number of likes you might see on a Facebook post, what you see trending and even what makes it into your own newsfeed.”

This is because the digitally mechanised publishing of stories, which are then shared by thousands of linked bot accounts, consistently fools the social media platforms – and even Google itself – into seeing certain content as especially popular and ‘authoritative’.
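The mechanism is easy to see in miniature. The toy sketch below, with invented accounts and numbers, shows how any ranking driven by raw engagement counts is captured by a network pushing a single link:

```python
# Toy illustration: a "trending" ranking driven by raw share counts is
# trivially inflated by a bot network pushing one URL.
from collections import Counter

shares = Counter()

# A few dozen genuine users share a mix of stories...
for i in range(40):
    shares[f"https://example.org/story-{i % 8}"] += 1

# ...while a linked network of 2,000 bot accounts all push one planted story.
shares["https://example.org/planted-story"] += 2000

# The naive popularity ranking now leads with the planted story by a mile.
print(shares.most_common(3))
```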

Perhaps amusingly, the bots – or their operators – are showing signs of becoming wary of the new scrutiny they’re under from people like Neudert. She recounts how a colleague in Germany spotted a change in behaviour from two major bot networks after the publication of one research paper.

“We had published our criteria for what we would treat as a bot and it included tweeting on a particular hashtag 50 or more times per day. My colleague got in touch to say she’d noticed a couple of bot networks suddenly limiting their tweets to 49 per day.”
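Her published criterion amounts to a simple daily counting rule. The sketch below, using invented data, shows both the rule and why a hard threshold is so easy to game:

```python
# Sketch of the published heuristic: flag any account that tweets on a
# given hashtag 50 or more times in one day. All data here is invented.
from collections import Counter

BOT_THRESHOLD = 50  # tweets per hashtag per day

# (account, hashtag) pairs observed over one day
tweets = (
    [("@suspect_account", "#referendum")] * 62    # well over the threshold
    + [("@evasive_account", "#referendum")] * 49  # stopping just short of it
    + [("@ordinary_user", "#referendum")] * 3
)

daily_counts = Counter(tweets)
flagged = [account for (account, tag), n in daily_counts.items()
           if n >= BOT_THRESHOLD]
print(flagged)  # ['@suspect_account'] - the 49-a-day network slips through
```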

Neudert expects the cat and mouse game between researchers and the propagandists to become more complex.

This is why any answer to the problem will be not only technical but also educational, taking in the wider public. The Atlantic Council think tank has established a worldwide collective of academic and citizen investigators known as the Digital Forensic Research Lab. They focus mainly on propaganda relating to armed conflict zones, like Syria, Libya and Ukraine, but their findings are often relevant to the wider study of what is now known as ‘comprop’ – computational propaganda.

Donara Barojan is a DFR Lab research associate in Latvia who recently revealed an underground economy serving anyone who wants to create a powerful online propaganda network. Barojan looked into the ‘dark web’ – the part of the internet that is not indexed by search engines and is reachable only through anonymising software such as Tor. Naturally, this secretive environment is a hotbed of illicit activity. What Barojan found was a burgeoning eBay- or Amazon-style marketplace called Hansa Market (since taken offline by the authorities), where sellers offered digital products and services, including bots.

On Hansa, all your propaganda network needs were served by sellers like ‘DigitalPablo’, who enjoyed rave reviews for products like 2,000 high-quality USA-based Twitter retweets for just over six bucks (0.0025 bitcoin), or 10,000 followers (bots that will follow you, like and share your tweets) for a bargain $22.50 (0.0090 bitcoin). DigitalPablo and his, her or their competitors also offered a comprehensive range of similar products to help you gain a bigger profile on Facebook, YouTube, Instagram and other social media sites.
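As a quick sanity check on those prices: taking “just over six bucks” to mean $6.25 (an assumption), both listings imply the same conversion rate of $2,500 per bitcoin, and the follower package works out at a fraction of a cent per fake account:

```python
# Quick check of the marketplace arithmetic quoted above.
# $6.25 for the retweet listing is an assumed reading of "just over six bucks".
retweets_usd, retweets_btc = 6.25, 0.0025
followers_usd, followers_btc = 22.50, 0.0090

print(round(retweets_usd / retweets_btc, 2))    # 2500.0 - implied USD per BTC
print(round(followers_usd / followers_btc, 2))  # 2500.0 - the same implied rate
print(followers_usd / 10_000)                   # 0.00225 dollars per fake follower
```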

Barojan is part of the effort to bring the world of digital disinformation to a wider public, and was recently involved in training 80 journalists at DFR Lab’s latest event, in Warsaw, teaching them how to surface the facts behind fake news and fake people online.

“We want to popularise digital forensics as a media literacy skill. There are many freely available tools that can be used to establish the truth, or otherwise, of many things. All you need is curiosity and some free time.”

She says Brexit and Trump have led to a huge surge of interest among journalists, everyday citizens and politicians in the mechanisms underpinning the spread of fake information online.

Although there are increasingly loud calls for organisations like Google, Facebook and Twitter to “do something” about the issue, it’s interesting that Barojan emphasises that education is the answer. She says: “Snopes (the fact-checking site) was established in 1994, so hoaxes and misleading stories are not a new problem. People say we live in a post-truth society, but society has never been completely transparent. The difference now is whether we want to live in a society which has complete disregard for the truth.

“Rather than legislation and regulation, the answer is digital media literacy. Society will be more resilient the more people actively question what they are seeing. We are currently not spending enough time in schools teaching people about this.”

The impact of disinformation – and the misuse of real information – is coming onto the radar of politicians. As a shadow minister whose brief covers Business, Innovation, Culture and Media, Labour’s Chi Onwurah is especially concerned about protection for those of us on the receiving end of the bots’ messages – and about how our personal data is used to target us.

The recent stories – centring on Cambridge Analytica – of personal data being used to help manipulate people into voting one way, or not voting at all, are just the tip of a very large iceberg. Onwurah recently shared a platform with lawyer Ravi Naik, discussing how technology and the proliferation of personal data on all of us appear to be outstripping the protections of legislation and regulation.

Onwurah says: “The problem is that we haven’t had anything like a coherent informed debate about the impact big data, data mining and all those very predictable and relatively well understood technology trends are having on citizens, business, civic and political life.”

Onwurah argues that the failure to update the 2003 Communications Act, in the face of such rapid change, makes “the 2013 Data Act the most important Act that never was”.

“Things are popping up all over the place, like issues with medical data and what Facebook, Google and Amazon are doing with your data, and we don’t have even the beginnings of a coherent regulatory framework to deal with it.

“There’s definitely a lack of understanding in government but, more disturbingly, a will to have a kind of free-for-all and to make this a private sector question that government cannot address. The government is afraid to address data because it’s afraid of the vested interests and of establishing more rights for consumers and citizens.

“My first priority would be to give people more clear rights over control of their data and what ownership of it needs to mean. I’m a great believer in technology and I believe that when people have more control and agency they can start finding solutions for themselves.”

In the meantime, Naik sees echoes of a new civil rights movement in the way citizens are beginning to ask organisations like Cambridge Analytica’s parent company, SCL Elections, what data those businesses hold on them. He too believes the government is in thrall to a view of our data as a free market commodity for anyone who wants to exploit it.

He says: “There is a libertarian view on the unrestrained and unregulated flow of personal data, which has been dubbed ‘Dataism’, and it’s now seen in some quarters as a question of human rights getting in the way of data rights for businesses and organisations.

“How is my data being used and how is it controlled? These are the key questions about data protection as a whole and that is what we’re empowering people to find out.”

He points out that most of our regulations and protections stem from the EU’s General Data Protection Regulation (GDPR) covering member states, or are based on interpretation of European law. Brexit could, in his view, lead to moves in some quarters to remove those protections.

He is now calling on everyone who is interested in what personal data is held on them – and how it is being used, or where it is being sold on – to make what is called a ‘subject access request’ of any organisation they choose.

Naik is joined in this call by Observer journalist Carole Cadwalladr, who has led the mainstream media field in raising questions about the connections between the main Brexit campaigns last year and the related companies Cambridge Analytica and AggregateIQ.

Cadwalladr’s revelations so far have been a must-read for anyone interested in whether or not foreign-based ‘big data miners’ had an impact on the referendum outcome. The Electoral Commission and the Information Commissioner are still investigating, but throw in the fact that some of the companies in question are owned by Donald Trump’s biggest donor and it’s clear answers are needed.

She is now leading calls for a public enquiry into the conduct of the referendum by the various pro-Brexit organisations and sees greater public awareness of the issues as key. Cadwalladr explains: “The CPS and police have kicked it back to the Electoral Commission, who in turn have been fobbed off by being told their questions are subject to non-disclosure agreements.

“Lawyers I’ve consulted say the best way we can get to the truth is through a public enquiry and I’m looking at ways to make that happen. The government will only act if there is sufficient moral outrage but interest is growing. People are starting to ask who has their data and what is being done with it. With a few exceptions, like Chi Onwurah, the message hasn’t really got through yet but the more people act on their concerns the more it will trickle through to politicians.”

Because the avalanche of largely right-wing and populist misinformation can sometimes seem overwhelming, it’s easy to succumb to the idea that all is lost for the liberal centre. Not so. Networks of people are learning how to fight back. They may be coming from behind, after losing two big battles, but in terms of understanding, the landscape has been transformed in a very short time.

What all of the people interviewed for this feature stress is the power we all have as individuals to get involved, so that the fight back can build momentum. Our choice is to ignore what’s happening and hope someone else will sort it out – or to take action. In the fight against computational propaganda, what is at stake is democracy itself. That makes the effort of a subject access request for your data, or of learning some simple techniques to prove that a story is fake, a pretty good investment.

Mike Hind (@MikeH_PR) is an independent journalist, PR and marketing consultant


Mike Hind: Battling the bots in the war of the web - Top Stories - The New European
