Memetic information warfare on the social information graph: using viral media to control human behaviour. The other side of trusted news: hacking the implicit reputation system of social media to suborn factual reporting, or to motivate people to behave in ways that suit your goals, e.g. to sell uncertainty.
Research in this area is plagued by many unknowns, possibly because our tools for causal inference on social graphs are weak, or perhaps because the tools some actors have are really good, and people with really good tools for controlling the public are not going to mention that. But we can get a long way! See Media virality for some models that we can use.
But for now, here is some qualitative journalism from the front lines.
Coscia is always low-key fun in this domain: News on Social Media: It’s not Real if I don’t Like it.
Toxic-social-media doco The Social Dilemma was a thing, although not a thing I got around to watching. Was it any good?
Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble.
But ultimately the information war is about territory — just not the geographic kind.
In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics. […] The key problem is this: platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors.
Michael Hobbes, The Methods of Moral Panic Journalism
Master List Of Official Russia Claims That Proved To Be Bogus lists some really interesting incidents in Trump-era media reporting that make the entire press corps look pretty bad.
[…] every product, brand, politician, charity, and social movement is trying to manipulate your emotions on some level, and they’re running A/B tests to find out how. They all want you to use more, spend more, vote for them, donate money, or sign a petition by making you happy, insecure, optimistic, sad, or angry. There are many tools for discovering how best to manipulate these emotions, including analytics, focus groups, and A/B tests.
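To make the A/B-testing claim concrete: the statistical machinery involved is nothing exotic. A minimal sketch of scoring one such test with a two-proportion z-test; the counts, variant labels, and anger-bait framing are invented for illustration, not from the original:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: variant A (neutral copy) vs variant B (anger-bait copy).
clicks = [310, 390]          # conversions per variant
impressions = [10_000, 10_000]

# Two-sided test of whether the click-through rates differ.
stat, p_value = proportions_ztest(count=clicks, nobs=impressions)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# A small p-value tells the campaign the emotional framing "works",
# regardless of whether the underlying claim is true.
```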
- parasocial interaction, the way our monkey minds regard remote celebrities as our intimates.
Sophie Zhang, I saw millions compromise their Facebook accounts to fuel fake engagement, raises an interesting point: people will willingly put their opinions in the hands of engagement farms for a small fee. In this case it is by selling their logins, but it is easy to interpolate a continuum between classic old-style shilling for some interest and this new mass-market version.
As Gwern points out, Littlewood’s Law of Media implies that the true anecdotes we can, in all truthfulness, recount grow increasingly weird. In a large enough sample you can find a small number of occurrences to support any hypothesis you would like.
[This] illustrates a version of Littlewood’s Law of Miracles: in a world with ~8 billion people, one which is increasingly networked and mobile and wealthy at that, a one-in-billion event will happen 8 times a month.
Human extremes are not only weirder than we suppose, they are weirder than we can suppose.
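The arithmetic behind the quote is worth spelling out. A back-of-envelope version, in which each person contributes one eligible “experience” per month; that rate is my assumption, chosen to match the quoted figure:

```python
# Littlewood's Law of Media, back of the envelope.
n_people = 8_000_000_000   # roughly the world's population
p_event = 1e-9             # probability of a "one in a billion" event per experience
experiences_per_month = 1  # assumed: one eligible experience per person per month

expected_events = n_people * p_event * experiences_per_month
print(expected_events)  # 8.0 -- eight such events per month, on average
# Each of the eight is a candidate viral anecdote in a networked world.
```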
But let’s, for a moment, assume that people actually intend to come to a shared understanding of the facts of reality, writ large and systemic. Do they even have the skills? I don’t know, but it is hard to work out when you are being fed bullshit, and we don’t do well at teaching that. There are courses on identifying the lazier type of bullshit,
and even courses on more sophisticated bullshit detection:
Craig Silverman (ed), Verification Handbook For Disinformation And Media Manipulation.
Will all the billions of humans on earth take such a course? Would they deploy the skills they learned thereby even if they did?
And, given that society is complex and counter-intuitive even if we are doing simple analysis of correlation, how about more complex causation, such as feedback loops? Nicky Case has a diagrammatic account of how “systems journalism” might work.
Nick Chater, Would You Stand Up to an Oppressive Regime?
Welcome! This is an online resource guide for civil society groups looking to better deal with the problem of disinformation. Let us know your concerns and we will suggest resources, curated by civil society practitioners and the Project on Computational Propaganda.
I found the previous organisation via the Data Skeptic podcast’s Fake News series.
An amusing portrait of Snopes.
Unfiltered news doesn’t share well, not at all:
- It can be emotional, but in the worst sense; no one is willing to spread a gruesome account from Mosul among their peers.
- Most likely, unfiltered news will convey a negative aspect of society. Again, another revelation from The Intercept or ProPublica won’t get many clicks.
- Unfiltered news can upset users’ views, beliefs, or opinions.
Tim Harford, The Problem With Facts:
[…] will this sudden focus on facts actually lead to a more informed electorate, better decisions, a renewed respect for the truth? The history of tobacco suggests not. The link between cigarettes and cancer was supported by the world’s leading medical scientists and, in 1964, the US surgeon general himself. The story was covered by well-trained journalists committed to the values of objectivity. Yet the tobacco lobbyists ran rings round them.
In the 1950s and 1960s, journalists had an excuse for their stumbles: the tobacco industry’s tactics were clever, complex and new. First, the industry appeared to engage, promising high-quality research into the issue. The public were assured that the best people were on the case. The second stage was to complicate the question and sow doubt: lung cancer might have any number of causes, after all. And wasn’t lung cancer, not cigarettes, what really mattered? Stage three was to undermine serious research and expertise. Autopsy reports would be dismissed as anecdotal, epidemiological work as merely statistical, and animal studies as irrelevant. Finally came normalisation: the industry would point out that the tobacco-cancer story was stale news. Couldn’t journalists find something new and interesting to say?
[…] In 1995, Robert Proctor, a historian at Stanford University who has studied the tobacco case closely, coined the word “agnotology”. This is the study of how ignorance is deliberately produced; the entire field was started by Proctor’s observation of the tobacco industry. The facts about smoking — indisputable facts, from unquestionable sources — did not carry the day. The indisputable facts were disputed. The unquestionable sources were questioned. Facts, it turns out, are important, but facts are not enough to win this kind of argument.
Conspiracy theories and their uses
see Conspiracy mania.
Sea-lioning is a common hack for trolls, and deserves a whole interesting essay on strategic conversation derailment. Here is one defence against it, the FAQ off system for live FAQs. Sea-lioning is one of many dogpiling strategies that are effective online, where economies of scarce attention are important.
How do we evaluate the effects of social media interventions? Standard survey modelling, of course.
There is some structure to exploit here, e.g. causalimpact and other such time-series causal-inference systems. How about when the data is a mixture of time-series data and one-off results (e.g. polling before an election and the election outcome itself)?
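For the pure time-series case, here is a minimal sketch of the interrupted-time-series workflow, assuming the Python port of Google’s CausalImpact package; the data, the intervention date, and the lift are all invented for illustration:

```python
import numpy as np
import pandas as pd
from causalimpact import CausalImpact  # Python port of Google's R package

# Synthetic example: daily engagement (y) plus an unaffected control series (x).
np.random.seed(0)
x = 100 + np.random.randn(100).cumsum()
y = 1.2 * x + np.random.randn(100)
y[70:] += 10  # pretend the intervention at t=70 lifted engagement by ~10 units

data = pd.DataFrame({"y": y, "x": x})  # response must be the first column
ci = CausalImpact(data, pre_period=[0, 69], post_period=[70, 99])
print(ci.summary())  # point estimate and credible interval for the causal effect
```

This covers the pure time-series case; the mixed time-series-plus-one-off-outcome case asked about above has, as far as I know, no such off-the-shelf tool.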
Getting to the data is fraught:
Facebook’s Illusory Promise of Transparency: Facebook is currently obstructing the Ad Observatory by NYU Tandon School of Engineering.
Various browser data-harvesting systems exist.
Recommendation rabbit holes
For now, see clickbait bandits.
Automatic trolling, infinite fake news
GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets.
It takes five minutes to download this package and start generating decent fake news; whether you gain anything over the traditional manual method is an open question.
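The claim is easy to verify. A minimal sketch using the Hugging Face transformers library rather than whatever package the original links; the prompt and sampling settings are my own illustration:

```python
from transformers import pipeline, set_seed

# Download the small GPT-2 checkpoint and build a generation pipeline.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuations reproducible

prompt = "BREAKING: Leaked documents reveal that"
for sample in generator(prompt, max_length=60, num_return_sequences=3):
    print(sample["generated_text"])
    print("---")
```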
Assembling these into a twitter bot farm is left as an exercise for the student.
Post hoc analysis
David Gilbert, YouTube’s Algorithm Keeps Suggesting Users Watch Climate Change Misinformation. The methodology here looks, at a glance, shallow but not implausible.
Craig Silverman, Jane Lytvynenko, William Kung, Disinformation For Hire: How A New Breed Of PR Firms Is Selling Lies Online
One firm promised to “use every tool and take every advantage available in order to change reality according to our client’s wishes.”
Kate Starbird, The surprising nuance behind the Russian troll strategy
Dan O’Sullivan, Inside the RNC Leak
In what is the largest known data exposure of its kind, UpGuard’s Cyber Risk Team can now confirm that a misconfigured database containing the sensitive personal details of over 198 million American voters was left exposed to the internet by a firm working on behalf of the Republican National Committee (RNC) in their efforts to elect Donald Trump. The data, which was stored in a publicly accessible cloud server owned by Republican data firm Deep Root Analytics, included 1.1 terabytes of entirely unsecured personal information compiled by DRA and at least two other Republican contractors, TargetPoint Consulting, Inc. and Data Trust. In total, the personal information of potentially near all of America’s 200 million registered voters was exposed, including names, dates of birth, home addresses, phone numbers, and voter registration details, as well as data described as “modeled” voter ethnicities and religions. […]
“‘Microtargeting is trying to unravel your political DNA,’ [Gage] said. ‘The more information I have about you, the better.’ The more information [Gage] has, the better he can group people into “target clusters” with names such as ‘Flag and Family Republicans’ or ‘Tax and Terrorism Moderates.’ Once a person is defined, finding the right message from the campaign becomes fairly simple.”
Businessweek, which published a major look into the campaign this morning, explains how the Trump team has quietly organized a data enterprise to sharpen its White House bid. According to the magazine, the campaign is meanwhile attempting to depress votes in demographics where Hillary Clinton is winning by wide margins.
Parscale was given a small budget to expand Trump’s base and decided to spend it all on Facebook. He developed rudimentary models, matching voters to their Facebook profiles and relying on that network’s “Lookalike Audiences” to expand his pool of targets. He ultimately placed $2 million in ads across several states, all from his laptop at home, then used the social network’s built-in “brand-lift” survey tool to gauge the effectiveness of his videos, which featured infographic-style explainers about his policy proposals or Trump speaking to the camera. “I always wonder why people in politics act like this stuff is so mystical,” Parscale says. “It’s the same shit we use in commercial, just has fancier names.”
But what’s often overlooked in press coverage is that ISIS doesn’t just have strong, organic support online. It also employs social-media strategies that inflate and control its message. Extremists of all stripes are increasingly using social media to recruit, radicalize and raise funds, and ISIS is one of the most adept practitioners of this approach.
The Israel Defence Forces have pioneered state military engagement with social media, with dedicated teams operating since Operation Cast Lead, its war in Gaza in 2008-9. The IDF is active on 30 platforms — including Twitter, Facebook, Youtube and Instagram — in six languages. “It enables us to engage with an audience we otherwise wouldn’t reach,” said an Israeli army spokesman. […] During last summer’s war in Gaza, Operation Protective Edge, the IDF and Hamas’s military wing, the Qassam Brigades, tweeted prolifically, sometimes engaging directly with one another.
An internal Facebook report presented to executives in 2018 found that the company was well aware that its product, specifically its recommendation engine, stoked divisiveness and polarization, according to a new report from The Wall Street Journal. “Our algorithms exploit the human brain’s attraction to divisiveness,” one slide from the presentation read. The group found that if this core element of its recommendation engine were left unchecked, it would continue to serve Facebook users “more and more divisive content in an effort to gain user attention & increase time on the platform.” A separate internal report, crafted in 2016, said 64 percent of people who joined an extremist group on Facebook only did so because the company’s algorithm recommended it to them, the _WSJ_ reports.
Leading the effort to downplay these concerns and shift Facebook’s focus away from polarization has been Joel Kaplan, Facebook’s vice president of global public policy and former chief of staff under President George W. Bush. Kaplan is a controversial figure in part due to his staunch right-wing politics — he supported Supreme Court Justice Brett Kavanaugh throughout his nomination — and his apparent ability to sway CEO Mark Zuckerberg on important policy matters.
Ray Serrato documents the kind of dynamics that we should be aware of here. One false-flag tweet circulated by partisans gets far more exposure as evidence of the vileness of the people it purports to come from than does the belated take-down of that tweet.
Incoming think pieces
- Fact-Checking is Table Stakes
- Monetize Alternative Facts!
- Joan Donovan, Research Director of Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy, How Civil Society Can Combat Misinformation and Hate Speech Without Making It Worse.
- Facebook Is a Doomsday Machine
- Social penumbras predict political attitudes
- Twelve Angry Robots, Or Moderation And Its Discontents - Adam Elkus
- The Shanley Show: Was The Whole Thing An Elaborate Hoax?
- How the Far Right in Italy Is Manipulating Twitter and Discourse