I do not have time for this now, so I will say little apart from bookmarking items for future return. I would like a better notion of what systems of incentives we might submit ourselves to in order to be able to claim that our journalism is informing us about the world in the manner we need. This is related to the parallel problem in academia of how peer review should work. Verifying and prioritising news and events for public decision-making is expensive but essential; how can we make it happen most efficiently? Can the profit centres be shifted, e.g. from raving social-media conspiracy theory and pile-ons to investigation and debate?
Here endeth the framing. Now cometh some stuff.
Derek Debus, The Death-Rattle of Journalism, has a nice punchy explication which I would like to bookmark for its slightly macho posturing vibe. This register tracks well with some people.
Yet it's not just the rise of bad-faith shills of unreliable information that is to blame for this. It's no secret newspapers are dying: facts are expensive, opinions are cheap. So media titans started pushing more and more op-eds and making it less and less clear that they were opinions and not news. The media became less interested in publishing accurate information and more interested in getting that sweet, sweet ad revenue.
Nathan J. Robinson, The Truth Is Paywalled But The Lies Are Free
Ethan Zuckerman, The Case for Digital Public Infrastructure
Harnessing past successes in public broadcasting to build community-oriented digital tools
Not a complete solution, but there are some interesting ideas in scroll which bundles ad-less subscriptions to online media for readers.
The latest case study in media analysis is the COVID-19 media response, which I am told was especially maladaptive in the USA. Adam Elkus, The Fish rots from the head, gives a good summary/analysis.
It is an exaggeration to say that fringe weirdos on social media were often better informed than people who exclusively consulted mainstream sources, but not as much of an exaggeration as most would think. And that is not accidental. As Ben Thompson noted, the global COVID-19 response depended on an enormous amount of information developed and shared often in defiance of traditional media (which underrated and even mocked concern about the crisis) and even the Center for Disease Control (which attempted to suppress the critical Seattle Flu Study). The response still depends primarily on transnational networks and often must operate around rather than through official channels.
Taken together, all of this is astounding in both its scope and simultaneity. And it makes a mockery out of the cottage industry developed over the last few years to preserve our collective epistemic health.
Analysts obsessed for years and years over the threat of Russian bots and trolls and Macedonian teenagers to democratic institutions and public life, arguing that misinformation and propaganda spread via social networks would perturb the very fabric of reality and destroy the trust and cohesion necessary for liberal democracy to survive. This concern was responsive to the surface elements of deeper psychological and cultural changes, but it often was hindered by its emphasis on top-down control of computational platforms that eluded control at subjectively appropriate cost. Nonetheless, reasonable people could disagree about the response to the problem but not the actual implicit diagnosis. The diagnosis being that the unraveling of legacy institutions and their capacity to enforce at least the fiction of consensus over underlying facts and values about democratic authority was dangerous and no effort should be spared to fight it.
But as we have seen, these institutions are perfectly capable of unraveling themselves without much help from Russian bots and trolls and Macedonian teenagers. And if the fish rots from the head, then the counter-disinformation effort becomes actively harmful. It seeks to gentrify information networks that could offer layers of redundancy in the face of failures from legacy institutions. It is reliant on blunt and context-indifferent collections of bureaucratic and mechanical tools to do so. It leaves us with a situation in which complicated computer programs on enormous systems and overworked and overburdened human moderators censor information if it runs afoul of generalized filters but malicious politicians and malfunctioning institutions can circulate misleading or outright false information unimpeded. And as large content platforms are being instrumentalized by these same political and institutional entities to combat “fraud and misinformation,” this basic contradiction will continue to be heightened.
Barry Schnitt has the easy job of complaining about Facebook from the outside (therefore anything he says is ambiguously ‘concern trolling’), but his description of the incentives is tidy.
It has been said that a lie gets halfway around the world before the truth has a chance to put its pants on. Now, Facebook’s speed and reach make it more like a lie circles the globe a thousand times before the truth is even awake. This is no accident. Ironically, the one true conspiracy theory appears to be that malevolent nation-states, short-sighted politicians, and misguided interest groups are using conspiracy theories to deliberately misinform the public as a means of accomplishing their long-term strategic goals. The same could be said for those deliberately using incendiary and divisive language, which is similarly allowed to propagate through your system.
Unfortunately, I do not think it is a coincidence that the choices Facebook makes are the ones that allow the most content— the fuel for the Facebook engine— to remain in the system. I do not think it is a coincidence that Facebook’s choices align with the ones that require the least amount of resources, and the choices that outsource important aspects to third parties. I do not think it is a coincidence that Facebook’s choices appease those in power who have made misinformation, blatant racism, and inciting violence part of their platform. Facebook says, and may even believe, that it is on the side of free speech. In fact, it has put itself on the side of profit and cowardice.
Jonathan Stray, What tools do we have to combat disinformation?
Lead Stories uses the Trendolizer™ engine to detect the most trending stories from known fake news, satire and prank websites and tries to debunk them as fast as possible.
(I think this is publicity/loss leader for Trendolizer, a media buzz product.)
Undark is a non-profit, editorially independent digital magazine exploring the intersection of science and society. It is published with generous funding from the John S. and James L. Knight Foundation, through its Knight Science Journalism Fellowship Program in Cambridge, Massachusetts.
A recent scandal involving Undark shows why independent journalism is not enough; accountable journalism is required.
Crypto and journalism somehow
The (now defunct) Civil tried some exotic cryptocurrency-based options here.
We build tools to help readers discover and support trusted journalists around the world on a decentralized platform.
One of these tools was a blockchain instrument to publish, preserve and… judge(?) journalists. The explanation of how that latter part was supposed to work was never clear to me and I meant to look into it but they folded before I did.
Erasurebay comes at this from the other direction with respect to blockchainery: a blockchain company that wants to move into the information-brokerage business. Their explanation is characteristically abstruse nerdview, so I can’t imagine this going far until they hire someone with a communications degree to produce comprehensible prose for them.
Activist short sellers
Short-selling firms like Hindenburg Research, Viceroy Research, and Muddy Waters have earned outsize attention—and big windfalls—by shining a harsh light on overhyped companies.
Founded in 2017, Hindenburg specialises in publishing detailed reports about publicly traded companies, poking holes in their stories and alerting investors to potential malfeasance. The boom in special purpose acquisition companies has provided Hindenburg with fertile ground.
It is not an act of public service. Hindenburg, which has the backing of several investors, also makes financial bets that the stocks of the companies Mr Anderson is targeting will fall after the firm issues its research. When the stocks do fall, Hindenburg makes its money in what is called a "short" trade.
It is easy to find things to like about this model, and also easy to imagine it descending into infowars.
Hindenburg Research has a great business model:
- Investigate companies
- …until they find one that is committing fraud
- Short the fraudulent company
- Publicly reveal the fraud
- Company’s stock goes down
This is indeed interesting, although a cost-benefit analysis of finding actual fraud versus simply accusing companies of fraud and shorting them anyway would need to be done before I backed such a business. Also, if you had a business that weaponized baseless fraud accusations, I might be able to sell that service to competitors for some extra cash flow.
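That cost-benefit question can be made concrete with a toy expected-value model. Every probability, payoff, and cost below is an invented assumption for illustration, not a real figure about Hindenburg or anyone else:

```python
# Toy expected-value comparison of two short-selling strategies.
# All numbers are made-up assumptions for illustration only.

def expected_profit(p_success, payoff, cost, p_lawsuit=0.0, lawsuit_cost=0.0):
    """Expected profit per target company for one short campaign."""
    return p_success * payoff - cost - p_lawsuit * lawsuit_cost

# Honest strategy: expensive investigation, most targets yield nothing,
# but a confirmed fraud report reliably moves the stock.
honest = expected_profit(p_success=0.1, payoff=20e6, cost=1e6)

# Baseless strategy: near-zero research cost and more frequent "hits",
# but smaller price moves and real defamation/market-manipulation risk.
baseless = expected_profit(p_success=0.3, payoff=2e6, cost=0.1e6,
                           p_lawsuit=0.2, lawsuit_cost=10e6)

print(f"honest:   ${honest:,.0f}")
print(f"baseless: ${baseless:,.0f}")
```

Under these particular made-up numbers the honest strategy wins, but the ordering flips easily as the lawsuit risk or payoff assumptions change, which is exactly why the analysis would need doing.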
Slate Star Codex kerfuffle
Would you like an interesting core sample of current media dynamics? Try the Slate Star Codex kerfuffle.