Spontaneous order, local knowledge, strategic belief, and other castings of the relationship between belief, knowledge, and the social order. I have no original thoughts on this, but I like to keep links on this theme where I can see them so that they don’t bite me.
The function of belief in individuals
David Banks’s diatribe depicts a particular kind of strategic belief:
“[Radiolab recasts] the political as endlessly unresolved scientific controversies, and act as science concern trolls,” he claims. These “explainerist” nuggets of satisfying factiness: why are they popular? One answer might be that they are a good marker of membership in a tribe that likes a certain kind of cocktail conversation.
What kind of beliefs prosper in society? What is the function of our truth claims? When should you believe “true” things, and what are true things anyway? Are true things about the objects of science the same as true things about society?
Goal: find a way of navigating the pragmatic functions of belief that sidesteps the divisions in this anecdote:
I know this sounds like a story from some bad conservative novel, but it is not unheard of for rooms full of PhDs to applaud when someone says that, for example, witchcraft is just another way of knowledge and that disputing factual claims to its power is cultural hegemony.
To my ears it’s the emphases that make this sound uncomfortable, rather than the broad-stroke outline. On one hand, I think that empirical fact is special in having a reality independent of human existence. On the other hand, I don’t suppose any of our epistemological methods give us perfect access to the reality I posit. Having claimed my beliefs are not, with 100% certainty, raw and unmediated rays of truth, I have opened the door to negotiating how certain my beliefs are, and to admitting that other perspectives might have a point that I cannot dismiss a priori. I am all for admitting that our beliefs are uncertain and our categories subject to revision; otherwise why would I bother with statistics, which is my day job?
Also, how about beliefs that are not about facts as such? Does human knowledge transmission at large deal mostly in transmission of precise factual claims about reproducible experiments, or is there a whole bunch of other stuff going on with an indirect relationship to facts about gross physical reality, and some kind of active role in creating whatever passes for facts in the negotiated social reality?
Option B. We need tools to unpack the other propensities in the uses of the language around belief, and to disentangle what is going on with cheap talk and signalling. We do deploy belief in a variety of ways, often emotional, often figurative.1
How good are we at forming good facty beliefs? Scott Alexander’s irritating case study of bodybuilders suggests… not very?
Antonio García Martínez, in The Holy Church of Christ Without Christ, belabours the point that faith-based engagement is how we predominantly engage with the world. Or, as Herbert Simon and Eliezer Yudkowsky could have co-authored, belief is how a heuristic feels from the inside.
Belief and groups
Related: the levels-of-simulacra model is one attempt to dissect this. At the other end of that link is a nifty analysis using beliefs about COVID-19 as a test case. I find this analysis more powerful than bullshit-based analysis, which is a blunter tool (and one that tends to be used to claim that your opponent is doing it and not you).
Saying that something not explicitly religious has “become a religion” has become a mark of pseudo-cleverness. By “religious,” people tend to mean irrational, tribal, and devotional — and in aiming the adjective at phenomena they think little of, they show it is inherently pejorative. Tara Isabella Burton’s Strange Rites has the virtue of assuming humanity is sufficiently “religious” that such qualities, among others, defy transcendence. She asks not “is this religious” but “what does it mean for this to be religious.”[…]
He offers his own analysis of the dynamics here:
Participation in online communities requires far less personal commitment than those of real life. And commitment has often cloaked hypocrisy. Men could play the role of God-fearing family men in public, for example, while cheating on their wives and abusing their kids. Being a respectable member of their community depended, to a great extent, on being a family man, but being a respectable member of online right-wing communities depends only on endorsing the concept.
See also Movement design.
The rationality of the Great Society
Scott Alexander, Contra Weyl On Technocracy
🏗, quote Constantin, In defense of individualist culture, and Hayek’s “constructivist fallacy”, Timothy Morton’s Hyperobjects, and Berkes and Folke’s “local knowledge”, pragmatist notions of a belief’s “cash value”, local versus global truth, and all the other dissections of these problems, and wonder about idiosyncratic spontaneous group order etc. Discuss Social Capital and other economic framings as a method for making metis “legible”. The Master Currency displacing other possible currencies. Or, to have this phrased in a manner intelligible to management, Florent Crivello, The Efficiency-Destroying magic of tidying up. Contrast this with the Hanson opinion on the grab-for-power that invoking metis can mask:
Apparently most for-profit firms could make substantially more profits if only they’d use simple decision theory to analyze key decisions. Execs’ usual excuse is that key parameters are unmeasurable, but Hubbard argues convincingly that this is just not true.[…]
I say that their motives are more political: execs and their allies gain more by using other more flexible decision making frameworks for key decisions, frameworks with more wiggle room to help them justify whatever decision happens to favor them politically. Decision theory, in contrast, threatens to more strongly recommend a particular hard-to-predict decision in each case. As execs gain when the orgs under them are more efficient, they don’t mind decision theory being used down there. But they don’t want it up at their level and above, for decisions that say if they and their allies win or lose.
Sam Popowich’s invective Lawful Neutral re-invents some of these analyses, with “liberalism” assuming the role of the state in his version:
Liberalism — like the necessarily undemocratic capitalism for which it serves as an alibi — seeks to reduce human life to a predictable, exploitable, profitable minimum. It adopts the simplest social ontology (individualism) to make its formalism work, to make its algorithms or procedures appear universally applicable. Liberalism demands a strict division between form (the system of rules) and content (the messy details of social entanglement) to make reality tractable to its logic. Artificial intelligence likewise requires that its form (code) be separate from its content (data). Machines require the simplest possible data on which to work (binary numbers, for example) to make their procedures uniform and generalizable and the world computable.
He also equates the project of procedural AI with liberalism.
If AI can’t necessarily replicate human intelligence, it nevertheless precisely models the sort of intelligence needed to make liberalism coherent.
From there, he argues, the system of liberalism produces individualism and negates community, which is antithetical to the otherwise messy social world of humans. Not covered in his article: mass systems of governance which lack the asserted pathologies of liberalism and yet also produce great good for a great number. (I imagine he would suggest Marxism? Some kind of anarchism? Libertarian something?)
C&C Robin Hanson’s imagined dialogue about the role of simplification in models in science and engineering, and the application of reductionism to the study of people. (tl;dr: fraught, but the world is too complex to interact with unless you choose good simplifications with which to model it.)
Antidote, M. John Harrison:
I’m not interested in an embodied and localised knowledge. I had enough of it as a child in the early 1950s, among people whose top argument was, “Because I know better.” They didn’t want the NHS. They didn’t want vaccination. They didn’t want the kids that survived to waste their time on education. They didn’t want science. All sense was common sense: they were the well, and your role as a child was to drink what you were given. Anything else, from the welfare state to astrophysics, was a challenge to traditional hierarchies. After you’d tried, and had it drilled into you how worthless your fancy new ideas were, your ambition was to quickly and quietly exit their radius of control and enter the de-localised intellectual funfair of modernity, with its fantastically advanced concepts such as “abstract thought”. Your secondary ambition was to work on a politics that got shot of all that forever. That’s really what the 60s was about if you lived where I lived. It was a revolt about what kind of knowledge you could have of the world, and how you could get it. I never regretted running away from the Trump-like epistemics of postwar semi-industrialised, semi-rural England. All I regret is that we didn’t quite achieve full escape velocity and get rid of its limiting ideas forever, so they couldn’t crawl back and infect everything again. I never want to return to that nightmare, or sympathise with it, or “understand” it, or give it any more than this single paragraph of the oxygen of analysis.
Policy and Statistical learning
TODO. A brief digression on how legibility and management look as a statistical learning problem. We know that constructing policies is costly in data, and that administrative procedures rarely generate much data from repeated trials of what works. We also know that coming up with policies (in the machine-learning sense or the political one) is computationally challenging and data hungry. How does the need to bow to the ill-fitting bureaucracy of the Great Society resemble having to work with an underfit estimator of the optimal policy? What does that tell us about, e.g., optimal jurisprudence? Possibly something. Or possibly the metaphor doesn’t work; after all, what is the optimisation problem one solves?
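The underfitting metaphor can at least be made concrete in a toy simulation. Everything below is hypothetical and invented for illustration: a world where the truly optimal policy is band-shaped, and a “legible” policy family (a single threshold rule) that cannot express that shape, so no amount of data spent tuning it closes the gap. A minimal sketch:

```python
import random

random.seed(0)

# Hypothetical toy world: context x in [0, 1]; the truly optimal policy
# takes action 1 only when x falls in a middle band, 0 otherwise.
def reward(x, action):
    best = 1 if 0.3 < x < 0.7 else 0
    return 1.0 if action == best else 0.0

def average_reward(policy, n=10_000):
    # Monte Carlo estimate of a policy's expected reward.
    total = 0.0
    for _ in range(n):
        x = random.random()
        total += reward(x, policy(x))
    return total / n

# A "legible" policy family: one threshold rule, act iff x > t.
# This family cannot express the band-shaped optimum; it underfits
# no matter how much data we spend choosing t.
def threshold_policy(t):
    return lambda x: 1 if x > t else 0

# Grid-search for the best legible policy (the bureaucracy's optimum).
best_t = max(
    (t / 100 for t in range(101)),
    key=lambda t: average_reward(threshold_policy(t), n=2_000),
)

# A policy family rich enough to express the true structure.
def band_policy(x):
    return 1 if 0.3 < x < 0.7 else 0

r_band = average_reward(band_policy)                  # 1.0 by construction
r_thresh = average_reward(threshold_policy(best_t))   # capped well below 1.0
```

The best threshold rule here tops out around 0.7 expected reward while the flexible policy achieves 1.0: the gap is approximation error from the restricted policy class, not estimation error, which is one way of cashing out “ill-fitting bureaucracy” in statistical-learning terms.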
And in any case, scientists at their most precise and factual still use emotion and metaphor to do communicative work. That is, I suspect, practically unavoidable, or worse, avoiding it would be inefficient.↩︎