Metis and .*-rationality

High modernism, spontaneous order, legibility, the Great Society, technocracy, local knowledge, normies and sects



Spontaneous order, local knowledge, and other castings of the relationship between local inscrutability and totalising modernity in the social order. I have no original thoughts on this, but I like to keep links on this theme where I can see them so that they don’t bite me.

Muse about standards.

The rationality of the Great Society

Scott Alexander, Contra Weyl On Technocracy

A Big Little Idea Called Legibility.

🏗, quote Constantin, In Defense of Individualist Culture; Hayek’s “constructivist fallacy”; Timothy Morton’s Hyperobjects; Berkes and Folke’s “local knowledge”; pragmatist notions of a belief’s “cash value”; local versus global truth; and all the other dissections of these problems, and wonder about idiosyncratic spontaneous group order etc. Discuss Social Capital and other economic framings as methods for making metis “legible”, and the Master Currency displacing other possible currencies. Or, to have this phrased in a manner intelligible to management, Florent Crivello, The Efficiency-Destroying Magic of Tidying Up. Contrast this with Robin Hanson’s opinion on the grab for power that invoking metis can mask:

Apparently most for-profit firms could make substantially more profits if only they’d use simple decision theory to analyze key decisions. Execs’ usual excuse is that key parameters are unmeasurable, but Hubbard argues convincingly that this is just not true.[…]

I say that their motives are more political: execs and their allies gain more by using other more flexible decision making frameworks for key decisions, frameworks with more wiggle room to help them justify whatever decision happens to favor them politically. Decision theory, in contrast, threatens to more strongly recommend a particular hard-to-predict decision in each case. As execs gain when the orgs under them are more efficient, they don’t mind decision theory being used down there. But they don’t want it up at their level and above, for decisions that say if they and their allies win or lose.
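To make “simple decision theory” concrete, here is a toy Monte Carlo profit analysis in roughly the Hubbard style. The scenario and every number in it are invented for illustration; the point is only the shape of the procedure: put distributions on the allegedly unmeasurable parameters, simulate, and let expected value pick the option.

```python
# Toy decision analysis (invented scenario; not Hubbard's or Hanson's worked example).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # Monte Carlo draws

# Option A: launch. The "unmeasurable" key parameters get rough
# elicited distributions instead of point guesses.
demand = rng.lognormal(mean=np.log(50_000), sigma=0.5, size=n)  # units sold
margin = rng.normal(loc=4.0, scale=1.0, size=n)                 # $ per unit
profit_launch = demand * margin - 150_000                       # fixed launch cost

# Option B: hold. Status quo, zero incremental profit.
profit_hold = np.zeros(n)

print(f"E[profit | launch] = ${profit_launch.mean():,.0f}")
print(f"P(launch loses money) = {np.mean(profit_launch < 0):.2f}")
print("recommendation:",
      "launch" if profit_launch.mean() > profit_hold.mean() else "hold")
```

Note that the script emits one recommendation regardless of whose career it suits, which is exactly the property Hanson says executives avoid.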

Sam Popowich’s invective Lawful Neutral re-implements some of these analyses, with “liberalism” assuming the role of the state (I can’t remember which liberalism he is talking about specifically):

Liberalism — like the necessarily undemocratic capitalism for which it serves as an alibi — seeks to reduce human life to a predictable, exploitable, profitable minimum. It adopts the simplest social ontology (individualism) to make its formalism work, to make its algorithms or procedures appear universally applicable. Liberalism demands a strict division between form (the system of rules) and content (the messy details of social entanglement) to make reality tractable to its logic. Artificial intelligence likewise requires that its form (code) be separate from its content (data). Machines require the simplest possible data on which to work (binary numbers, for example) to make their procedures uniform and generalizable and the world computable.

He also equates the project of procedural AI with liberalism.

If AI can’t necessarily replicate human intelligence, it nevertheless precisely models the sort of intelligence needed to make liberalism coherent.

From there, he argues, the system of liberalism produces individualism and negates community, and is thereby antithetical to the otherwise messy social world of humans. Not covered in his article: mass systems of governance which lack the asserted pathologies of liberalism and yet also produce great good for a great number. (I imagine he would suggest Marxism? Some kind of anarchism? Libertarian something? Neofeudalism?)

C&C Robin Hanson’s imagined dialogue about the role of simplification in models in science and engineering, and the application of reductionism to the study of people. (tl;dr: fraught, but the world is too complex to interact with if you do not choose some simplification with which to model it, so choose simplifications based on opportunity costs.)

Antidote, M. John Harrison

I’m not interested in an embodied and localised knowledge. I had enough of it as a child in the early 1950s, among people whose top argument was, “Because I know better.” They didn’t want the NHS. They didn’t want vaccination. They didn’t want the kids that survived to waste their time on education. They didn’t want science. All sense was common sense: they were the well, and your role as a child was to drink what you were given. Anything else, from the welfare state to astrophysics, was a challenge to traditional hierarchies. After you’d tried, and had it drilled into you how worthless your fancy new ideas were, your ambition was to quickly and quietly exit their radius of control and enter the de-localised intellectual funfair of modernity, with its fantastically advanced concepts such as “abstract thought”. Your secondary ambition was to work on a politics that got shot of all that forever. That’s really what the 60s was about if you lived where I lived. It was a revolt about what kind of knowledge you could have of the world, and how you could get it. I never regretted running away from the Trump-like epistemics of postwar semi-industrialised, semi-rural England. All I regret is that we didn’t quite achieve full escape velocity and get rid of its limiting ideas forever, so they couldn’t crawl back and infect everything again. I never want to return to that nightmare, or sympathise with it, or “understand” it, or give it any more than this single paragraph of the oxygen of analysis.

George argues AI and automation are at odds:

… the vast majority of use-cases for AI, especially the flashy kind that behaves in a "human-like" way, might be just fixing coordination problems around automation.

AI, from this perspective, is something like “the computational overhead of metis”.

Thus we end up with rather complex jobs, where something like AGI could be necessary to fully replace the person. But at the same time, these jobs can be trivially automated if we redefine the role and take some of the fuzziness out.

A bartender robot is beyond the dreams of contemporary engineering. A cocktail-making machine, a conveyor belt (or drone) that delivers drinks, ordering and paying through a tablet on your table... beyond trivial.

I would like to return to this point. Is legibility just the simplest thing?

Tanner Greer in Xi Jinping’s War on Spontaneous Order argues that the “Common Prosperity” campaign may be understood as a war on late-stage capitalism:

The essential challenge facing any observer analyzing this campaign is this: what accounts for the target list? What do K-pop fan groups, after school tutoring companies, Meituan delivery men, online algorithms, plastic surgeons, overheated housing markets, celebrity ranking lists, and tech monopolies have in common? It is not sufficient to say that China “pivot[s] to the state” or proclaim that Xi Jinping “aims to rein in Chinese capitalism.” Xi Jinping is not reining in capitalism writ large; Beijing is not scrapping market mechanisms altogether. Semiconductor foundries, agricultural conglomerates, and Christmas light factories (to choose three examples of hundreds) have been untouched by Xi’s ‘common prosperity’ agenda. It is a very select slice of Chinese capitalism that is being “reined in.”

So what decides which industries must be reined in by state intervention while others remain unaffected? …I’ve personally adopted what has (thus far) been an effective rule of thumb for predicting which industries will get the axe: can one imagine a Brooklyn hipster describing said industries’ products or operations as an “artifact of late capitalism”? If the answer is “yes” then that industry is on the chopping block (Livestreamers, you are next).

Policy and statistical learning

TODO. Brief digression on how legibility and management look as a statistical learning problem. We know that constructing policies is costly in data, and we know that administrative procedures frequently do not generate much data from repeated trials of what works. We also know that coming up with policies (in either the machine-learning or the political sense) is computationally challenging and data hungry. How does the need to bow to the ill-fitting bureaucracy of the Great Society resemble having to work with an underfit estimator of the optimal policy? What does that tell us about, e.g., optimal jurisprudence? Possibly something. Or possibly the metaphor doesn’t work; after all, what is the optimisation problem one solves? A toy sketch follows.
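Here is a minimal toy rendering of that metaphor, my own construction rather than anything from the cited literature. A “policy” is just a map from context to action; we compare a low-capacity “legible” policy class (one threshold rule applied to everyone) against a high-capacity “local” one (imitate the nearest precedent) as the number of observed trials grows.

```python
# Legible vs. local policies as a statistical learning toy. Assumptions:
# contexts are scalar, and the optimal action for each past trial is known.
import numpy as np

rng = np.random.default_rng(0)

def optimal_action(x):
    # Ground truth: the right action depends on context in a fiddly,
    # multi-modal way -- a stand-in for metis.
    return (np.sin(12 * x) > 0).astype(int)

def fit_threshold_policy(x, a_star):
    # "Legible" policy class: one threshold rule, either direction,
    # applied uniformly to everyone. Chosen to minimise training error.
    best_err, best_rule = np.inf, None
    for t in np.linspace(0, 1, 201):
        for rule in (lambda xq, t=t: (xq > t).astype(int),
                     lambda xq, t=t: (xq <= t).astype(int)):
            err = np.mean(rule(x) != a_star)
            if err < best_err:
                best_err, best_rule = err, rule
    return best_rule

def fit_local_policy(x, a_star):
    # "Metis" policy: do whatever worked for the nearest precedent.
    def policy(xq):
        nearest = np.abs(xq[:, None] - x[None, :]).argmin(axis=1)
        return a_star[nearest]
    return policy

x_test = rng.uniform(size=5000)
a_test = optimal_action(x_test)
for n in (10, 100, 1000):
    x = rng.uniform(size=n)
    a_star = optimal_action(x)  # assume past trials revealed the best action
    for name, fit in (("threshold", fit_threshold_policy),
                      ("local", fit_local_policy)):
        acc = np.mean(fit(x, a_star)(x_test) == a_test)
        print(f"n={n:5d}  {name:9s}  mean reward {acc:.3f}")
```

On this toy the legible rule plateaus well short of optimal no matter how much data arrives, because its policy class cannot express the local structure, while the nearest-precedent policy keeps improving but needs many precedents. If the metaphor holds at all, bureaucratic rules are cheap in data and computation but bounded in fit.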

Lanier (2010) has a notion of “post-symbolic communication” as something that exists beyond the symbolic communication that modernity’s legibility favours; I suppose “pre-symbolic communication” would then sit in the metis regime.

Suppose we had the ability to morph at will, as fast as we can think. What sort of language might that make possible? Would it be the same old conversation, or would we be able to “say” new things to one another?

For instance, instead of saying, “I’m hungry; let’s go crab hunting,” you might simulate your own transparency so your friends could see your empty stomach, or you might turn into a video game about crab hunting so you and your compatriots could get in a little practice before the actual hunt.

I call this possibility “post symbolic communication.” It can be a hard idea to think about, but I find it enormously exciting. It would not suggest an annihilation of language as we know it—symbolic communication would continue to exist—but it would give rise to a vivid expansion of meaning.

This is an extraordinary transformation that people might someday experience. We’d then have the option of cutting out the “middleman” of symbols and directly creating shared experience. A fluid kind of concreteness might turn out to be more expressive than abstraction.

In the domain of symbols, you might be able to express a quality like “redness.” In postsymbolic communication, you might come across a red bucket. Pull it over your head, and you discover that it is cavernous on the inside. Floating in there is every red thing: there are umbrellas, apples, rubies, and droplets of blood. The red within the bucket is not Plato’s eternal red. It is concrete. You can see for yourself what the objects have in common. It’s a new kind of concreteness that is as expressive as an abstract category.

This is perhaps a dry and academic-sounding example. I also don’t want to pretend I understand it completely. Fluid concreteness would be an entirely new expressive domain. It would require new tools, or instruments, so that people could achieve it.

I imagine a virtual saxophone-like instrument in virtual reality with which I can improvise both golden tarantulas and a bucket with all the red things. If I knew how to build it now, I would, but I don’t.

I consider it a fundamental unknown whether it is even possible to build such a tool in a way that would actually lift the improviser out of the world of symbols. Even if you used the concept of red in the course of creating the bucket of all red things, you wouldn’t have accomplished this goal.

I spend a lot of time on this problem. I am trying to create a new way to make software that escapes the boundaries of preexisting symbol systems. This is my phenotropic project.

The point of the project is to find a way of making software that rejects the idea of the protocol. Instead, each software module must use emergent generic pattern-recognition techniques—similar to the ones I described earlier, which can recognize faces—to connect with other modules. Phenotropic computing could potentially result in a kind of software that is less tangled and unpredictable, since there wouldn’t be protocol errors if there weren’t any protocols. It would also suggest a path to escaping the prison of predefined, locked-in ontologies like MIDI in human affairs.

I am not convinced, for reasons I might go into at some point.

References

Bernhard, Helen, Urs Fischbacher, and Ernst Fehr. 2006. “Parochial Altruism in Humans.” Nature 442 (7105): 912–15.
Bowles, Samuel, and Herbert Gintis. 2002. “Social Capital and Community Governance.” The Economic Journal 112 (483): F419–36.
Gintis, Herbert, Eric Smith, and Samuel Bowles. 2001. “Costly Signaling and Cooperation.” Journal of Theoretical Biology 213 (1): 103–19.
Hayek, Friedrich. 1979. Law, Legislation and Liberty. Vol. 3. London: Routledge and Kegan Paul.
———. n.d. The Political Order of a Free People. London: Routledge and Kegan Paul.
Hayek, Friedrich A. 1945. “The Use of Knowledge in Society.” The American Economic Review 35 (4): 519–30.
———. 1988. The Fatal Conceit: The Errors of Socialism. Vol. 1. Routledge.
———. 1996. Individualism and Economic Order. University of Chicago Press.
———. 2001. The Road to Serfdom. Routledge.
Henrich, Joseph, Robert Boyd, Samuel Bowles, Colin Camerer, Ernst Fehr, Herbert Gintis, Richard McElreath, et al. 2005. “‘Economic Man’ in Cross-Cultural Perspective: Behavioral Experiments in 15 Small-Scale Societies.” Behavioral and Brain Sciences 28: 795.
James, William. 2004a. Pragmatism: A New Name for Some Old Ways of Thinking.
———. 2004b. The Meaning of Truth.
———. 2008. The Will to Believe, and Other Essays in Popular Philosophy.
Lanier, Jaron. 2010. You Are Not a Gadget: A Manifesto. 1st ed. New York: Alfred A. Knopf.
Ostrom, Elinor. 1990. Governing the Commons: The Evolution of Institutions for Collective Action (Political Economy of Institutions and Decisions). Cambridge University Press.
———. 1992. “The Rudiments of a Theory of the Origins, Survival, and Performance of Common Property Institutions.” Making the Commons Work: Theory, Practice and Policy.
———. 1998. “A Behavioral Approach to the Rational Choice Theory of Collective Action.” The American Political Science Review 92: 1–22.
———. 2000. “Collective Action and the Evolution of Social Norms.” The Journal of Economic Perspectives 14: 137–58.
Panarchy: Understanding Transformations in Human and Natural Systems. 2001. Edited by Lance H. Gunderson and C. S. Holling. Washington, DC: Island Press.
Scott, James C. 1998. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven: Yale University Press.
