On measuring the minds of people and possibly even discovering something about them thereby.

Causality and confounding

Some attempts to measure people’s minds end up tautological rather than explanatory. @BatesonMind2002 calls this a dormitive principle problem.

A common form of empty explanation is the appeal to what I have called “dormitive principles,” borrowing the word dormitive from Molière. There is a coda in dog Latin to Molière's Le Malade Imaginaire, and in this coda, we see on the stage a medieval oral doctoral examination. The examiners ask the candidate why opium puts people to sleep. The candidate triumphantly answers, "Because, learned doctors, it contains a dormitive principle."

Aside: he also has a solution which looks a lot like a causal graph.

A better answer to the doctors' question would involve, not the opium alone, but a relationship between the opium and the people. In other words, the dormitive explanation actually falsifies the true facts of the case but what is, I believe, important is that dormitive explanations still permit abduction. Having enunciated a generality that opium contains a dormitive principle, it is then possible to use this type of phrasing for a very large number of other phenomena. We can say, for example, that adrenaline contains an enlivening principle and reserpine a tranquilizing principle. This will give us, albeit inaccurately and epistemologically unacceptably, handles with which to grab at a very large number of phenomena that appear to be formally comparable. And, indeed, they are formally comparable to this extent, that invoking a principle inside one component is in fact the error that is made in every one of these cases.
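Bateson’s relational fix can be caricatured as a comparison between two tiny causal graphs: the dormitive one inserts a latent node that merely relabels the outcome, while the relational one makes sleep depend on an interaction between the substance and the person. A toy sketch (all node names are my own invention, not Bateson’s):

```python
# Two toy "causal graphs" as parent -> children adjacency maps.
# The dormitive graph relabels the effect as a latent property of the
# opium alone; the relational graph models an interaction between
# substance and subject, as Bateson suggests.

dormitive = {
    "opium": ["dormitive principle"],
    "dormitive principle": ["sleep"],
}

relational = {
    "opium": ["receptor binding"],
    "person's physiology": ["receptor binding"],
    "receptor binding": ["sleep"],
}

def parents(graph, node):
    """Return the set of direct causes of `node` in a parent->children map."""
    return {p for p, children in graph.items() if node in children}

# In the dormitive graph the sole cause of sleep is defined by its
# effect; in the relational graph the proximate cause of sleep itself
# has multiple parents, so the explanation is no longer circular.
```

The point of the caricature is structural: the dormitive graph is a chain through a node with no independent meaning, whereas the relational graph has a collider that lets the explanation generalise (different people, different doses).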

How do other nebulous univariate influences, such as the teamwork factor (Weidmann and Deming 2020), differ, if at all?


Various links on the g-factor kerfuffle, by which I mean the idea that there is a useful, simple, explanatory, heritable, identifiable, falsifiable, scalar, consistent measure of human cognitive capacity. Closely coupled with IQ, which is supposed to measure some approximation of it.

I know little about this concept. FWIW, my intuition is that, at bare minimum, a more interesting question would be about psychometric nonparametric dimension reduction, if the goal is to measure what humans can do. And it would involve a task-specific predictive loss function.
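To make that contrast concrete, here is a toy sketch (entirely synthetic data, all names mine) of the difference between a one-number summary of test scores and a task-specific predictive loss. The scores are generated from two latent abilities, so a single scalar summary is necessarily lossy, and a downstream task that leans on the second ability can prefer keeping more dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scores for 500 people on 10 tests, driven by TWO latent
# abilities (so any single scalar summary is necessarily lossy).
n, k = 500, 10
latent = rng.normal(size=(n, 2))
loadings = rng.normal(size=(2, k))
scores = latent @ loadings + rng.normal(scale=0.5, size=(n, k))

# A hypothetical downstream task outcome depending mostly on the
# SECOND latent ability -- the one a scalar summary may discard.
task = latent[:, 1] + rng.normal(scale=0.3, size=n)

# Dimension reduction via SVD (principal components of centred scores).
X = scores - scores.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)

def task_r2(n_components):
    """In-sample R^2 of a least-squares prediction of `task` from the
    first `n_components` principal-component scores."""
    Z = U[:, :n_components] * S[:n_components]
    Z1 = np.column_stack([np.ones(n), Z])
    beta, *_ = np.linalg.lstsq(Z1, task, rcond=None)
    resid = task - Z1 @ beta
    return 1 - resid.var() / task.var()

# The first component is a convenient scalar summary, but judged by a
# task-specific loss it can lose to a two-dimensional representation.
r2_one, r2_two = task_r2(1), task_r2(2)
```

The moral is the obvious one: which reduction is "best" depends on the loss you evaluate it under, not on the variance it soaks up.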

But lots of arguments in the public sphere are not about that. They seem to be about whether human mental capacities are in some sense univariate and linear, which is a laughably restricted hypothesis to test in this golden age of machine learning, and of richly parameterised models of every other part of biology. I am sure there must be a reason for that poverty. Is this a straw man set up by opponents? To an outsider like myself it resembles watching a schism in the automotive industry about whether coal-powered cars are better than wood-powered cars. Why this particular question? What is the context? Is the desired result of this modeling choice flexibility of application or robustness of inference, or to salvage something from a degenerate research program?

Maybe the reason for my confusion is that the most prominent voices arguing over this have the least nuanced arguments, precisely because they gain prominence through a toxoplasma-of-rage irritation effect. Maybe if I dug down into a proper backgrounder I would find much more of interest here? The problem is that I care about the question primarily as something I might need to understand if I am caught up in a shouting match about it, and that latter eventuality has not yet arisen.

Shalizi’s \(g\), a Statistical Myth, Dalliard’s rejoinder (perhaps best read with their intro to the background which is written with a true fan’s dedication). Zack Davis’ Univariate fallacy is a useful framing for both of the above. IQ defined. The philosophical coda to M. Taylor Saotome-Westlake’s Book Review: Charles Murray's Human Diversity: The Biology of Gender, Race, and Class has some analysis of the co-ordination-on-belief problem which is another angle on why discourse on g-factors is vexed.

Nassim Taleb is aggressive as usual: IQ is largely a pseudoscientific swindle. Steve Hsu has a different take. I feel like if I had time I might want to take apart those last two articles side by side and see where they talk across each other, because they seem to exemplify a common pattern.

Big-5 personality traits

How are they supposed to work? How did they choose 5? Would like to know 🤷‍♂
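My understanding (which I hold loosely) is that the five comes out of factor-analysing large banks of self-report items and counting how many factors the data seem to support, e.g. via the Kaiser eigenvalue-greater-than-one rule or a scree plot. A toy sketch of that selection step, on synthetic data where we know the true answer (these are not real personality items):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic questionnaire: 1000 respondents, 20 items, generated from
# 5 latent traits, each trait driving its own block of 4 items.
# (Purely illustrative -- not real Big-5 data.)
n_people, n_traits, items_per_trait = 1000, 5, 4
traits = rng.normal(size=(n_people, n_traits))
# Loading matrix: trait t loads with weight 1 on its own 4 items only.
loadings = np.kron(np.eye(n_traits), np.ones((1, items_per_trait)))
items = traits @ loadings + rng.normal(size=(n_people, n_traits * items_per_trait))

# Kaiser criterion: retain factors whose eigenvalue of the item
# correlation matrix exceeds 1 (the variance of one standardised item).
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending
n_retained = int((eigvals > 1.0).sum())
# With this clean block structure the rule recovers the 5 latent traits.
```

Of course the whole kerfuffle above applies here too: the eigenvalue rule counts factors under a linear model with a fixed threshold, and on messy real item banks different retention rules (scree, parallel analysis) can disagree, which is part of why "why exactly five?" is a fair question.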

Bateson, Gregory. 2002. Mind and Nature: A Necessary Unity. New ed. Cresskill, NJ: Hampton Press.

Duckworth, Angela Lee, Patrick D. Quinn, Donald R. Lynam, Rolf Loeber, and Magda Stouthamer-Loeber. 2011. “Role of Test Motivation in Intelligence Testing.” Proceedings of the National Academy of Sciences 108 (19): 7716–20.

Heene, Moritz. 2008. “A Rejoinder to Mackintosh and Some Remarks on the Concept of General Intelligence,” August.

Jonas, Eric, and Konrad Paul Kording. 2017. “Could a Neuroscientist Understand a Microprocessor?” PLOS Computational Biology 13 (1): e1005268.

Weidmann, Ben, and David J Deming. 2020. “Team Players: How Social Skills Improve Group Performance.” Working Paper 27071. Working Paper Series. National Bureau of Economic Research.