Dunning-Kruger theory of mind

On anthropomorphising humans

On our known-terrible ability to know about our terrible inability to know about others: our constant tendency to 1) think we know more about others than we do and 2) reliably fail to learn that we do not. This manifests in, e.g., pluralistic ignorance, the out-group homogeneity effect, the idiosyncratic rater effect, and many other biases I do not yet know names for.

Terminology note: the title here references the title of the influential paper “Unskilled and Unaware of It” (Kruger and Dunning 1999), rather than their model. Interpreting the Dunning–Kruger effect model is a subtle affair, which I do not need to do here. What I want to evoke is the common-parlance sense of knowing just enough to be dangerous but not enough to know your limits.

Here is a catalogue of limits I would like to take into account so that I can be less dangerous.

On evaluating others

The feedback fallacy:

The first problem with feedback is that humans are unreliable raters of other humans. Over the past 40 years psychometricians have shown in study after study that people don’t have the objectivity to hold in their heads a stable definition of an abstract quality, such as business acumen or assertiveness, and then accurately evaluate someone else on it. Our evaluations are deeply colored by our own understanding of what we’re rating others on, our own sense of what good looks like for a particular competency, our harshness or leniency as raters, and our own inherent and unconscious biases. This phenomenon is called the idiosyncratic rater effect, and it’s large (more than half of your rating of someone else reflects your characteristics, not hers) and resilient (no training can lessen it). In other words, the research shows that feedback is more distortion than truth.

The real question is why individuals often try to avoid feedback that provides them with more accurate knowledge of themselves.
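
As a toy illustration of the idiosyncratic rater effect quoted above (my own sketch, not the psychometric methodology of the studies in question; all variances are made up for illustration), suppose each rating is the sum of the ratee’s true quality, a rater-specific offset (the rater’s personal standard, harshness, etc.), and noise. Giving the rater offset more variance than the true quality encodes the claim that more than half of a rating reflects the rater, not the ratee:

```python
import numpy as np

rng = np.random.default_rng(0)
n_raters, n_ratees = 200, 50

# True quality of each ratee (the signal ratings are supposed to capture).
quality = rng.normal(0.0, 1.0, size=n_ratees)

# Each rater's idiosyncratic offset. Its variance (1.5**2) is larger than
# the quality variance (1.0), per the "more than half" claim.
rater_bias = rng.normal(0.0, 1.5, size=n_raters)

noise = rng.normal(0.0, 0.5, size=(n_raters, n_ratees))
ratings = quality[None, :] + rater_bias[:, None] + noise

# Rough variance decomposition: averaging over raters isolates the ratee
# signal; averaging over ratees isolates the rater signal.
var_ratee = ratings.mean(axis=0).var()
var_rater = ratings.mean(axis=1).var()
total = var_ratee + var_rater + noise.var()

print(f"share of variance from the ratee: {var_ratee / total:.2f}")
print(f"share of variance from the rater: {var_rater / total:.2f}")
```

The point of the sketch: even with many raters, any single rating is dominated by who did the rating, and no amount of training of raters changes the decomposition, only averaging across raters does.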

On erroneously thinking my experience is universal

Another tendency of interest was explained neatly by Tanner Greer for the particularly important case of opinion leaders and pundits, who are by definition especially likely to be detached from typical experience. Our think pieces about how the world works come, almost by definition, from pundits with a public profile, and the world such people inhabit is different from the world the majority inhabit. We are getting our models of the world from people in a special bubble of experiential bias.

This is the first difficulty that comes with a growing follower count on Twitter. As the count grows, the number of different communities you are projecting to grows as well. Soon, large numbers of people start to follow because they see you as a representative of a certain strain of thought, or as a key voice in a particular conversation they care about. They are not sympathetic to your ideas or even merely intellectually interested in them; instead they follow you to keep tabs on what you and people like you are saying. Many actually despise you and your ideas to their core (in twitterese, they are a “hate follow”).

My friend Matthew Stinson described this shift as that point where “interactions stop being inquisitive and start getting accusatory. ‘Points for my side-ism’ becomes a real thing.” Twitter’s retweet mechanism makes this problem far worse. All one needs is a snarky RT for these people to take a thought they dislike and BOOM!, project it into communities it was never intended for as the perfect example of what they all should be hating at that moment.

Thus if you have a large follower count your experience on Twitter goes like this: you share a thought optimized for Group X. Members of Group Y, Group Z, and Group V automatically start sharing it as the textbook example of why Group X deserves crucifixion.

On erroneously thinking my experience is not universal

TBD. Out-group effects.

On understanding how others think

Todo: raid this for references: Why You’re Constantly Misunderstood on Slack (and How to Fix It).

People of higher status are more likely to think that those who disagree with them are stupid or biased — even when their high status is the result of a random process. (Brown-Iannuzzi et al. 2020)

The researchers first analyzed data from 2,374 individuals who participated in the 2016 American National Election Studies Time Series Survey, a nationally representative survey of U.S. citizens. As expected, liberals and conservatives were more likely to describe the opposing political party as uninformed, irrational, and/or biased compared to their own party.

Importantly, the researchers found that this was especially true among those with a higher socio-economic status. Among more liberal participants, higher status individuals displayed more naive realism toward Republicans. Among more conservative participants, higher status individuals displayed more naive realism toward Democrats.

In a follow-up experiment, the researchers experimentally manipulated people’s sense of status through an investment game. The study of 252 participants found that those who were randomly told they had performed “better than 89% of all players to date” were more likely to say that people who disagreed with their investment advice were biased and incompetent.

For ages my favourite go-to bias to think on here was the fundamental attribution error, which seems ubiquitous to me.

In social psychology, fundamental attribution error (FAE), also known as correspondence bias or attribution effect, is the tendency for people to under-emphasize situational explanations for an individual’s observed behavior while over-emphasizing dispositional and personality-based explanations for their behavior. This effect has been described as “the tendency to believe that what people do reflects who they are”.

Apparently it has been put in doubt, though? (Epstein and Teraspulsky 1986; Malle 2006) TODO: investigate.


Jacob Falkovich, in Is Rationalist Self-Improvement Real?, has ideas about the effectiveness of trying to be more rational (in more areas than theory of mind). TODO: question his apparent assumption that casual commenters on rationality blogs are measurably more committed to rationality than drive-by commenters on any other site. That rationality blogs attract or cultivate more rational individuals is a hypothesis to test.

If we were machines would we understand other machines this badly?

TBD. Consider minds as machine learners and see what that tells us about learners that learn to model other learners. Possibly that means predictive coding.
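
One toy way to make the machine framing concrete (my own sketch, not a result from the literature): fit a model of another “mind” from observations confined to a narrow band of situations, the machine analogue of the experiential bubble above, and watch the model fail outside that band:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Teacher" mind: a nonlinear policy mapping situations to responses.
def teacher(x):
    return np.tanh(3 * x)

# The "observer" mind only ever sees the teacher in a narrow band of
# situations (its bubble of experience) and fits a linear model there.
x_seen = rng.uniform(-0.2, 0.2, size=100)
slope, intercept = np.polyfit(x_seen, teacher(x_seen), 1)

# Inside the bubble the model of the other mind looks excellent...
inside = np.linspace(-0.2, 0.2, 50)
err_inside = np.abs(slope * inside + intercept - teacher(inside)).max()

# ...outside the bubble it breaks down badly.
outside = np.linspace(1.0, 2.0, 50)
err_outside = np.abs(slope * outside + intercept - teacher(outside)).max()

print(f"max error inside bubble:  {err_inside:.3f}")
print(f"max error outside bubble: {err_outside:.3f}")
```

Nothing here is specific to minds, of course; it is the ordinary failure of extrapolation from a biased sample, which is arguably the point.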


Acemoglu, Daron, Victor Chernozhukov, and Muhamet Yildiz. 2006. “Learning and Disagreement in an Uncertain World.” Working Paper 12648. Working Paper Series. National Bureau of Economic Research. https://doi.org/10.3386/w12648.
Brewer, Marilynn B. 1993. “Social Identity, Distinctiveness, and In-Group Homogeneity.” Social Cognition 11 (1): 150–64. https://doi.org/10.1521/soco.1993.11.1.150.
Brown-Iannuzzi, Jazmin L., Kristjen B. Lundberg, Aaron C. Kay, and B. Keith Payne. 2020. “A Privileged Point of View: Effects of Subjective Socioeconomic Status on Naïve Realism and Political Division.” Personality and Social Psychology Bulletin, May, 0146167220921043. https://doi.org/10.1177/0146167220921043.
Byron, Kristin. 2008. “Carrying Too Heavy a Load? The Communication and Miscommunication of Emotion by Email.” The Academy of Management Review 33 (2): 309–27. https://doi.org/10.2307/20159399.
Epstein, Seymour, and Laurie Teraspulsky. 1986. “Perception of Cross-Situational Consistency.” Journal of Personality and Social Psychology 50 (6): 1152–60. https://doi.org/10.1037/0022-3514.50.6.1152.
Kruger, Justin, and David Dunning. 1999. “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” Journal of Personality and Social Psychology 77: 1121–34. https://doi.org/10.1037/0022-3514.77.6.1121.
Kruger, Justin, Nicholas Epley, Jason Parker, and Zhi-Wen Ng. 2005. “Egocentrism over e-Mail: Can We Communicate as Well as We Think?” Journal of Personality and Social Psychology 89 (6): 925–36. https://doi.org/10.1037/0022-3514.89.6.925.
Lambert, Tracy A., Arnold S. Kahn, and Kevin J. Apple. 2003. “Pluralistic Ignorance and Hooking Up.” The Journal of Sex Research 40 (2): 129–33. https://doi.org/10.1080/00224490309552174.
Laubert, Christoph, and Jennifer Parlamis. 2019. “Are You Angry (Happy, Sad) or Aren’t You? Emotion Detection Difficulty in Email Negotiation.” Group Decision and Negotiation 28 (2): 377–413. https://doi.org/10.1007/s10726-018-09611-4.
Malle, Bertram F. 2006. “The Actor-Observer Asymmetry in Attribution: A (Surprising) Meta-Analysis.” Psychological Bulletin 132 (6): 895–919. https://doi.org/10.1037/0033-2909.132.6.895.
O’Gorman, Hubert J., and Stephen L. Garry. 1976. “Pluralistic Ignorance: A Replication and Extension.” Public Opinion Quarterly 40 (4): 449–58. https://doi.org/10.1086/268331.
Prentice, Deborah A., and Dale T. Miller. 1996. “Pluralistic Ignorance and the Perpetuation of Social Norms by Unwitting Actors.” In Advances in Experimental Social Psychology, edited by Mark P. Zanna, 28:161–209. Academic Press. https://doi.org/10.1016/S0065-2601(08)60238-5.
Simonovits, Gábor, Gábor Kézdi, and Péter Kardos. 2017. “Seeing the World Through the Other’s Eye: An Online Intervention Reducing Ethnic Prejudice.” American Political Science Review, November, 1–8. https://doi.org/10.1017/S0003055417000478.
Willer, Robb, Ko Kuwabara, and Michael W. Macy. 2009. “The False Enforcement of Unpopular Norms.” American Journal of Sociology 115 (2): 451–90. https://doi.org/10.1086/599250.
