On our known-terrible ability to know about our terrible inability to know about others. Our constant and reliable tendency to 1) think we know more about others than we do and 2) reliably fail to learn that we do not. This manifests in, e.g., pluralistic ignorance, the out-group homogeneity effect, the idiosyncratic rater effect, and many other biases I do not yet know names for.
Terminology note: The title here is a reference to the title of the influential paper (Kruger and Dunning 1999), “Unskilled and unaware of it”, rather than to their model. Interpreting the Dunning-Kruger effect model is a subtle affair, which I do not need to attempt here. In common parlance, what I want to evoke is knowing just enough to be dangerous but not enough to know your limits.
Here is a catalogue of limits I would like to take into account so that I can be less dangerous.
On evaluating others
The first problem with feedback is that humans are unreliable raters of other humans. Over the past 40 years psychometricians have shown in study after study that people don’t have the objectivity to hold in their heads a stable definition of an abstract quality, such as business acumen or assertiveness, and then accurately evaluate someone else on it. Our evaluations are deeply colored by our own understanding of what we’re rating others on, our own sense of what good looks like for a particular competency, our harshness or leniency as raters, and our own inherent and unconscious biases. This phenomenon is called the idiosyncratic rater effect, and it’s large (more than half of your rating of someone else reflects your characteristics, not hers) and resilient (no training can lessen it). In other words, the research shows that feedback is more distortion than truth.
The real question is why individuals often try to avoid feedback that provides them with more accurate knowledge of themselves.
On erroneously thinking my experience is universal
Another tendency of interest was explained neatly by Tanner Greer, for the particularly important case of opinion leaders and pundits, who are by definition especially likely to be detached from typical experience. Our think pieces about how the world works are likely to come from pundits with a public profile, that being what a public profile entails, and the world inhabited by such people is different from the world that the majority inhabit. We are getting our models of the world from people inside a special bubble of experiential bias.
This is the first difficulty that comes with a growing follower count on Twitter. As the count grows, the number of different communities you are projecting to grows as well. Soon, large numbers of people start to follow because they see you as a representative of a certain strain of thought, or as a key voice in a particular conversation they care about. They are not sympathetic to your ideas, or even merely intellectually interested in them; instead they follow you to keep tabs on what you and people like you are saying. Many actually despise you and your ideas to their core (in twitterese, they are a “hate follow”).
My friend Matthew Stinson described this shift as that point where “interactions stop being inquisitive and start getting accusatory. ‘Points for my side-ism’ becomes a real thing.” Twitter’s retweet mechanism makes this problem far worse. All one needs is a snarky RT for these people to take a thought they dislike and BOOM!, project it into communities it was never intended for as the perfect example of what they all should be hating at that moment.
Thus if you have a large follower count your experience on Twitter goes like this: you share a thought optimized for Group X. Members of Group Y, Group Z, and Group V automatically start sharing it as the textbook example of why Group X deserves crucifixion.
On erroneously thinking my experience is not universal
TBD. Out-group effects.
On understanding how others think
TODO: raid this for references: Why You’re Constantly Misunderstood on Slack (and How to Fix It).
The researchers first analyzed data from 2,374 individuals who participated in the 2016 American National Election Studies Time Series Survey, a nationally representative survey of U.S. citizens. As expected, liberals and conservatives were more likely to describe the opposing political party as uninformed, irrational, and/or biased compared to their own party.
Importantly, the researchers found that this was especially true among those with a higher socio-economic status. Among more liberal participants, higher status individuals displayed more naive realism toward Republicans. Among more conservative participants, higher status individuals displayed more naive realism toward Democrats.
In a follow-up experiment, the researchers experimentally manipulated people’s sense of status through an investment game. The study of 252 participants found that those who were randomly told they had performed “better than 89% of all players to date” were more likely to say that people who disagreed with their investment advice were biased and incompetent.
For ages my favourite go-to bias to think on here was the fundamental attribution error, which seems ubiquitous to me.
In social psychology, fundamental attribution error (FAE), also known as correspondence bias or attribution effect, is the tendency for people to under-emphasize situational explanations for an individual’s observed behavior while over-emphasizing dispositional and personality-based explanations for their behavior. This effect has been described as “the tendency to believe that what people do reflects who they are”.
Jacob Falkovich, in Is Rationalist Self-Improvement Real?, has ideas about the effectiveness of trying to be more rational (in more areas than theory of mind). TODO: question his apparent assumption that casual commenters on rationality blogs are measurably more committed to rationality than drive-by commenters on any other site. That rationality blogs attract or cultivate more rational individuals is a hypothesis to test, not a given.