Intentional language is ok

The point of teleology; cognitive ergonomics of anthropomorphism

2026-01-23 — 2026-02-04

Wherein the Wason cards are turned, the beer and the sixteen-year-old are examined, and intentional terms are employed as a stance for predicting machines, with overshoot into pareidolia noted.

adaptive
adversarial
bounded compute
collective knowledge
cooperation
culture
distributed
economics
ethics
evolution
extended self
gene
incentive mechanisms
institutions
mind
networks
neuron
rhetoric
snarks
social graph
sociology
wonk
Figure 1: Each card has a number on one side and a patch of colour on the other. Which card or cards must be turned over to test the idea that if a card shows an even number on one face, then its opposite face is red?

The Wason selection task is a logic puzzle with a set of cards.

Figure 2: Each card has a number on one side and a patch of colour on the other. Which card or cards must be turned over to test the idea that if a card shows an even number on one face, then its opposite face is red?

In its abstract form, the task is notoriously difficult. To test the rule “If P, then Q,” logic says we must check the P card (to see whether its hidden face is Q) and the not-Q card (to see whether its hidden face is P). Most people fail to flip the not-Q card; instead they tend to flip the Q card, which can confirm the rule but never falsify it, and so fall prey to confirmation bias.

However, when the task is framed as a social problem rather than a logic problem, we suddenly get good at it:

Figure 3: Each card has an age on one side and a drink on the other. Which card(s) must be turned over to test the idea that if you are drinking alcohol, then you must be over 18?

Now the logic is intuitive. Almost everyone knows to check the beer drinker (P) and the 16-year-old (not-Q).
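
For the code-minded, here is a minimal sketch making the isomorphism explicit (my own toy, not from any of the cited papers; the specific card values and predicate names are invented for illustration). Both versions of the task are the same falsification check, and the only thing that changes is which predicates play the roles of P and Q:

```python
# Toy Wason-task checker. A card must be turned over exactly when its visible
# face could belong to a counterexample to "if P then Q": it shows P (so the
# hidden face might be not-Q), or it shows not-Q (so the hidden face might be P).
# Each predicate returns None when it does not apply to that kind of face.

def cards_to_flip(cards, is_p, is_q):
    return [face for face in cards if is_p(face) is True or is_q(face) is False]

# Abstract version: P = "shows an even number", Q = "red on the other side".
abstract = dict(
    cards=[3, 8, "red", "brown"],                                # assumed card values
    is_p=lambda f: f % 2 == 0 if isinstance(f, int) else None,
    is_q=lambda f: f == "red" if isinstance(f, str) else None,
)

# Bar version: P = "drinking alcohol", Q = "over 18".
bar = dict(
    cards=["beer", "lemonade", 25, 16],                          # assumed card values
    is_p=lambda f: f == "beer" if isinstance(f, str) else None,
    is_q=lambda f: f > 18 if isinstance(f, int) else None,
)

print(cards_to_flip(**abstract))  # [8, 'brown']
print(cards_to_flip(**bar))       # ['beer', 16]
```

Whatever the framing, the answer is always the P card and the not-Q card; the bar version changes not the logic but how easily our heads execute it.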

Why is the second task so much easier? Evolutionary psychologists like Leda Cosmides and John Tooby (Cosmides and Tooby 1992, 1994) argue that human beings haven’t evolved to be general-purpose logic processors. Instead, we’ve evolved specialized “modules” for social exchange; our brains are optimized for reasoning about social contracts, intentions, and agency.

For our ancestors, survival depended on things like the ability to detect “cheaters”: those who take a benefit without paying the cost or following the rule. So it is reasonable to suppose that we tend to be especially good at reasoning about the intentions, obligations, and transgressions of other agents.

1 The Intentional Stance

If our most powerful cognitive hardware is dedicated to social reasoning, it follows that we might reason better about complex systems when we treat them as social agents.

This is what philosopher Daniel Dennett calls The Intentional Stance. Dennett argues (Dennett 1998) that there are three ways to predict the behaviour of a system:

  1. The Physical Stance: Predicting based on physics and chemistry (e.g., “The water will boil because of heat”).
  2. The Design Stance: Predicting based on how something was built (e.g., “The alarm will go off because the timer is set”).
  3. The Intentional Stance: Predicting based on what the system “wants” or “believes” (e.g., “The chess computer is trying to protect its Queen”).

While the physical stance is often claimed to be the most “accurate”, it is kinda unwieldy. If we want to predict how a Large Language Model (LLM) or a complex algorithm will behave, talking about it purely algorithmically is clunky and unintuitive (e.g., “The model will output token T with probability P given context C because of the weights W and the training data D…”). The design stance is better, but still verbose (e.g., “The model is designed to predict the next token based on patterns in its training data…”). But saying “The chess computer is trying to protect its Queen”, in an intentional style, seems pretty compact.
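
To make the contrast concrete, here is a toy sketch (entirely my own, with a hypothetical thermostat and invented coefficients; it is not an example from Dennett) answering the same question, “roughly where will the room temperature settle?”, from each stance:

```python
# Toy illustration: predicting a hypothetical thermostat-controlled room from
# each of Dennett's three stances. All numbers are made up.

def physical_stance(room, outside, minutes, setpoint=20.0, band=0.5):
    """Integrate a crude heat-balance model, including the switching physics."""
    heater = False
    for _ in range(minutes):
        if room < setpoint - band:
            heater = True
        elif room > setpoint + band:
            heater = False
        room += 0.5 * heater - 0.02 * (room - outside)  # invented gain and loss rates
    return room

def design_stance(setpoint=20.0):
    """It was built as a setpoint controller, so it will hold roughly the setpoint."""
    return setpoint

def intentional_stance():
    """It 'wants' the room at 20 degrees, so the room will end up at about 20 degrees."""
    return 20.0

print(physical_stance(room=16.0, outside=5.0, minutes=120))  # about 20, the hard way
print(design_stance(), intentional_stance())                 # 20.0 20.0
```

The intentional one-liner says nothing about mechanism, but it is the cheapest description to state and to carry around, which is the ergonomic point.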

FWIW, I’m not even convinced the intentional stance is especially “inaccurate” as such; we don’t really know what intentions are for general physical systems.

But! Social convention says we don’t get to use intentional language, at least for some classes of objects. If I talk in a biology class about dogs as thinking, or on the internet about LLMs as thinking, then I’m likely to be accused of anthropomorphism: the attribution of human traits to non-human things. This is commonly regarded as a “category error” or a sign of intellectual laziness.

OTOH, the Wason task suggests that anthropomorphism might actually be a form of cognitive ergonomics. Maybe I see intent everywhere because it helps me think better about systems? By using intentional language—words like wants, knows, believes, tries—we are “porting” a complex technical problem into our brain’s most high-performance processor: the social reasoning module.

Just as we find it easier to spot a “cheater” in a bar than a logic error on a card, we find it easier to debug a system by imagining its “intentions.” Admitting intentional language into our technical descriptions isn’t necessarily a claim that the machine is “conscious”; rather, it is an admission that we are human and we think best when we pretend the world is looking back at us.

Maybe even our reasoning about other humans is anthropomorphizing in the same sense? I am unsure about this, but I cannot find a crisp way of making the distinction.

2 Overshoot

What does it look like when we lean too hard into intentionality? Our brains are so well-optimized for social detection that we often see “agency” where there is only noise. This is the cognitive root of Pareidolia—the tendency to see faces in clouds or burnt toast—and it can lead us down a rabbit hole of metaphysical overreach.

If we find the intentional stance useful for a dog, then a computer, then a “smart” thermostat, where does it end? We risk sliding into a functional Panpsychism, the belief that consciousness or mind-like qualities are fundamental properties of all matter. Or perhaps we find ourselves returning to an ancient Animism, treating the river, the forest, and the thunderstorm as entities with their own agendas.

Is this “overshooting” an actual problem? From a strict materialist perspective, it might be a delusion (although, once again, what even are intentions?). From a pragmatic perspective, if an animistic view of a forest leads a community to manage its resources more sustainably than a “resource-extraction” view, then the “error” brings biological utility. Similarly, if treating a complex neural network as a “personality” allows a researcher to predict its failure modes better than a statistical analysis would, the anthropomorphism has paid its way. We may have to accept that our brains are biased toward seeing ghosts in the machine, but maybe that works just fine.

3 References

Cosmides, and Tooby. 1992. “Cognitive Adaptations for Social Exchange.” In The Adapted Mind: Evolutionary Psychology and the Generation of Culture.
———. 1994. “Better Than Rational: Evolutionary Psychology and the Invisible Hand.” The American Economic Review.
Dennett. 1998. The Intentional Stance. A Bradford Book.
Dresow, and Love. 2023. “Teleonomy: Revisiting a Proposed Conceptual Replacement for Teleology.” Biological Theory.
Foss. 1994. “On the Evolution of Intentionality as Seen from the Intentional Stance.” Inquiry.
Ji. 2024. “Demystify ChatGPT: Anthropomorphism Around Generative AI.” GRACE: Global Review of AI Community Ethics.
Mishra, and Oster. n.d. “Human or AI? Understanding the Learning Implications of Anthropomorphized Generative AI.”
Sperber, and Girotto. 2002. “Use or Misuse of the Selection Task? Rejoinder to Fiddick, Cosmides, and Tooby.” Cognition.
Sperber, and Mercier. 2012. “Reasoning as a Social Competence.” In Collective Wisdom.