Is language even symbolic, bro?

Humanistic interpretability

2024-12-19 — 2025-11-03

Wherein language is considered as symbolic and as performative, Aumann’s agreement theorem is invoked, Buddhist koans are noted, and post‑symbolic communication à la Lanier is sketched.

adaptive
bounded compute
collective knowledge
cooperation
culture
economics
ethics
evolution
extended self
game theory
gene
language
networks
neuron
semantics
sociology
wonk

I’m wondering when we should think about language as a tool to convey symbolic information. Here I mean symbolic in the mathematical sense. To what extent does language convey pure, surface-level propositional information? When I say “there is a lion across the river,” am I simply transferring a symbolic representation of the world — a representation whose referent is the presence of a particular large feline? Or am I doing something more than symbolic? Something social, or performative, or transformative? Surely I’m doing more than just transferring information, right? I’m also signalling other things, such as that I’m the sort of person who warns others about lions. And I’m telling people that I believe there really is a lion. My tone and context massively influence how we interpret the statement; if I say it while laughing or crying, we’ll interpret it differently.

On one hand, this is a hilariously nerdy question; who but a mathematician would think language is just symbolic information? And yet we often assume this is how our words are received. Who among us hasn’t been fazed when our facts, organized into propositions that should logically compel agreement, fail to persuade someone?

Parallel to constructive rationalism and simulacra [TODO clarify].

1 We behave as if we think language is symbolic

We think we can win arguments with information.

Think also of Aumann’s agreement theorem, which says that rational (Bayesian) agents who share a common prior, and whose posterior beliefs about an event are common knowledge, cannot agree to disagree: their posteriors must coincide.
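For reference, a compressed statement of the theorem; the notation is the standard one, supplied by me here for convenience.

```latex
% Two agents share a common prior P on a state space \Omega and have
% information partitions \mathcal{I}_1, \mathcal{I}_2. At the true state
% \omega, agent i's posterior for an event E \subseteq \Omega is
% q_i = P(E \mid \mathcal{I}_i(\omega)). Aumann's result:
\[
  \text{$q_1$ and $q_2$ common knowledge at } \omega
  \;\Longrightarrow\;
  q_1 = q_2,
  \qquad\text{where } q_i = P\bigl(E \mid \mathcal{I}_i(\omega)\bigr).
\]
```

The gap between that tidy hypothesis and how actual disagreements go is part of what this section is gesturing at.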

We ask people to do things and assume that once we’ve made the request, they’ll do it. But is that how people actually learn to do what we ask? Getting someone to remember to do something at the right time involves many non-trivial steps beyond merely telling them.

2 Conduit metaphor

The Conduit metaphor (Reddy 1993) describes an implicit model of communication where language is an information channel: semantic packing and unpacking happen at each end via implicit encoding and decoding rules, with shades of information theory.

This contrasts with the toolmaker metaphor, which emphasizes the active co-creation of meaning between speaker and listener.
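A toy sketch of the contrast, in Python; every name and codebook here is my own invention rather than anything from Reddy (1993). The conduit picture assumes a shared, fixed codebook, so unpacking simply inverts packing; the toolmaker picture has the listener build a meaning out of their own materials, which may only loosely match the speaker’s.

```python
# Toy contrast between the conduit and toolmaker pictures of communication.
# The codebooks and the "experience" dict are invented for illustration.

SPEAKER_CODEBOOK = {"danger": "lion", "food": "berries"}

# Conduit picture: the listener holds the mirror-image codebook, so
# decode(encode(meaning)) == meaning, like a lossless channel.
LISTENER_CODEBOOK = {"lion": "danger", "berries": "food"}


def encode(meaning: str) -> str:
    """Speaker packs a meaning into a word."""
    return SPEAKER_CODEBOOK[meaning]


def decode(word: str) -> str:
    """Listener unpacks the word back into a meaning."""
    return LISTENER_CODEBOOK[word]


assert decode(encode("danger")) == "danger"  # the meaning "travels" intact

# Toolmaker picture: no shared codebook is guaranteed. The listener assembles
# an interpretation from whatever materials their own experience supplies.
LISTENER_EXPERIENCE = {"lion": "big cat at the zoo, mostly behind glass"}


def interpret(word: str, experience: dict) -> str:
    """Listener constructs a meaning from their own materials."""
    return experience.get(word, "no idea what you mean")


print(interpret("lion", LISTENER_EXPERIENCE))  # not the urgent danger the speaker meant
```

The point of the toy is only that the conduit story has no vocabulary for the second case, which is the ordinary one.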

3 The experience of words is not like symbol processing

Buddhist texts often work performatively, aiming to transform the reader’s perspective or consciousness; that’s why koans are a thing.

A brief literature search surfaces some contrasts I should look up:

  1. Propositional vs. Transformational Communication
  2. Informational vs. Performative Language
  3. Cognitive vs. Experiential Modes

4 Argumentative Theory of Reasoning

Henry Farrell on Mercier and Sperber:

First — that reasoning has not evolved in the ways that we think it has — as a process of ratiocination that is intended independently to figure out the world. Instead, it has evolved as a social capacity — as a means to justify ourselves to others. We want something to be so, and we use our reasoning capacity to figure out plausible-seeming reasons to convince others that it should be so. However (and this is the main topic of a more recent book by Hugo (Mercier 2020)), together with our capacity to generate plausible-sounding rationales, we have a decent capacity to detect when others are bullshitting us. In combination, these mean that we are more likely to be closer to the truth when we are trying to figure out why others may be wrong, than when we are trying to figure out why we ourselves are right. … We need negative criticisms from others since they lead us to understand weaknesses in our arguments that we are incapable of coming at ourselves, without them being pointed out to us.

The Argumentative Theory of Reasoning proposes that human reasoning evolved not primarily for individual problem-solving or truth-seeking, but as a means of argumentation—specifically to persuade others and to evaluate arguments in social contexts. Highlights:

  • Evolutionary perspective: Reasoning developed as an adaptation for social living, where cooperation and persuasion play vital roles.
  • Persuasion: It helps us develop and present arguments that convince others, strengthening group cohesion and collective decision-making.
  • Bias and motivated reasoning: Our cognitive biases (confirmation bias, myside bias) aren’t evolutionary bugs but features that support argumentation—helping us build cases for our viewpoints.
  • Group benefits: Although reasoning is often biased at the individual level, debate and critique within groups can lead to better collective outcomes, because poor arguments get challenged and strong ones get refined.

5 Do we even need symbols?

Lanier (2010) proposes “post-symbolic communication”: communication beyond the symbolic forms that modernity’s legibility favours; I suppose “pre-symbolic communication” might live in the metis regime. Quoting him at length:

Suppose we had the ability to morph at will, as fast as we can think. What sort of language might that make possible? Would it be the same old conversation, or would we be able to “say” new things to one another?

For instance, instead of saying, “I’m hungry; let’s go crab hunting,” you might simulate your own transparency so your friends could see your empty stomach, or you might turn into a video game about crab hunting so you and your compatriots could get in a little practice before the actual hunt.

I call this possibility “post-symbolic communication.” It can be a hard idea to think about, but I find it enormously exciting. It would not suggest an annihilation of language as we know it—symbolic communication would continue to exist—but it would give rise to a vivid expansion of meaning.

This is an extraordinary transformation that people might someday experience. We’d then have the option of cutting out the “middleman” of symbols and directly creating shared experience. A fluid kind of concreteness might turn out to be more expressive than abstraction.

In the domain of symbols, you might be able to express a quality like “redness.” In post-symbolic communication, you might come across a red bucket. Pull it over your head, and you discover that it is cavernous on the inside. Floating in there is every red thing: there are umbrellas, apples, rubies, and droplets of blood. The red within the bucket is not Plato’s eternal red. It is concrete. You can see for yourself what the objects have in common. It’s a new kind of concreteness that is as expressive as an abstract category.

This is perhaps a dry and academic-sounding example. I also don’t want to pretend I understand it completely. Fluid concreteness would be an entirely new expressive domain. It would require new tools, or instruments, so that people could achieve it.

I imagine a virtual saxophone-like instrument in virtual reality with which I can improvise both golden tarantulas and a bucket with all the red things. If I knew how to build it now, I would, but I don’t.

I consider it a fundamental unknown whether it is even possible to build such a tool in a way that would actually lift the improviser out of the world of symbols. Even if you used the concept of red in the course of creating the bucket of all red things, you wouldn’t have accomplished this goal.

I spend a lot of time on this problem. I am trying to create a new way to make software that escapes the boundaries of preexisting symbol systems. This is my phenotropic project.

The point of the project is to find a way of making software that rejects the idea of the protocol. Instead, each software module must use emergent generic pattern-recognition techniques—similar to the ones I described earlier, which can recognise faces—to connect with other modules. Phenotropic computing could potentially result in a kind of software that is less tangled and unpredictable, since there wouldn’t be protocol errors if there weren’t any protocols. It would also suggest a path to escaping the prison of predefined, locked-in ontologies like MIDI in human affairs.

I’m not convinced — I might explain why at some point.
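For concreteness, here is one toy reading of “connect by pattern recognition rather than by protocol”; it is entirely my construction, not Lanier’s design, and cheap fuzzy string matching stands in for whatever richer recognition he imagines. Modules advertise example requests they can serve, and a caller routes a free-form request to the closest advertised example instead of conforming to a schema.

```python
# Toy reading of "phenotropic" coupling: modules advertise example requests,
# and a caller dispatches by fuzzy similarity rather than a fixed protocol.
# Entirely a sketch for illustration, not Lanier's actual proposal.
from difflib import SequenceMatcher

MODULES = {
    "resize the image to 200 by 200 pixels": lambda req: f"[image module] handling: {req}",
    "translate this sentence into french": lambda req: f"[language module] handling: {req}",
}


def dispatch(request: str, threshold: float = 0.5) -> str:
    """Route a free-form request to the module with the most similar advertised example."""
    best_example, best_score = None, 0.0
    for example in MODULES:
        score = SequenceMatcher(None, request.lower(), example).ratio()
        if score > best_score:
            best_example, best_score = example, score
    if best_example is None or best_score < threshold:
        # No protocol, so no protocol error: the system just shrugs.
        return "no module recognised this request"
    return MODULES[best_example](request)


print(dispatch("please resize my image to 200 by 200"))
print(dispatch("translate a sentence into French"))
print(dispatch("what is the airspeed velocity of an unladen swallow?"))
```

A request nothing recognises degrades into a shrug rather than a protocol error, which is roughly the property Lanier is after.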

6 Naive free speech advocacy treats language as symbolic

TBC

7 Incoming

8 References

Adolphs. 2009. “The Social Brain: Neural Basis of Social Knowledge.” Annual Review of Psychology.
Ajduković. 2007. “Attitude change and need for cognition in debaters and non-debaters.”
Baronchelli, Gong, Puglisi, et al. 2010. “Modeling the emergence of universality in color naming patterns.” Proceedings of the National Academy of Sciences of the United States of America.
Barrett, and Henzi. 2005. “The Social Nature of Primate Cognition.” Proceedings of the Royal Society B: Biological Sciences.
Barrett, Henzi, and Rendall. 2006. “Social Brains, Simple Minds: Does Social Complexity Really Require Cognitive Complexity?” Philosophical Transactions of the Royal Society B: Biological Sciences.
Cancho, and Solé. 2003. “Least Effort and the Origins of Scaling in Human Language.” Proceedings of the National Academy of Sciences.
Cao, Lazaridou, Lanctot, et al. 2018. “Emergent Communication Through Negotiation.”
Chaabouni, Kharitonov, Dupoux, et al. 2019. “Anti-Efficient Encoding in Emergent Communication.” In Advances in Neural Information Processing Systems.
Chen, Martínez, and Cheng. 2018. “The Developmental Origins of the Social Brain: Empathy, Morality, and Justice.” Frontiers in Psychology.
Christiansen, and Chater. 2008. “Language as Shaped by the Brain.” Behavioral and Brain Sciences.
Cook, Lewandowsky, and Ecker. 2017. “Neutralizing Misinformation Through Inoculation: Exposing Misleading Argumentation Techniques Reduces Their Influence.” PLOS ONE.
Cosmides, and Tooby. 1992. “Cognitive Adaptations for Social Exchange.” The Adapted Mind: Evolutionary Psychology and the Generation of Culture.
Costello, Pennycook, and Rand. 2024. “Durably Reducing Conspiracy Beliefs Through Dialogues with AI.” Science.
Dunbar. 1993. “Coevolution of Neocortex Size, Group Size and Language in Humans.” Behavioral and Brain Sciences.
Galesic, Barkoczi, Berdahl, et al. 2022. “Beyond Collective Intelligence: Collective Adaptation.”
Havrylov, and Titov. 2017. “Emergence of Language with Multi-Agent Games: Learning to Communicate with Sequences of Symbols.”
Hoppitt, and Laland. 2013. Social Learning: An Introduction to Mechanisms, Methods, and Models.
Jiang, and Lu. 2018. “Learning Attentional Communication for Multi-Agent Cooperation.” In Advances in Neural Information Processing Systems.
Kim. 2015. “Does Disagreement Mitigate Polarization? How Selective Exposure and Disagreement Affect Political Polarization.” Journalism & Mass Communication Quarterly.
Köster, Hadfield-Menell, Everett, et al. 2022. “Spurious Normativity Enhances Learning of Compliance and Enforcement Behavior in Artificial Agents.” Proceedings of the National Academy of Sciences.
Laland. 2004. “Social Learning Strategies.” Animal Learning & Behavior.
Lanier. 2010. You Are Not a Gadget: A Manifesto.
Lian, Bisazza, and Verhoef. 2021. “The Effect of Efficient Messaging and Input Variability on Neural-Agent Iterated Language Learning.”
Lieberman. 2013. Social: Why Our Brains Are Wired to Connect.
Mercier. 2020. Not Born Yesterday: The Science of Who We Trust and What We Believe.
Mercier, and Sperber. 2011a. “Argumentation: Its Adaptiveness and Efficacy.” Behavioral and Brain Sciences.
———. 2011b. “Why Do Humans Reason? Arguments for an Argumentative Theory.” Behavioral and Brain Sciences.
———. 2017. The Enigma of Reason.
Molapour, Hagan, Silston, et al. 2021. “Seven Computations of the Social Brain.” Social Cognitive and Affective Neuroscience.
Peregrin. 2017. Meaning and Structure: Structuralism of (Post)Analytic Philosophers.
Petersson, Folia, and Hagoort. 2012. “What Artificial Grammar Learning Reveals about the Neurobiology of Syntax.” Brain and Language, The Neurobiology of Syntax.
Peysakhovich, and Lerer. 2017. “Prosocial Learning Agents Solve Generalized Stag Hunts Better Than Selfish Ones.”
Reddy. 1993. “The Conduit Metaphor: A Case of Frame Conflict in Our Language about Language.” In Metaphor and Thought.
Resnick, Gupta, Foerster, et al. 2020. “Capacity, Bandwidth, and Compositionality in Emergent Language Learning.”
Salvi, Ribeiro, Gallotti, et al. 2024. “On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial.”
Sperber, and Mercier. 2012. “Reasoning as a Social Competence.” In Collective Wisdom.
Steyvers, and Tenenbaum. 2005. “The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth.” Cognitive Science.
Tan, Niculae, Danescu-Niculescu-Mizil, et al. 2016. “Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-Faith Online Discussions.” In Proceedings of the 25th International Conference on World Wide Web. WWW ’16.
Trouche, Sander, and Mercier. 2014. “Arguments, More Than Confidence, Explain the Good Performance of Reasoning Groups.” SSRN Scholarly Paper ID 2431710.
Tucker, Li, Agrawal, et al. 2021. “Emergent Discrete Communication in Semantic Spaces.” In Advances in Neural Information Processing Systems.
van der Post, Franz, and Laland. 2016. “Skill Learning and the Evolution of Social Learning Mechanisms.” BMC Evolutionary Biology.