Draft

Epistemic bottlenecks

August 24, 2021 — September 13, 2022

Tags: adaptive, agents, bounded compute, classification, collective knowledge, communicating, distributed, economics, evolution, how do science, incentive mechanisms, information, institutions, language, learning, mind, networks, social graph, sociology, standards, stringology, virality

Is the Bitter Lesson about minimising transmission costs?

What is the transmissibility of knowledge? What is knowledge about? Can an LLM teach? (Apparently yes?) Can an LLM teach other LLMs? (Apparently also yes?)


1 Do we even need symbols?

Lanier (2010) has a notion of “post-symbolic communication”, something that exists beyond the symbolic communication that modernity’s legibility favours, while “pre-symbolic communication”, I suppose, is what lives in the metis regime. He writes:

Suppose we had the ability to morph at will, as fast as we can think. What sort of language might that make possible? Would it be the same old conversation, or would we be able to “say” new things to one another?

For instance, instead of saying, “I’m hungry; let’s go crab hunting,” you might simulate your own transparency so your friends could see your empty stomach, or you might turn into a video game about crab hunting so you and your compatriots could get in a little practice before the actual hunt.

I call this possibility “postsymbolic communication.” It can be a hard idea to think about, but I find it enormously exciting. It would not suggest an annihilation of language as we know it—symbolic communication would continue to exist—but it would give rise to a vivid expansion of meaning.

This is an extraordinary transformation that people might someday experience. We’d then have the option of cutting out the “middleman” of symbols and directly creating shared experience. A fluid kind of concreteness might turn out to be more expressive than abstraction.

In the domain of symbols, you might be able to express a quality like “redness.” In postsymbolic communication, you might come across a red bucket. Pull it over your head, and you discover that it is cavernous on the inside. Floating in there is every red thing: there are umbrellas, apples, rubies, and droplets of blood. The red within the bucket is not Plato’s eternal red. It is concrete. You can see for yourself what the objects have in common. It’s a new kind of concreteness that is as expressive as an abstract category.

This is perhaps a dry and academic-sounding example. I also don’t want to pretend I understand it completely. Fluid concreteness would be an entirely new expressive domain. It would require new tools, or instruments, so that people could achieve it.

I imagine a virtual saxophone-like instrument in virtual reality with which I can improvise both golden tarantulas and a bucket with all the red things. If I knew how to build it now, I would, but I don’t.

I consider it a fundamental unknown whether it is even possible to build such a tool in a way that would actually lift the improviser out of the world of symbols. Even if you used the concept of red in the course of creating the bucket of all red things, you wouldn’t have accomplished this goal.

I spend a lot of time on this problem. I am trying to create a new way to make software that escapes the boundaries of preexisting symbol systems. This is my phenotropic project.

The point of the project is to find a way of making software that rejects the idea of the protocol. Instead, each software module must use emergent generic pattern-recognition techniques—similar to the ones I described earlier, which can recognize faces—to connect with other modules. Phenotropic computing could potentially result in a kind of software that is less tangled and unpredictable, since there wouldn’t be protocol errors if there weren’t any protocols. It would also suggest a path to escaping the prison of predefined, locked-in ontologies like MIDI in human affairs.

I am not convinced, for reasons I might go into at some point.
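To make the quoted contrast concrete, here is a toy sketch of my own (not Lanier’s, and nothing like a real phenotropic system): one receiver that demands an exact, pre-agreed field name, next to one that binds to whichever of the sender’s fields looks most similar, with fuzzy string matching standing in for “generic pattern recognition”. All module and field names here are invented for illustration.

```python
# Toy contrast between "protocol" coupling and a crude pattern-matching coupling.
# The fuzzy receiver guesses which of the sender's fields it wants by string
# similarity rather than by an exact, pre-agreed schema. Illustration only.

import difflib


def protocol_receiver(message: dict) -> float:
    """Rigid coupling: raises KeyError if the sender renames the field."""
    return message["temperature_celsius"]


def fuzzy_receiver(message: dict, wanted: str = "temperature_celsius") -> float:
    """Sloppy coupling: bind to the sender's field whose name most resembles
    the one we wanted. Degrades gracefully, but can silently mis-bind."""
    candidates = difflib.get_close_matches(wanted, list(message.keys()), n=1, cutoff=0.3)
    if not candidates:
        raise ValueError(f"nothing in {list(message.keys())} resembles {wanted!r}")
    return message[candidates[0]]


if __name__ == "__main__":
    # A sender that never agreed on our field names.
    msg = {"temp_celsius": 21.5, "humidity_pct": 40.0}

    print(fuzzy_receiver(msg))  # 21.5, via the near-match on "temp_celsius"
    try:
        protocol_receiver(msg)
    except KeyError as missing:
        print(f"protocol coupling failed: missing field {missing}")
```

The trade-off is plain even in the toy: the fuzzy receiver keeps limping along when the sender’s vocabulary drifts, but it can also bind to the wrong field without complaint, exchanging loud protocol errors for quieter failures.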

2 References

Baronchelli, Gong, Puglisi, et al. 2010. “Modeling the Emergence of Universality in Color Naming Patterns.” Proceedings of the National Academy of Sciences of the United States of America.
Cancho, and Solé. 2003. “Least Effort and the Origins of Scaling in Human Language.” Proceedings of the National Academy of Sciences.
Cao, Lazaridou, Lanctot, et al. 2018. “Emergent Communication Through Negotiation.”
Chaabouni, Kharitonov, Dupoux, et al. 2019. “Anti-Efficient Encoding in Emergent Communication.” In Advances in Neural Information Processing Systems.
———, et al. 2021. “Communicating Artificial Neural Networks Develop Efficient Color-Naming Systems.” Proceedings of the National Academy of Sciences.
Christiansen, and Chater. 2008. “Language as Shaped by the Brain.” Behavioral and Brain Sciences.
Corominas-Murtra, and Solé. 2010. “Universality of Zipf’s Law.” Physical Review E.
Galesic, Barkoczi, Berdahl, et al. 2022. “Beyond Collective Intelligence: Collective Adaptation.”
Gozli. 2023. “Principles of Categorization: A Synthesis.” Seeds of Science.
Havrylov, and Titov. 2017. “Emergence of Language with Multi-Agent Games: Learning to Communicate with Sequences of Symbols.”
Jiang, and Lu. 2018. “Learning Attentional Communication for Multi-Agent Cooperation.” In Advances in Neural Information Processing Systems.
Lanier. 2010. You Are Not a Gadget: A Manifesto.
Lian, Bisazza, and Verhoef. 2021. “The Effect of Efficient Messaging and Input Variability on Neural-Agent Iterated Language Learning.”
Loreto, Mukherjee, and Tria. 2012. “On the Origin of the Hierarchy of Color Names.” Proceedings of the National Academy of Sciences of the United States of America.
Lowe, Foerster, Boureau, et al. 2019. “On the Pitfalls of Measuring Emergent Communication.”
O’Connor. 2017. “Evolving to Generalize: Trading Precision for Speed.” British Journal for the Philosophy of Science.
Petersson, Folia, and Hagoort. 2012. “What Artificial Grammar Learning Reveals about the Neurobiology of Syntax.” Brain and Language (The Neurobiology of Syntax).
Peysakhovich, and Lerer. 2017. “Prosocial Learning Agents Solve Generalized Stag Hunts Better Than Selfish Ones.”
Resnick, Gupta, Foerster, et al. 2020. “Capacity, Bandwidth, and Compositionality in Emergent Language Learning.”
Smith. 2022. The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning.
Steyvers, and Tenenbaum. 2005. “The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth.” Cognitive Science.
Tucker, Li, Agrawal, et al. 2021. “Emergent Discrete Communication in Semantic Spaces.” In Advances in Neural Information Processing Systems.
Weisbuch, Deffuant, Amblard, et al. 2002. “Meet, Discuss, and Segregate!” Complexity.