Gradient steps to an ecology of mind
Regularised survival of the fittest
2011-11-27 — 2025-09-03
Wherein the social roots of consciousness are examined, the impact of compute and data asymmetries on equilibria between other-modelling agents is considered, and cultural patterns such as altruistic punishment are noted.
You know that you are not immortal. You should know that an infinity of time is necessary for the acquirement of infinite knowledge; and that your span of life will be just as short, in comparison with your capacity to live and to learn, as that of Homo Sapiens. When the time comes you will want to—you will need to—change your manner of living. — Children of the Lens, E. E. “Doc” Smith.
At social brain I wonder how we (humans) behave socially and evolutionarily. Here I ponder whether consciousness is intrinsically social, and whether non-social intelligences need, or are likely to have, consciousness. What ethics will they execute on their moral wetware? cf. multi-agent systems.
Related: what is consciousness? Do other minds possess “self”? Do they care about their own survival? Does selfhood evolve only in evolutionary contexts, in an ecosystem of interacting agents of similar power? Is consciousness even that great anyway?
My model of what we value in human interaction is generalized cooperation, made possible by our inability to be optimal expected-value (EV) maximizers. Instead of needing enforceable commitments and perfect models, we have noisy, imperfect models of each other, which can lead to locally inefficient but globally interesting outcomes. For example, I live in a world with many interesting features that do not seem EV-optimal in any easy-to-define way, but which I think are an important part of the human experience and cannot be reproduced in a society of Molochian utility optimizers.
Examples:
- We run prisons, which are expensive altruistic punishments against an out-group.
- At the same time, we have a society that somehow fosters occasional extreme out-group cooperation; for example, my childhood was full of pro-refugee rallies, for which attendees could expect no possible gain and which are not easy to explain in terms of myopic kin-selection/selfish genes or in terms of Machiavellian EV coordination.
Basically, I think a lot of interesting cultural patterns can free-ride on our inability to optimize for EV, individually or collectively. Trying to cash out “failure to optimize for EV” in a utility function seems ill-posed. All of which is to say that I suspect if we optimize only for EV, we probably lose anything that is recognizably human. Is that bad? It seems so to me, but maybe that’s just a parochially human thing to say. And yet, for whom would that expected value be valuable?
What do we strive for, then, if not the utility of the utilitarian? What is the mind that places great importance on outcomes for other minds?
1 What equilibria are possible between other-modeling agents?
Suppose we are world-modelling agents, and in particular we are minds, because we need to model other minds — the most complicated part of the world. I think this recursive definition is basically how humans work, in some way that I’d love to be able to make precise.
We could ask other questions like: Is subjective continuity a handy way to get entities to invest in their own persistence? Is that what consciousness is?
Those questions are for later, and honestly, preferably for someone else to answer — I find all this interest in consciousness baffling and slightly tedious.
For now, let’s just say I think that existing in some kind of cognitive equilibrium with near-peers is central to the human experience, and I want to figure out whether this hypothetical equilibrium is real and, if so, how it gets disrupted by superhuman information-processing agents.
If it is real, would minds “like ours” be stable orbits in the trajectory of modern compute? A subsidiary question: are epistemic communities like ours stable orbits in that trajectory?
There are two constraints that determine how well one agent can model another:
- Compute. How sophisticated is the model’s inference?
- Data. How much data does the model have to learn from?
Both matter because digital agents usually have a lot more of both than humans do, and the asymmetries could be crucial if their effects differ. There are other details, like good algorithms, that I’m happy to handwave away for now, à la Marcus Hutter.
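To make those two knobs concrete, here is a minimal toy sketch of my own (nothing above specifies this model): “compute” is operationalised as the context length of a small Markov predictor each agent can afford to fit, and “data” as how much interaction history it retains. The Agent class, the XOR-ish behaviour pattern, and all the budget numbers are arbitrary illustrative choices, not claims about how real agents work.

```python
# Toy sketch: two agents each try to predict the other's next binary action.
# "Compute" = order of the Markov predictor; "data" = length of retained history.
import random
from collections import defaultdict


class Agent:
    """An agent that models another agent under compute and data budgets."""

    def __init__(self, order, memory, seed):
        self.order = order    # "compute": context length of the Markov predictor
        self.memory = memory  # "data": how many past observations are retained
        self.rng = random.Random(seed)
        self.seen = []        # observed actions of the other agent
        self.own = []         # this agent's own past actions

    def observe(self, other_action):
        self.seen.append(other_action)
        self.seen = self.seen[-self.memory:]

    def predict_other(self):
        # Fit order-k Markov counts over the other agent's observed actions,
        # then predict the most frequent continuation of the current context.
        counts = defaultdict(lambda: [0, 0])
        h = self.seen
        for t in range(self.order, len(h)):
            counts[tuple(h[t - self.order:t])][h[t]] += 1
        if len(h) < self.order:
            return self.rng.randint(0, 1)
        zeros, ones = counts[tuple(h[-self.order:])]
        if zeros == ones:
            return self.rng.randint(0, 1)
        return int(ones > zeros)

    def act(self):
        # A simple hidden pattern: usually the XOR of this agent's own last two
        # actions, occasionally randomised. An order-1 model of this sequence
        # stays near chance; an order-2+ model with enough data does well.
        if len(self.own) >= 2 and self.rng.random() < 0.9:
            a = self.own[-1] ^ self.own[-2]
        else:
            a = self.rng.randint(0, 1)
        self.own.append(a)
        return a


def run(rounds=2000):
    big = Agent(order=3, memory=1000, seed=1)   # more compute, more data
    small = Agent(order=1, memory=50, seed=2)   # less of both
    big_hits = small_hits = 0
    for _ in range(rounds):
        p_big, p_small = big.predict_other(), small.predict_other()
        a_big, a_small = big.act(), small.act()
        big_hits += p_big == a_small
        small_hits += p_small == a_big
        big.observe(a_small)
        small.observe(a_big)
    print(f"big agent's accuracy predicting small: {big_hits / rounds:.2f}")
    print(f"small agent's accuracy predicting big: {small_hits / rounds:.2f}")


if __name__ == "__main__":
    run()
```

Under these cherry-picked assumptions the better-resourced agent should predict its counterpart well above chance while the reverse hovers near a coin flip, which is the flavour of asymmetry I have in mind; nothing hinges on the specific numbers.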
TBC
2 Fitness function versus utility function
3 Incoming
Gordon Brander, Co-evolution creates living complexity
PIBBSS – Principles of Intelligent Behavior in Biological and Social Systems
Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS) is a research initiative aiming to leverage insights on the parallels between intelligent behaviour in natural and artificial systems towards progress on important questions in AI risk, governance and safety.
We run a number of programs to facilitate this type of research, support talent and build a strong research network around this epistemic approach.
How have I not heard of this mob? Their reading list looks like my Santa Fe Institute-flavoured complex-systems undergraduate degree all over again.