Open ended intelligence
Need is all you need
2022-11-27 — 2025-09-25
Wherein two paradigms are surveyed, optimizing agents and replicating persisters are contrasted, intrinsic drives such as curiosity and empowerment are proposed as bridges, and open‑ended generators like POET are described.
1 Two paradigms of adaptation
Consider entities that try to ‘do well’ by simply continuing to exist. How is the loss function of an optimizer related to the notional fitness function of an evolutionary entity? “Entities that optimize for goals, above all,” versus “entities that replicate and persist, above all.”
These are two different paradigms for adaptive entities: optimizing (what our algorithms usually aim for) and persisting (what evolution produces).
Rather than being born with a single overriding goal encoded in a loss function that ranks states as better or worse, we evolutionary entities are messier. We have a deep drive to survive and also a desire to succeed while alive, where succeeding seems to be a somewhat adjustable criterion and might include being “happy”, “good”, “successful”, “loved”, “powerful”, “wise”, “free”, “just”, “beautiful”, “funny”, “interesting”, “creative”, “kind”, “rich” or “righteous”. Or whatever.
Optimized and evolved entities are both present in the world. Usually we think of surviving as the domain of life, and optimizing as the domain of machines, although the line is fuzzy thanks to genetic programming and self-optimizing nerds. Maybe that’s why machines seem so utterly alien to us. As an evolutionary replicator myself, I tend to fear optimizers, and I wonder how my interests can actually align with theirs.
There are newer non-optimizing paradigms for AI (Lehman and Stanley 2011; Ringstrom 2022), and I wonder whether they can do anything useful.
Cf. Arcas et al. (2024), which suggests that replicating sometimes emerges naturally from machines.
2 Intrinsic motivation models
One way to bridge the gap between pure optimizers and pure persisters is through intrinsic motivation. Instead of waiting for a sparse external signal (reward, loss, profit), an agent generates its own signals: curiosity, play, empowerment, surprise, novelty.
- Curiosity encourages agents to seek states where their predictions fail, so that resolving the resulting prediction error reduces uncertainty over time.
- Play is practice without explicit external reward, yet it builds flexible repertoires of behaviour.
- Empowerment (as discussed earlier) motivates an agent to maintain future optionality by staying in states with many possible futures.
- Novelty search abandons external performance measures altogether, rewarding only the discovery of novel behaviour (Lehman and Stanley 2011).
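To make the first and last of these concrete, here is a minimal sketch of two intrinsic reward signals: a prediction-error curiosity bonus and a novelty bonus in the style of novelty search. The toy 2-D “behaviour descriptors”, the k-nearest-neighbour distance, and all function names are illustrative assumptions, not any particular paper’s formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def curiosity_bonus(predicted_next, actual_next):
    """Prediction-error curiosity: reward states the agent's model fails to predict."""
    return float(np.sum((predicted_next - actual_next) ** 2))

def novelty_bonus(behaviour, archive, k=3):
    """Novelty search: mean distance to the k nearest behaviours seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(np.linalg.norm(behaviour - b) for b in archive)
    return float(np.mean(dists[:k]))

# Toy rollout: behaviours are 2-D descriptors; the archive grows as exploration proceeds,
# so behaviour near already-visited points earns less novelty reward.
archive = []
for _ in range(5):
    b = rng.normal(size=2)
    score = novelty_bonus(b, archive)
    archive.append(b)
```

Note that neither signal mentions an external task: the “reward” is generated entirely from the agent’s own model and its own history.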
Recent work explores how such motivations can scale: intrinsically motivated deep RL (Du et al. 2023), empowerment approximations (Lidayan et al. 2025), models of complex curiosity (Ramírez-Ruiz et al. 2024), and even formal theories of curiosity (Schmidhuber 2010). These paradigms don’t abolish optimization, but they re-anchor it in something closer to evolutionary persistence — maintaining flexibility, exploration, and continued existence.
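Empowerment itself is a channel capacity between actions and future states, which is expensive in general; but for deterministic dynamics with open-loop action sequences it reduces to the log of the number of distinct reachable states. The following sketch computes that exactly for an assumed toy 1-D gridworld (the environment, horizon, and action set are illustrative choices, not from the cited papers); real approximations such as those in the work above tackle the stochastic, high-dimensional case.

```python
import itertools
import math

def step(state, action, size=7):
    """Deterministic 1-D walk with walls at 0 and size-1."""
    return min(max(state + action, 0), size - 1)

def empowerment(state, horizon=3, actions=(-1, 0, 1)):
    """n-step empowerment as log2 of the number of distinct states reachable by
    open-loop action sequences (exact channel capacity for deterministic dynamics)."""
    reachable = set()
    for seq in itertools.product(actions, repeat=horizon):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return math.log2(len(reachable))
```

The centre of the corridor has higher empowerment than a corner: from the middle, more futures remain open, which is exactly the optionality-preserving behaviour the intrinsic drive rewards.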
3 Open-endedness
This brings us to a more grandiose question: what does it mean for a system to be “open-ended”?
Jeff Clune posed a version of this question at EXAIT:

> Could we devise an open-ended exploratory algorithm that is worth running for a billion years?
This isn’t about solving a single benchmark or reaching a single target loss. It’s about building processes that never finish and that continually create novelty, complexity, and surprise. That’s what life itself appears to be building.
Researchers have started sketching pathways:
- POET: endlessly generates new environments and their solutions in tandem (Wang et al. 2019, 2020; Ecoffet et al. 2021).
- Quality-diversity algorithms: not just about finding a single optimum but about filling out the space of possible strategies (Cully et al. 2015).
- OMNI-EPIC: links human notions of interestingness with programmatically generated environments (Faldor et al. 2024). See also the `maxencefaldor/omni-epic` repository.
- Broader frameworks in AI-GA (AI generating algorithms) (Clune 2020).
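The generate-optimize-transfer skeleton shared by these systems can be sketched in a few lines. In this deliberately toy version an “environment” is just a difficulty level and an “agent” a scalar skill; real POET co-evolves terrain generators and neural controllers, so everything here beyond the loop structure is an assumption for illustration.

```python
import random

random.seed(0)

def optimize(agent, env, steps=20):
    """Hill-climb the agent's skill toward the environment's difficulty."""
    for _ in range(steps):
        candidate = agent + random.uniform(-0.1, 0.3)
        if abs(candidate - env) < abs(agent - env):
            agent = candidate
    return agent

pairs = [(0.5, 0.0)]  # (environment difficulty, agent skill)
for generation in range(10):
    # 1. Mutate an existing environment to create a new, slightly harder one.
    parent_env, _ = random.choice(pairs)
    child_env = parent_env + random.uniform(0.0, 0.5)
    # 2. Transfer: seed the new environment with the most skilled existing agent.
    best_agent = max(agent for _, agent in pairs)
    pairs.append((child_env, optimize(best_agent, child_env)))
    # 3. Keep optimizing every agent within its paired environment.
    pairs = [(env, optimize(agent, env)) for env, agent in pairs]
```

The point of the sketch is that there is no terminal objective: the loop manufactures its own increasingly difficult problems and recycles old solutions as stepping stones, which is the open-ended pattern the section describes.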
These approaches move away from a single-point optimization worldview toward something more like evolution: messy, self-propagating, self-diversifying, and driven by the twin imperatives of persistence and exploration.
Is that what life is building?