Learning to act in generative settings

On the formal theory of choosing to do that which you’ve never seen done before

2025-10-06 — 2025-10-06

Wherein generative models are surveyed as policies and as world models, dual and contrastive formulations such as the DICE family are collected, goal-conditioned and skill-discovery objectives are noted, and online learning over generated action sets is flagged as incoming.

adaptive
agents
energy
evolution
extended self
game theory
gene
incentive mechanisms
learning
mind
networks
probability
statistics
statmech
utility
wonk

Learning to act in generative action sets: notes toward a theory of learning to act in an open world where new actions are produced by generative models.

1 Generative Models as RL Policies

  • Janner et al. (2022) → First work treating diffusion models as control policies; inspired “Diffusion-DICE”. A toy reverse-diffusion action sampler is sketched after this list.
  • Chen et al. (2021) → Pioneered autoregressive policy generation conditioned on rewards and trajectories.
  • Lu et al. (2023) → Introduced energy-based control of diffusion policies (bridge to goal-conditioned RL).
  • Mao et al. (2024) → The canonical “generative action set” model: combines denoising diffusion with dual-form RL.
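
To make the first row concrete: a diffusion policy produces an action by running a learned denoiser backwards from Gaussian noise, conditioned on the current state (Janner et al. denoise whole trajectories and Mao et al. add dual-form guidance; the toy below collapses both to the simplest single-action case). Everything here is a placeholder of my own invention so that the snippet runs: `eps_model` stands in for a trained noise-prediction network, and the schedule is a generic DDPM one.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, N_STEPS = 4, 2, 50

# Toy stand-in for a learned noise-prediction network eps_theta(a_t, s, t).
# In a real system this is a trained neural network; here it is a fixed
# random linear map so the script runs end to end.
W = rng.normal(scale=0.1, size=(STATE_DIM + ACTION_DIM + 1, ACTION_DIM))

def eps_model(a_t, state, t):
    feats = np.concatenate([a_t, state, [t / N_STEPS]])
    return feats @ W

# A simple linear beta schedule, as in basic DDPM.
betas = np.linspace(1e-4, 0.02, N_STEPS)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def sample_action(state):
    """Draw an action by reverse diffusion, conditioned on the state."""
    a = rng.normal(size=ACTION_DIM)           # start from pure noise
    for t in reversed(range(N_STEPS)):
        eps_hat = eps_model(a, state, t)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (a - coef * eps_hat) / np.sqrt(alphas[t])
        noise = rng.normal(size=ACTION_DIM) if t > 0 else 0.0
        a = mean + np.sqrt(betas[t]) * noise  # ancestral sampling step
    return a

print(sample_action(np.zeros(STATE_DIM)))
```

The point is only that the “policy” is a sampler: the same machinery that generates images generates actions, and anything that steers the sampler (guidance, energy functions, DICE weights) steers the policy.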

2 Generative Models as World Models

Should look that up too

3 Dual & Contrastive Formulations

  • Nachum and Dai (2020) → Mathematical foundation for the DICE family of dual generative RL objectives; a schematic form of the dual objective is sketched after this list.
  • Sikchi et al. (2025) → Unifies DICE and generative imitation through f-divergence matching.
  • Eysenbach et al. (2022) → Connects InfoNCE and successor features — key to generative goal-reaching.
  • Blier, Tallec, and Ollivier (2021) → Formal basis for contrastive successor measures and generative planning.
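
To give a flavour of what the “dual” in DICE means, here is a schematic statement of the divergence-regularized dual objective in the spirit of Nachum and Dai (2020). This is a loose paraphrase from memory, not a quotable theorem: the scaling by \(\alpha\), the placement of the expectation over \(s'\), and the choice of \(f\) all vary between papers.

\[
\max_{\pi}\;\min_{Q}\;(1-\gamma)\,\mathbb{E}_{s_0\sim\mu_0,\,a_0\sim\pi(\cdot\mid s_0)}\big[Q(s_0,a_0)\big]
\;+\;\alpha\,\mathbb{E}_{(s,a)\sim d^{\mathcal{D}}}\!\left[f^{*}\!\left(\frac{r(s,a)+\gamma\,\mathbb{E}_{s'\sim P,\,a'\sim\pi}\big[Q(s',a')\big]-Q(s,a)}{\alpha}\right)\right]
\]

Here \(f^{*}\) is the convex conjugate of the divergence generator \(f\), \(d^{\mathcal{D}}\) is the data distribution and \(\mu_0\) the initial-state distribution; the useful property is that every expectation is over either the dataset or the initial state, so the objective can be estimated entirely from off-policy samples.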

4 Goal-Conditioned and Generative Action RL

  • Eysenbach, Salakhutdinov, and Levine (2021) → Reframes unsupervised skill discovery as a generative manifold problem.
  • Ghosh et al. (2020) → Empirical foundations for goal-conditioned generative imitation; the hindsight-relabelling loop is sketched after this list.
  • Liu, Tang, and Eysenbach (2024) → Shows emergent skill learning purely from generative contrastive objectives.
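
The iterated-supervised-learning recipe of Ghosh et al. (2020) is short enough to sketch directly: roll out the current goal-conditioned policy, relabel each visited state-action pair with goals the trajectory actually reached later, imitate the relabelled data, and repeat. The sketch below is a deliberately crude rendering: the “policy” is a count table over an invented toy 1-D chain rather than a neural network, so treat it as pseudocode that happens to run.

```python
import random
from collections import defaultdict

random.seed(0)
# Toy 1-D chain environment: states 0..N, actions -1/+1, fixed-length episodes.
N, HORIZON, EPISODES, ROUNDS = 10, 8, 200, 5

def rollout(policy, goal):
    """Roll out the goal-conditioned policy; return the (state, action) trace."""
    s, trace = 0, []
    for _ in range(HORIZON):
        a = policy(s, goal)
        trace.append((s, a))
        s = min(N, max(0, s + a))
    return trace

# Goal-conditioned policy as a count table: counts[(state, goal)][action].
counts = defaultdict(lambda: defaultdict(int))

def policy(s, g):
    acts = counts[(s, g)]
    if not acts:                        # untrained: act randomly
        return random.choice([-1, 1])
    return max(acts, key=acts.get)      # most frequently "cloned" action

for _ in range(ROUNDS):
    relabelled = []
    for _ in range(EPISODES):
        goal = random.randint(0, N)
        trace = rollout(policy, goal)
        # Hindsight relabelling: every later state is a goal this trajectory
        # demonstrably reaches, so treat the action as a demonstration for it.
        for t, (s, a) in enumerate(trace):
            for s_future, _ in trace[t + 1:]:
                relabelled.append((s, s_future, a))
    for s, g, a in relabelled:          # "supervised learning" = counting here
        counts[(s, g)][a] += 1

print(policy(0, 7))  # after training, should usually step toward the goal (+1)
```

The generative reading is that hindsight relabelling manufactures its own supervision: the agent’s behaviour generates the (state, goal, action) dataset it then imitates.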

5 Self-Generating Feedback and Exploration

  • Klyubin, Polani, and Nehaniv (2005) → Origin of intrinsic-motivation-as-generative-exploration; connects us to empowerment.
  • Eysenbach et al. (2018), DIAYN (Diversity Is All You Need) → Canonical “self-generative” skill-discovery paper using MI-based objectives; the intrinsic reward is sketched after this list.
  • Sikchi et al. (2025) → Defines RLZero: language-conditioned generative action sets and zero-shot policy generation.
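
The MI-based objective behind DIAYN reduces, per step, to the pseudo-reward \(\log q_\phi(z\mid s)-\log p(z)\): sample a skill \(z\), let a discriminator \(q_\phi\) guess the skill from the visited state, and pay the policy for being guessable. In the sketch below the discriminator is an untrained random linear map, purely so the snippet runs; in the paper it is a trained classifier and the policy is optimized on this reward with an entropy-regularized RL algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SKILLS, STATE_DIM = 8, 4
log_p_z = -np.log(N_SKILLS)             # uniform prior over skills

# Placeholder discriminator q_phi(z | s): log-softmax over a random linear map.
# In DIAYN this network is trained to classify which skill produced the state.
W = rng.normal(size=(STATE_DIM, N_SKILLS))

def discriminator_log_probs(state):
    logits = state @ W
    logits -= logits.max()              # numerical stability
    return logits - np.log(np.exp(logits).sum())

def diayn_reward(state, z):
    """Intrinsic reward: log q_phi(z | s) - log p(z)."""
    return discriminator_log_probs(state)[z] - log_p_z

z = rng.integers(N_SKILLS)              # sample a skill for this episode
state = rng.normal(size=STATE_DIM)      # stand-in for an environment state
print(diayn_reward(state, z))
```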

6 Incoming

Abstract: This tutorial explores the intersection of generative AI and reinforcement learning, demonstrating how generative models can be understood as RL agents and environments, and conversely, how RL can be viewed as generative modeling. It aims to bridge the gap between these fields, showing how insights from each can enhance the other. The workshop will cover topics such as reinterpreting generative AI training through an RL lens, adapting generative AI to build new RL algorithms, and understanding how AI agents interacting with tools and humans create a new generative model. It will also discuss future directions and open problems, focusing on how RL can shape the future of foundation model training and enable generative AI systems to construct their own knowledge.

Xu et al. (2025):

With advances in generative AI, decision-making agents can now dynamically create new actions during online learning, but action generation typically incurs costs that must be balanced against potential benefits. We study an online learning problem where an agent can generate new actions at any time step by paying a one-time cost, with these actions becoming permanently available for future use. The challenge lies in learning the optimal sequence of two-fold decisions: which action to take and when to generate new ones, further complicated by the triangular tradeoffs among exploitation, exploration and \(\textit{creation}\). To solve this problem, we propose a doubly-optimistic algorithm that employs Lower Confidence Bounds (LCB) for action selection and Upper Confidence Bounds (UCB) for action generation. Empirical evaluation on healthcare question-answering datasets demonstrates that our approach achieves favorable generation-quality tradeoffs compared to baseline strategies. From theoretical perspectives, we prove that our algorithm achieves the optimal regret of \(O(T^{\frac{d}{d+2}}d^{\frac{d}{d+2}} + d\sqrt{T\log T})\), providing the first sublinear regret bound for online learning with expanding action spaces.
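
As a reading aid only, here is a toy rendering of the decision rule the abstract describes: pessimistic (LCB) value estimates for choosing among already-generated actions, an optimistic (UCB) estimate for the value of generating a new one, and generation whenever optimism net of the one-time cost beats the best pessimistic estimate. The cost, the prior mean, and the way the generation bonus shrinks with the pool size are all invented for illustration; this is not the authors’ algorithm, which should be read from the paper.

```python
import math
import random

random.seed(0)
HORIZON = 2000
GEN_COST = 0.3        # one-time cost of generating a new action (invented value)
PRIOR_MEAN = 0.5      # assumed prior mean reward of a freshly generated action

arms = []             # each generated action: {"mu": true mean, "n": pulls, "sum": rewards}

def lcb(arm, t):
    """Pessimistic value estimate for an existing action."""
    bonus = math.sqrt(2.0 * math.log(t + 1) / arm["n"])
    return arm["sum"] / arm["n"] - bonus

def generation_ucb(t):
    """Optimistic value of generating one more action; optimism shrinks
    as the pool of generated actions grows (an invented heuristic)."""
    return PRIOR_MEAN + math.sqrt(2.0 * math.log(t + 1) / (len(arms) + 1))

for t in range(1, HORIZON + 1):
    best_lcb = max((lcb(a, t) for a in arms), default=-math.inf)
    if generation_ucb(t) - GEN_COST > best_lcb:
        # Optimism about creation, net of its cost, beats pessimism about
        # exploitation: pay the cost and add a permanently available action.
        arms.append({"mu": random.uniform(0.2, 0.9), "n": 0, "sum": 0.0})
        chosen = arms[-1]
    else:
        chosen = max(arms, key=lambda a: lcb(a, t))
    reward = chosen["mu"] + random.gauss(0.0, 0.1)
    chosen["n"] += 1
    chosen["sum"] += reward

print(f"generated {len(arms)} actions in {HORIZON} steps; "
      f"best empirical mean {max(a['sum'] / a['n'] for a in arms):.2f}")
```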

7 References

Ajay, Du, Gupta, et al. 2023. “Is Conditional Generative Modeling All You Need for Decision-Making?”
Blier, Tallec, and Ollivier. 2021. “Learning Successor States and Goal-Dependent Values: A Mathematical Viewpoint.”
Chen, Lu, Rajeswaran, et al. 2021. “Decision Transformer: Reinforcement Learning via Sequence Modeling.” In Advances in Neural Information Processing Systems.
Du, Kosoy, Dayan, et al. 2023. “What Can AI Learn from Human Exploration? Intrinsically-Motivated Humans and Agents in Open-World Exploration.”
Eysenbach, Gupta, Ibarz, et al. 2018. “Diversity Is All You Need: Learning Skills Without a Reward Function.”
Eysenbach, Salakhutdinov, and Levine. 2021. “The Information Geometry of Unsupervised Reinforcement Learning.”
Eysenbach, Zhang, Levine, et al. 2022. “Contrastive Learning as Goal-Conditioned Reinforcement Learning.” In NeurIPS.
Ghosh, Gupta, Reddy, et al. 2020. “Learning to Reach Goals via Iterated Supervised Learning.”
Janner, Du, Tenenbaum, et al. 2022. “Planning with Diffusion for Flexible Behavior Synthesis.” In Proceedings of the 39th International Conference on Machine Learning.
Klyubin, Polani, and Nehaniv. 2005. “All Else Being Equal Be Empowered.” In Advances in Artificial Life.
Liu, Tang, and Eysenbach. 2024. “A Single Goal Is All You Need: Skills and Exploration Emerge from Contrastive RL Without Rewards, Demonstrations, or Subgoals.” In International Conference on Learning Representations.
Lu, Chen, Chen, et al. 2023. “Contrastive Energy Prediction for Exact Energy-Guided Diffusion Sampling in Offline Reinforcement Learning.”
Mao, Xu, Zhan, et al. 2024. “Diffusion-DICE: In-Sample Diffusion Guidance for Offline Reinforcement Learning.” In NeurIPS.
Nachum and Dai. 2020. “Reinforcement Learning via Fenchel-Rockafellar Duality.”
Ringstrom. 2022. “Reward Is Not Necessary: How to Create a Compositional Self-Preserving Agent for Life-Long Learning.”
Schmidhuber. 2010. “Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010).” IEEE Transactions on Autonomous Mental Development.
Sikchi, Agarwal, Jajoo, et al. 2025. “RLZero: Direct Policy Inference from Language Without In-Domain Supervision.” In ICLR.
Steinberg, Oliveira, Ong, et al. 2024. “Variational Search Distributions.”
Xu, Jain, Wilder, et al. 2025. “Online Decision Making with Generative Action Sets.”