Learning to act in generative settings
On the formal theory of choosing to do that which you’ve never seen done before
2025-10-06 — 2025-10-06
Wherein two paradigms are surveyed, optimizing agents and replicating persisters are contrasted, intrinsic drives such as curiosity and empowerment are proposed as bridges, and open‑ended generators like POET are described.
Learning to act in generative action sets: the theory of learning to act in an open world with generative models.
1 Generative Models as RL Policies
- Janner et al. (2022) → First work treating diffusion models as control policies; inspired “Diffusion-DICE”.
- Chen et al. (2021) → Pioneered autoregressive policy generation conditioned on rewards and trajectories.
- Lu et al. (2023) → Introduced energy-based control of diffusion policies (bridge to goal-conditioned RL).
- Mao et al. (2024) → The canonical “generative action set” model: combines denoising diffusion with dual-form RL.
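To make the "generative model as policy" idea concrete, here is a minimal sketch of drawing an action by reverse diffusion conditioned on the current state, in the spirit of the diffusion-policy line above. The denoiser `eps_net`, the noise schedule, and every dimension are illustrative placeholders, not any paper's actual architecture.

```python
# Minimal sketch (not a specific paper's implementation): a diffusion model used
# as a policy. An action is sampled by starting from Gaussian noise and running
# a DDPM-style reverse process conditioned on the state. `eps_net` and all
# hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, T = 8, 2, 50

# Linear noise schedule; alpha_bars[t] is the cumulative product of (1 - beta).
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Predicts the noise that was added to a clean action, given (noisy action, state, step).
eps_net = nn.Sequential(
    nn.Linear(ACTION_DIM + STATE_DIM + 1, 128), nn.ReLU(),
    nn.Linear(128, ACTION_DIM),
)

@torch.no_grad()
def sample_action(state: torch.Tensor) -> torch.Tensor:
    """Reverse diffusion: iteratively denoise pure noise into an action for `state`."""
    a = torch.randn(ACTION_DIM)
    for t in reversed(range(T)):
        t_emb = torch.tensor([t / T])
        eps = eps_net(torch.cat([a, state, t_emb]))
        # DDPM posterior-mean update for one reverse step.
        a = (a - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            a = a + torch.sqrt(betas[t]) * torch.randn(ACTION_DIM)
    return a

action = sample_action(torch.zeros(STATE_DIM))
```

Training would typically fit `eps_net` with the standard noise-prediction objective on dataset actions; the RL-specific ingredient is how reward enters, e.g. through conditioning, guidance, or a dual objective as in the papers above.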
2 Generative Models as World Models
Should look that up too
3 Dual & Contrastive Formulations
- *Nachum and Dai (2020) → Mathematical foundation for the DICE family of dual generative RL objectives.
- *Sikchi et al. (2025) → Unifies DICE and generative imitation through f-divergence matching.
- Eysenbach et al. (2022) → Connects InfoNCE and successor features — key to generative goal-reaching.
- Blier, Tallec, and Ollivier (2021) → Formal basis for contrastive successor measures and generative planning.
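As a concrete anchor for the contrastive side, here is a minimal InfoNCE-style goal-reaching critic in the spirit of Eysenbach et al. (2022): a state-action embedding is scored against a goal embedding, with the goal actually reached on the same trajectory as the positive and the other goals in the batch as negatives. Networks, dimensions, and batch construction are illustrative assumptions.

```python
# Minimal sketch of an InfoNCE-style contrastive critic for goal-reaching.
# Names, sizes, and the batch construction are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, GOAL_DIM, REPR_DIM = 8, 2, 8, 16

phi = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(), nn.Linear(64, REPR_DIM))
psi = nn.Sequential(nn.Linear(GOAL_DIM, 64), nn.ReLU(), nn.Linear(64, REPR_DIM))

def infonce_loss(states, actions, goals):
    """Critic f(s, a, g) = phi(s, a)^T psi(g); each (s_i, a_i) is paired with the
    goal g_i reached later on its own trajectory, and the other goals in the
    batch serve as negatives."""
    sa = phi(torch.cat([states, actions], dim=-1))   # (B, REPR_DIM)
    g = psi(goals)                                   # (B, REPR_DIM)
    logits = sa @ g.T                                # (B, B) similarity matrix
    labels = torch.arange(len(states))               # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

B = 32
loss = infonce_loss(torch.randn(B, STATE_DIM), torch.randn(B, ACTION_DIM), torch.randn(B, GOAL_DIM))
loss.backward()
```

Roughly, the learned score relates to how likely g is to be reached from (s, a), which is the link to the successor-measure view noted above.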
4 Goal-Conditioned and Generative Action RL
- Eysenbach, Salakhutdinov, and Levine (2021) → Reframes unsupervised skill discovery as a generative manifold problem.
- Ghosh et al. (2020) → Empirical foundations for goal-conditioned generative imitation.
- Liu, Tang, and Eysenbach (2024) → Shows emergent skill learning purely from generative contrastive objectives.
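A minimal sketch of hindsight-relabelled, goal-conditioned imitation in the spirit of Ghosh et al. (2020): states visited later in a trajectory are relabelled as goals, and the policy regresses the actions that led to them. The MLP policy, squared-error loss, and relabelling details here are simplifying assumptions, not the exact recipe from the paper.

```python
# Minimal sketch of hindsight-relabelled goal-conditioned imitation.
# Architecture, loss, and relabelling scheme are illustrative.
import random
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2

policy = nn.Sequential(nn.Linear(2 * STATE_DIM, 64), nn.ReLU(), nn.Linear(64, ACTION_DIM))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

def relabel_and_train(trajectory):
    """trajectory: list of (state, action) pairs. A state visited later in the
    same trajectory is treated as the goal, and the action actually taken is
    regressed as the supervised target."""
    states, actions = zip(*trajectory)
    losses = []
    for t in range(len(trajectory) - 1):
        k = random.randrange(t + 1, len(trajectory))      # hindsight goal index
        goal = states[k]
        pred = policy(torch.cat([states[t], goal]))
        losses.append(((pred - actions[t]) ** 2).mean())  # continuous-action BC loss
    loss = torch.stack(losses).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

traj = [(torch.randn(STATE_DIM), torch.randn(ACTION_DIM)) for _ in range(10)]
relabel_and_train(traj)
```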
5 Self-Generating Feedback and Exploration
- *Klyubin, Polani, and Nehaniv (2005) → Introduces empowerment; origin of intrinsic motivation as generative exploration.
- Eysenbach et al. (2018), DIAYN (Diversity Is All You Need) → Canonical “self-generative” skill-discovery paper using MI-based objectives.
- *Sikchi et al. (2025) → Defines RLZero: language-conditioned generative action sets and zero-shot policy generation.
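A toy sketch of the MI-based objective behind DIAYN-style skill discovery: a discriminator tries to infer the active skill from the visited state, and log q(z|s) - log p(z) serves as the intrinsic reward. Network sizes are illustrative, and the actor update that maximizes this reward is omitted.

```python
# Toy DIAYN-style intrinsic reward: a discriminator infers the active skill
# from the state; log-prob minus the skill prior is the reward. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, NUM_SKILLS = 8, 4
discriminator = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_SKILLS))
disc_opt = torch.optim.Adam(discriminator.parameters(), lr=3e-4)
log_p_z = torch.log(torch.tensor(1.0 / NUM_SKILLS))   # uniform skill prior

def intrinsic_reward(state, skill):
    """r(s, z) = log q(z | s) - log p(z); higher when the skill is identifiable from s."""
    with torch.no_grad():
        log_q = F.log_softmax(discriminator(state), dim=-1)[skill]
    return (log_q - log_p_z).item()

def update_discriminator(states, skills):
    """Standard classification update on (state, skill) pairs gathered from rollouts."""
    loss = F.cross_entropy(discriminator(states), skills)
    disc_opt.zero_grad(); loss.backward(); disc_opt.step()
    return loss.item()

r = intrinsic_reward(torch.randn(STATE_DIM), skill=2)
update_discriminator(torch.randn(32, STATE_DIM), torch.randint(0, NUM_SKILLS, (32,)))
```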
6 Incoming
Abstract: This tutorial explores the intersection of generative AI and reinforcement learning, demonstrating how generative models can be understood as RL agents and environments, and conversely, how RL can be viewed as generative modeling. It aims to bridge the gap between these fields, showing how insights from each can enhance the other. The tutorial will cover topics such as reinterpreting generative AI training through an RL lens, adapting generative AI to build new RL algorithms, and understanding how AI agents interacting with tools and humans create a new generative model. It will also discuss future directions and open problems, focusing on how RL can shape the future of foundation model training and enable generative AI systems to construct their own knowledge.
Xu et al. (2025):
With advances in generative AI, decision-making agents can now dynamically create new actions during online learning, but action generation typically incurs costs that must be balanced against potential benefits. We study an online learning problem where an agent can generate new actions at any time step by paying a one-time cost, with these actions becoming permanently available for future use. The challenge lies in learning the optimal sequence of two-fold decisions: which action to take and when to generate new ones, further complicated by the triangular tradeoffs among exploitation, exploration and \(\textit{creation}\). To solve this problem, we propose a doubly-optimistic algorithm that employs Lower Confidence Bounds (LCB) for action selection and Upper Confidence Bounds (UCB) for action generation. Empirical evaluation on healthcare question-answering datasets demonstrates that our approach achieves favorable generation-quality tradeoffs compared to baseline strategies. From theoretical perspectives, we prove that our algorithm achieves the optimal regret of \(O(T^{\frac{d}{d+2}}d^{\frac{d}{d+2}} + d\sqrt{T\log T})\), providing the first sublinear regret bound for online learning with expanding action spaces.
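As a rough illustration only (this is emphatically not the algorithm of Xu et al., whose LCB/UCB construction and regret analysis live in the paper), here is one speculative reading of the exploitation/exploration/creation trade-off: existing actions are ranked pessimistically, and a new action is generated when an optimistic, cost-adjusted estimate of what a fresh action might be worth still beats the best of what we already have. The reward model, confidence terms, and constants below are invented.

```python
# Speculative, heavily simplified sketch of a doubly-optimistic loop for online
# learning with a generable action set. NOT the algorithm of Xu et al. (2025);
# the reward model, confidence terms, and constants are invented for illustration.
import math
import random

GEN_COST = 0.3     # one-time cost paid when a new action is created
HORIZON = 1000

def draw_reward(quality: float) -> float:
    """Toy environment: noisy reward around an action's latent quality."""
    return quality + random.gauss(0.0, 0.1)

qualities, counts, means = [], [], []   # per-action latent quality / pulls / estimate

def lcb(i: int, t: int) -> float:
    """Pessimistic score for an existing action: empirical mean minus a confidence width."""
    return means[i] - math.sqrt(math.log(t + 1) / (2 * counts[i]))

total_reward = 0.0
for t in range(1, HORIZON + 1):
    n_gen = len(qualities)
    # Optimistic (UCB-style) value of creating one more action: average estimated
    # value of the actions created so far, plus a bonus that shrinks with the number
    # of creations and grows slowly with time (occasional re-exploration of creation).
    gen_mean = sum(means) / n_gen if n_gen else 0.0
    gen_ucb = gen_mean + math.sqrt(math.log(t + 1) / (n_gen + 1))

    if n_gen == 0 or gen_ucb - GEN_COST > max(means):
        # Create a new action, pay the cost, and take one draw to seed its estimate.
        qualities.append(random.uniform(0.0, 1.0))
        r = draw_reward(qualities[-1])
        counts.append(1)
        means.append(r)
        total_reward += r - GEN_COST
    else:
        # Pessimistic (LCB) selection among the actions generated so far.
        i = max(range(n_gen), key=lambda j: lcb(j, t))
        r = draw_reward(qualities[i])
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]
        total_reward += r

print(f"actions generated: {len(qualities)}, total reward: {total_reward:.1f}")
```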