Building AI Agents

2025-02-02 — 2025-06-07

Wherein the emergence of multi‑agent scaffolds and factored cognition is presented, and frameworks for agent interoperability, including Agent Laboratory and the Agent2Agent protocol, are surveyed.

AI safety
computers are awful together
faster pussycat
language
machine learning
neural nets
NLP
premature optimization
technology
UI

Placeholder while I mull over the practicalities and theory of AI agents.

See also Multi-agent systems.

1 Factored cognition

Is factored cognition a field of study, or a company’s marketing term?

Reference: Factored Cognition | Ought.

In this project, we explore whether we can solve difficult problems by composing small and mostly context-free contributions from individual agents who don’t know the big picture.
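To make the idea concrete, here is a minimal one-level sketch of that recipe: one agent decomposes the problem, sub-agents answer their fragments without seeing the big picture, and a final agent recomposes the pieces. Everything below is illustrative rather than drawn from Ought's implementation; `llm` is a hypothetical stand-in for whatever model call you use, and the prompts are placeholders.

```python
"""Sketch of factored cognition: decompose, answer context-free, recompose.

Assumptions: `llm` is a placeholder for any language-model call; prompts are
illustrative only, not Ought's actual prompts.
"""

from typing import List


def llm(prompt: str) -> str:
    """Hypothetical LLM call; wire this to your model client of choice."""
    raise NotImplementedError("replace with an actual model call")


def decompose(question: str) -> List[str]:
    """One agent splits the problem into self-contained sub-questions."""
    reply = llm(
        "Split the following question into independent sub-questions, "
        "one per line. Each must be answerable without seeing the others.\n\n"
        + question
    )
    return [line.strip() for line in reply.splitlines() if line.strip()]


def answer_subquestion(subquestion: str) -> str:
    """Each sub-agent sees only its own fragment, not the big picture."""
    return llm("Answer concisely:\n\n" + subquestion)


def compose(question: str, sub_answers: List[str]) -> str:
    """A final agent assembles the context-free contributions into one answer."""
    findings = "\n".join(f"- {a}" for a in sub_answers)
    return llm(
        f"Using only these partial findings:\n{findings}\n\n"
        f"Give a final answer to: {question}"
    )


def factored_answer(question: str) -> str:
    subs = decompose(question)
    return compose(question, [answer_subquestion(s) for s in subs])
```

The interesting design choice is that the sub-agents are stateless and context-free, so the hard part is whether the decomposition step can carve problems into genuinely independent pieces; recursive variants apply the same split to each sub-question.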

2 Incoming

3 References

Bengio, Cohen, Fornasiere, et al. 2025. “Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?”
Carey, Langlois, Merwijk, et al. 2025. “Incentives for Responsiveness, Instrumental Control and Impact.”
Chen, Dong, Shu, et al. 2023. “AutoAgents: A Framework for Automatic Agent Generation.”
Crutchfield, and Jurgens. 2025. “Agentic Information Theory: Ergodicity and Intrinsic Semantics of Information Processes.”
Everitt, Garbacea, Bellot, et al. 2025. “Evaluating the Goal-Directedness of Large Language Models.”
Guo, Chen, Wang, et al. 2024. “Large Language Model Based Multi-Agents: A Survey of Progress and Challenges.”
Hammond, Chan, Clifton, et al. 2025. “Multi-Agent Risks from Advanced AI.”
Hyland, Gavenčiak, Costa, et al. 2024. “Free-Energy Equilibria: Toward a Theory of Interactions Between Boundedly-Rational Agents.”
Kalai, and Lehrer. 1993. “Rational Learning Leads to Nash Equilibrium.” Econometrica.
Li, Al Kader Hammoud, Itani, et al. 2023. “CAMEL: Communicative Agents for ‘Mind’ Exploration of Large Language Model Society.” In Proceedings of the 37th International Conference on Neural Information Processing Systems. NIPS ’23.
Qu, Dai, Wei, et al. 2025. “Tool Learning with Large Language Models: A Survey.” Front. Comput. Sci.
Rosser, and Foerster. 2025. “AgentBreeder: Mitigating the AI Safety Impact of Multi-Agent Scaffolds via Self-Improvement.”
Schmidgall, Su, Wang, et al. 2025. “Agent Laboratory: Using LLM Agents as Research Assistants.”
Walters, Kaufmann, Sefas, et al. 2025. “Free Energy Risk Metrics for Systemically Safe AI: Gatekeeping Multi-Agent Study.”
Wu, Bansal, Zhang, et al. 2023. “AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation.”