Economics of foundation models
Microeconomics of compute
2023-03-23 — 2025-05-27
Wherein the economics of foundation models are examined and the disproportionate energy and water demands of large-scale training, including data‑centre cooling and emissions accounting, are described.
1 In epistemic communities and public discourse
Spamularity, dark forest, textpocalypse? See Spamularity.
2 PR, hype, marketing
George Hosu, in a short aside, highlights the incredible marketing advantage of AI:
People who failed to lift a finger to integrate better-than-doctors or work-with-doctors supervised medical models for half a century are stoked at a chatbot being as good as an average doctor and can’t wait to get it to triage patients
3 Democratisation of AI
4 Art and creativity
For now, see timeless works of art.
5 Data sovereignty
See data sovereignty.
6 AI tech soap opera
7 Material basis of AI compute?
Energy, water, minerals, etc. See material basis of AI economics.
8 Incoming
Brain Circulation: How High-Skill Immigration Makes Everyone Better Off
Ilya Sutskever: “Sequence to sequence learning with neural networks: what a decade”
Why Quora isn’t useful anymore: A.I. came for the best site on the internet.
Gradient Dissent, a list of reasons that large backpropagation-trained networks might be worrisome. Some interesting points in there, and some hyperbole. Also: If it were true that externalities come from backprop networks (i.e. that they are a kind of methodological pollution that produces private benefits but public costs) then what kind of mechanisms should disincentivise them?
In this post, we evaluate whether major foundation model providers currently comply with these draft requirements and find that they largely do not. Foundation model providers rarely disclose adequate information regarding the data, compute, and deployment of their models as well as the key characteristics of the models themselves. In particular, foundation model providers generally do not comply with draft requirements to describe the use of copyrighted training data, the hardware used and emissions produced in training, and how they evaluate and test models. As a result, we recommend that policymakers prioritise transparency, informed by the AI Act’s requirements. Our assessment demonstrates that it is currently feasible for foundation model providers to comply with the AI Act, and that disclosure related to foundation models’ development, use, and performance would improve transparency in the entire ecosystem.
I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale
Invasive Diffusion: How one unwilling illustrator found herself turned into an AI model
Bruce Schneier, On the Need for an AI Public Option
I Went to the Premiere of the First Commercially Streaming AI-Generated Movies
Lower AI Costs Will Drive Innovation, Efficiency, and Adoption
Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous (not convinced tbh)