Economics of cognitive and labour automation
Practicalities of competition between ordinary schlubs, and machines which can tirelessly collage all of history’s greatest geniuses at once
2021-09-20 — 2026-04-04
Wherein the automation of cognitive labour by large language models is surveyed, and a consequent shift in organisational incentives from open knowledge toward the private sequestration of resources is noted.
The economics of automation, applied to AI.
I am actively researching this topic at the moment and accordingly these notes are absolute chaos.
WIP.
1 At the scale of the economy
There are many models. Here’s one I’m sceptical of:
Erusian and Doug Summers-Stay, Will Automation Lead to Economic Crisis?
tl;dr: Until automation eliminates jobs faster than new ones can be created, AI shouldn’t be expected to cause mass unemployment or anything like that. When AI can pick up a new job as quickly and cheaply as a person can, then the economy will break (but everything else will break too, because that would be the Singularity).
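A toy difference-equation sketch of that claim (my own construction, not the authors’ model): treat the stock of human-performable tasks as something automation drains and innovation refills. As long as refill keeps pace, the stock settles at a steady level; if AI picks up every new task as it appears, the stock decays toward zero.

```python
# Toy illustration, not the linked model: the human task stock under two
# flows. Automation removes a fraction each year; innovation adds new tasks.
def simulate(years, automation_rate, creation_rate, tasks=100.0):
    """Return the yearly trajectory of human-performable tasks."""
    history = []
    for _ in range(years):
        tasks = tasks * (1 - automation_rate) + creation_rate
        history.append(tasks)
    return history

# Automation at 5%/yr with 5 new human tasks/yr: the stock settles at the
# fixed point creation_rate / automation_rate = 5 / 0.05 = 100.
steady = simulate(200, 0.05, 5.0)[-1]

# If AI absorbs new tasks as fast as they appear (creation_rate -> 0 in
# human terms), the human task stock decays geometrically toward zero.
collapse = simulate(200, 0.05, 0.0)[-1]
```

All the action is in whether `creation_rate` (for humans) stays positive, which is exactly the question the post argues about.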
2 Collective intelligence and the epistemic commons
How do foundation models change the economics of knowledge production, incentives for open publishing, and the long-run stock of collective knowledge? This has become a big enough topic to warrant its own page: Knowledge collapse and the epistemic commons.
3 Returns to scale for frontier model developers
DeepSeek is a Chinese company rather than a community, but they seem to be changing the game in terms of cost, accessibility, and openness of AI models. TBD.
The leaked Google memo “We Have No Moat, And Neither Does OpenAI” argues that open-source models are eroding any durable competitive advantage held by the big labs, which in turn raises doubts about whether frontier LLM development earns an adequate return on capital.
4 Organizational behaviour
How LLMs shift organizational incentives from skill development toward resource control and gatekeeping — career moats, knowledge privatization, and the consequent erosion of open documentation. See knowledge collapse and organizational knowledge dynamics.
5 Darwin-Pareto-Turing test
We devote an astonishing amount of effort to wondering whether AI is conscious, has an inner state, or what-have-you. It’s clearly fun and exciting.
It doesn’t feel terribly useful. I am convinced that I have whatever we mean when we say conscious experience. Good on me, I suppose.
But out there in the world, the distinction between anthropos and algorithm isn’t made with the philosopher’s subtle microscope but by the market’s blind, groping hand. If an algorithm performs as much work as I do, it’s as valuable as I am; we’re interchangeable, distinguished only by the surplus our labour generates.
Zooming out, Darwinian selection may not care either. Do a rich inner world and a sensitive aesthetic help us reproduce? It seems they might have for humans, but it’s unclear to me that a machine’s reproductive fitness will involve bonding over twee indie-pop music.
6 Empirical frontier models — cash-money costs
How to estimate the cost and ROI of running a large language model to do something, as opposed to humans.
Xexéo et al. (2024) present a model of optimal outsourcing.
Estimating costs like this is hard in general.
- DeepSeek V3 and the cost of frontier AI models
- DeepSeek FAQ – Stratechery by Ben Thompson
- Observations About LLM Inference Pricing | MIGRI TGT
- AI Model & API Providers Analysis | Artificial Analysis
- Data on the Trajectory of AI | Epoch AI Database | Epoch AI
- Algorithmic Progress in Language Models | Epoch AI
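A back-of-envelope sketch of the LLM-versus-human cost comparison (all prices and the task size are illustrative assumptions of mine, not quoted rates from any provider):

```python
# BOTEC for one cognitive task, e.g. drafting a report. Per-million-token
# prices and the hourly wage below are assumed for illustration only.
def llm_cost(input_tokens, output_tokens,
             usd_per_m_input=3.0, usd_per_m_output=15.0):
    """API cost in USD at assumed per-million-token rates."""
    return (input_tokens / 1e6 * usd_per_m_input
            + output_tokens / 1e6 * usd_per_m_output)

def human_cost(hours, hourly_wage=50.0):
    """Loaded labour cost in USD at an assumed wage."""
    return hours * hourly_wage

# A 10k-token-in / 5k-token-out job versus two hours of human time:
machine = llm_cost(10_000, 5_000)   # 0.03 + 0.075 = $0.105
person = human_cost(2.0)            # $100
ratio = person / machine            # roughly 950x cheaper per attempt
```

The raw per-attempt ratio flatters the machine: a real ROI estimate has to price in retries, verification, and the human time spent checking the output, which is where most of the links above get interesting.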
7 What should I spend my time on?
The economics of production at a microscopic, individual scale. What should I do now?
For the embodiment-and-political-incumbency angle on what kinds of human work resist automation in the short-medium term, see An orderly retreat from economic relevance.
GPT and the Economics of Cognitively Costly Writing Tasks
To analyse the effect of GPT-4 on labour efficiency and the optimal mix of capital to labour for workers who are good at using GPT versus those who aren’t when it comes to performing cognitively costly tasks, we’ll consider the Goldin and Katz modified Cobb-Douglas production function…
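A minimal sketch of that setup (the functional form and every parameter value here are my own assumptions, not the linked paper’s calibration): model GPT fluency as a labour-augmenting multiplier theta on the workers who have it, inside an ordinary Cobb-Douglas.

```python
# Cobb-Douglas with GPT-augmented labour:
#   Y = A * K^alpha * (theta * L_gpt)^beta * L_plain^(1 - alpha - beta)
# theta is the productivity multiplier GPT fluency confers; all parameter
# values are illustrative assumptions.
def output(K, L_gpt, L_plain, theta=2.0, A=1.0, alpha=0.3, beta=0.4):
    gamma = 1 - alpha - beta
    return A * K**alpha * (theta * L_gpt)**beta * L_plain**gamma

base = output(K=10, L_gpt=5, L_plain=5, theta=1.0)  # no augmentation
aug = output(K=10, L_gpt=5, L_plain=5, theta=2.0)   # GPT doubles effective skilled labour

# Output rises by the factor theta^beta = 2^0.4, about 1.32, and the
# GPT-fluent workers' marginal product (hence wage) rises relative to
# everyone else's: skill-biased technical change in one line.
```

Under this form, doubling theta raises output by theta to the power beta, so the aggregate gain is muted even when the individual augmentation is large; the distributional effect between the two worker types is the bigger story.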
Is it time for the Revenge of the Normies? - by Noah Smith
For the Matt Might PhD diagram thought experiment (what happens to the boundary of knowledge when LLMs fill in the interior?) see AI and the content of human knowledge.
8 Spamularity, dark forest, textpocalypse
See Spamularity.
9 Abstract economics of cognition in general
For computation as cognition (not just human automation), see economics of cognition.
10 Economic disparity and foundation models
- Ted Chiang, Will A.I. Become the New McKinsey? looks at LLMs through Piketty’s lens: increasing returns to capital versus returns to labour
11 “Snowmobile or bicycle?”
Is the AI we have a complementary technology or a competitive one? This idea came up in a conversation with Richard Scalzo about Smith (2022).
For the knowledge-production dimension of this question — including Acemoglu, Kong, and Ozdaglar (2026)’s formal result that welfare is non-monotone in AI accuracy — see knowledge collapse and the epistemic commons.
12 Democratisation of AI
13 The Stanford Economics of Transformative AI course
It’s an interesting project. The materials were produced by Phil Trammell and Zach Mazlish, unless otherwise noted, for a two-week summer program hosted at the Stanford Digital Economy Lab, August 16–29, 2025.
Economics of Transformative AI course materials are available here.
I have reproduced their coursework below for easier cross-reference. All work in this section is based on their materials.
Exercises accompanying some of the lectures may be found here.
13.1 Review of relevant economics
13.2 Growth
13.2.1 Task-based models: theory
13.3 The productivity J-curve (Erik Brynjolfsson)
- Brynjolfsson, Rock, and Syverson (2021)
- Brynjolfsson, Chandar, Halperin, and Trammell (in progress)
13.4 Task-based models: evidence
Slides (a, b [Arjun Ramani]), Overleaf, Recording
- Acemoglu (2025), Aghion and Bunel (2024)
- Humlum and Vestergaard (2024)
- Levine (2025 a,b)
- Brynjolfsson, Halperin, and Ramani (in progress)
13.4.1 Task-based models: selected research
Slides (a [Tomas Aguirre], b [Bharat Chandar]), Recording
- Aguirre and Manning (in progress)
- Brynjolfsson, Chandar, and Chen (2025)
13.4.2 Automating production, homogeneous output
13.4.3 Automating production, heterogeneous output
- Bessen (2018)
- Nordhaus (2021)
- Trammell (in progress)
13.4.4 Automating research: basics
- Sotala (2012)
- Aghion et al. (2019)
- Agrawal et al. (2019)
- Besiroglu, Emery-Xu, and Thompson (2024)
- Eth and Davidson (2025), Davidson and Houlden (2025)
13.4.5 Automating research: bottlenecks
13.4.6 Full automation and the Malthusian past
13.4.7 Full automation: BOTECs and bottlenecks
- Davidson and Hadshar (2025)
- Trammell (2025a,b)
13.5 Scaling, finance, risk
13.5.1 Scaling laws: basics
Slides, Overleaf, Recording
13.5.2 Scaling laws: growth models
Slides (Anson Ho), Recording (12), Recording (13)
13.5.3 TAI and finance
13.5.4 AI safety
Slides (a, b [Max Reith], c [Eric Chen and Sami Petersen], d), Overleaf (a, d), Recording
- Reith (in progress)
- Thornley (2024, 2025), Thornley et al. (2025)
- Chen et al. (2024)
- Trammell (2024)
13.5.6 Existential risk and growth
13.5.7 AI governance
Slides (a, b), Overleaf (a, b), Recording
- Armstrong et al. (2015), Trager, Dafoe, Jensen, and Emery-Xu (various)
- Acemoglu and Lensman (2024), Gans (2024), Koh and Sanguanmoo (2024)
13.5.8 Choosing our future
- MacAskill (2025)
- Trammell (in progress)
- Assadi (2023), Ely and Szentes (2024)
- Finnveden et al. (2022)
14 Incoming
Anton Korinek’s many articles (Korinek 2023, 2024; Korinek and Suh 2024; Korinek and Vipra 2025; Trammell and Korinek 2023; Korinek and Stiglitz 2025), not to mention his NBER Economics of Transformative AI Workshop.
There’s a funny thing where economics education paradoxically makes people DUMBER at thinking about future AI. Econ textbooks teach concepts & frames that are great for most things, but counterproductive for thinking about AGI. Here are 4 examples.
What You Really Mean When You Claim to Support “UBI for Job Automation”: Part 1
Ten Hard Problems in and around AI
We finally published our big 90-page intro to AI. Its likely effects, from ten perspectives, ten camps. The whole gamut: ML, scientific applications, social applications, access, safety and alignment, economics, AI ethics, governance, and classical philosophy of life.
Conference Summary: Threshold 2030 - Modelling AI Economic Futures - Convergence Analysis
Predistribution over Redistribution — The Collective Intelligence Project
Maxwell Tabarrok, AGI Will Not Make Labour Worthless is a good summary of the arguments that AGI is more a quantitative than a qualitative change compared with what came before. I would prefer it phrased as “AGI is less likely to abolish the value of human labour than we previously thought” rather than a blanket statement, but YMMV.
Tom Stafford on ChatGPT as an Ouija board
Gradient Dissent, a list of reasons why large backpropagation-trained networks might be worrisome. There are some interesting points and some hyperbole. Also: If externalities did come from backprop networks (i.e. they’re a kind of methodological pollution that produces private benefits but public costs), what mechanisms should disincentivize them?
In this post, we evaluate whether major foundation model providers currently comply with these draft requirements and find that they largely do not. Foundation model providers rarely disclose adequate information regarding the data, compute, and deployment of their models as well as the key characteristics of the models themselves. In particular, foundation model providers generally do not comply with draft requirements to describe the use of copyrighted training data, the hardware used and emissions produced in training, and how they evaluate and test models. As a result, we recommend that policymakers prioritise transparency, informed by the AI Act’s requirements. Our assessment demonstrates that it is currently feasible for foundation model providers to comply with the AI Act, and that disclosure related to foundation models’ development, use, and performance would improve transparency in the entire ecosystem.
I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale
Invasive Diffusion: How one unwilling illustrator found herself turned into an AI model
Hi, I’m Olivia Squizzle, and I’m gonna replace AI – Pivot to AI
Bruce Schneier — On the Need for an AI Public Option
I Went to the Premiere of the First Commercially Streaming AI-Generated Movies
Lower AI Costs Will Drive Innovation, Efficiency, and Adoption
Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous (I’m not convinced, tbh.)
Anthropic Economic Index: Insights from Claude 3.7 Sonnet — Anthropic
Measuring the performance of our models on real-world tasks | OpenAI
The Final Frontiers - Gert van Vugt / AI Job Displacement Explorer
I’m skeptical that simply slotting AI into human-shaped jobs will have the results people seem to expect. The history of technology, even exceptionally powerful general-purpose technology, tells us that as long as you are trying to fit capital into labor-shaped holes you will find yourself confronted by endless frictions: just as with electricity, the productivity inherent in any technology is unleashed only when you figure out how to organize work around it, rather than slotting it into what already exists. We are still very much in the regime of slotting it in. And as long as we are in that regime, I expect disappointing productivity gains and relatively little real displacement.
The real productivity gains from AI—and the real threat of labor displacement—will come not from the “drop-in remote worker,” but from something like Dwarkesh Patel’s vision of the fully-automated firm. At some point in the life of every technology, old workflows are replaced by new ones, and we discover the paradigms in which the full productive force of a technology can best be expressed. In the past this has simply been a fact of managerial turnover or depreciation cycles. But with AI it will likely be the sheer power of the technology itself, which really is wholly unlike anything that has come before, and unlike electricity or the steam engine will eventually be able to build the structures that harness its powers by itself.


