Economics of cognitive and labour automation
Practicalities of competition between ordinary schlubs and machines that can tirelessly collage all of history’s greatest geniuses at once
2021-09-20 — 2025-12-09
Wherein the rise of foundation models is observed to compress collective knowledge, the incentive to privatise boilerplate toil into secret repositories is noted, and increasing returns to capital for frontier developers are sketched.
Economics of automation applied to AI.
I am actively researching this topic at the moment and accordingly these notes are absolute chaos.
WIP.
1 At the scale of the economy
There are many models. Here’s one that I’m sceptical of:
Erusian and Doug Summers-Stay, Will Automation Lead to Economic Crisis?
tl;dr: Unless the pace of automation outstrips the rate at which new jobs can be created, AI shouldn’t be expected to cause mass unemployment or anything like that. Once AI can pick up a new job as quickly and cheaply as a person can, the economy will break (but so will everything else, because that would be the Singularity).
2 Economics of collective intelligence
Well, it’s really terribly simple, […] it works any way you want it to. You see, the computer that runs it is a rather advanced one. In fact, it is more powerful than the sum total of all the computers on this planet including—and this is the tricky part— including itself.
— Douglas Adams, Dirk Gently’s Holistic Detective Agency
How do foundation models/large language models change the economics of knowledge and art production? To a first-order approximation (reasonable as of 03/2023), LLMs massively compress collective knowledge and synthesise the bits I need on demand. They are not yet primarily generating novel knowledge (whatever that means), but they do seem pretty good at being “nearly as smart as everyone on the internet combined”. I’m not sure that distinction is easy to delineate, though. As someone whose career has been built on interdisciplinary work, and who has frequently been asked to “synthesise” knowledge from different domains, I don’t find the line between “synthesising” and “generating” clear-cut; much of my publication track record is “merely” synthesising, and it was bitter, hard work.
2.1 Intellectual property and incentives
Using these models will test how much collective knowledge depends on our participation in boring, boilerplate grunt work, and what incentives are necessary to encourage us to produce and share our individual contributions.
Historically, there was a strong incentive for open publishing. In a world where LLMs effectively use all openly published knowledge, we might see a shift towards more closed publishing, secret knowledge, hidden data, and away from reproducible research, open-source software, and open data, since publishing those things will be more likely to erode our competitive advantage.
Generally, will we wish to share truth and science in the future, or will economic incentives switch us towards a fragmentation of reality into competing narratives, each with its own private knowledge and secret sauce?
Consider the incentives for humans to tap out of the tedious work of being themselves in favour of AI emulators: The people paid to train AI are outsourcing their work… to AI. This makes models worse (Shumailov et al. 2023). Read on for more.
We might ask: “Which bytes did you contribute to GPT4?”
3 Returns to scale for frontier model developers
DeepSeek is a Chinese company rather than a community, but they seem to be changing the game in terms of the cost, accessibility, and openness of AI models. TBD.
The leaked Google document “We Have No Moat, And Neither Does OpenAI” suggests that even the largest corporations worry they cannot defend a competitive advantage in LLMs, and hence cannot secure a sufficient return on the capital poured into them.
4 Organisational behaviour
There is a theory of career moats: unique value propositions that only you have, and that make you unsackable. I’m quite fond of Cedric Chin’s writing on this theme, which is often about developing valuable skills. But he, and the organisational literature generally, acknowledge that there are less pro-social ways of ensuring unsackability: attaining power over resources, becoming a gatekeeper, or keeping decision-making opaque, for example.
Both strategies co-exist in organisations, but I think it likely that LLMs, by automating skills and knowledge, tilt incentives towards the latter. In that scenario, if we want to demonstrate a value-add to the organisation, it is rational to worry less about how effectively we use our skills and command of open (e.g., scientific, technical) knowledge, and more about how to privatise or sequester secret knowledge that we alone control.
How would that shape an organisation, especially a scientific employer? Longer term, I’d expect to see a shift (both in who is promoted and in how staff spend their time) from skill development and collaboration towards resource control, competition, and privatisation: less scientific publication, less open documentation of processes, less time doing research and more time writing funding applications, more processes involving service-desk tickets to speak to an expert whose knowledge resides in documents we cannot see.
5 Darwin-Pareto-Turing test
We devote an astonishing amount of effort to wondering whether AI is conscious, has an inner state, or what-have-you. It’s clearly fun and exciting.
It doesn’t feel terribly useful. I am convinced that I have whatever we mean when we say conscious experience. Good on me, I suppose.
But out there in the world, the distinction between anthropos and algorithm isn’t made with the philosopher’s subtle microscope but by the market’s blind, groping hand. If an algorithm performs as much work as I do, it’s as valuable as I am; we’re interchangeable, distinguished only by the surplus our labour generates.
Zooming out, Darwinian selection may not care either. Do a rich inner world and a sensitive aesthetic help us reproduce? They may have done for humans, but it’s unclear to me that a machine’s reproductive fitness will involve bonding over twee indie-pop music.
6 Empirical frontier models — cash-money costs
How do we estimate the cost/ROI of running a large language model to do something?
Xexéo et al. (2024) is a model of optimal outsourcing.
Estimating costs like this seems hard in general.
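The raw arithmetic is the easy bit, though. Here is a minimal sketch of that arithmetic with made-up per-token prices and a made-up loaded hourly wage; every number is an illustrative assumption, not a quoted price from any provider.

```python
# Toy back-of-envelope comparison of LLM vs. human cost for one task.
# Every number here is an illustrative assumption, not a quoted price.

def llm_task_cost(prompt_tokens: int, completion_tokens: int,
                  price_in_per_mtok: float = 3.0,
                  price_out_per_mtok: float = 15.0) -> float:
    """Dollar cost of one task at assumed per-million-token API prices."""
    return (prompt_tokens * price_in_per_mtok
            + completion_tokens * price_out_per_mtok) / 1_000_000


def human_task_cost(minutes: float, hourly_cost: float = 60.0) -> float:
    """Dollar cost of a human doing the same task at an assumed loaded wage."""
    return hourly_cost * minutes / 60.0


if __name__ == "__main__":
    # Hypothetical task: drafting one report section.
    machine = llm_task_cost(prompt_tokens=4_000, completion_tokens=2_000)
    human = human_task_cost(minutes=45)
    print(f"LLM:   ${machine:.4f} per task")
    print(f"Human: ${human:.2f} per task")
    print(f"Naive cost ratio: {human / machine:,.0f}x")
    # The hard part is everything this ignores: prompt iteration, review
    # time, error rates, rework, latency, data risk, and who captures
    # the surplus.
```

The naive ratio comes out absurdly favourable to the machine, which is precisely why the interesting part of the estimation problem is the stuff the sketch ignores.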
- DeepSeek V3 and the cost of frontier AI models
- DeepSeek FAQ – Stratechery by Ben Thompson
- Observations About LLM Inference Pricing | MIGRI TGT
- AI Model & API Providers Analysis | Artificial Analysis
- Data on the Trajectory of AI | Epoch AI Database | Epoch AI
- Algorithmic Progress in Language Models | Epoch AI
7 What should I spend my time on
Economics of production at a microscopic, individual scale. What should I do now?
GPT and the Economics of Cognitively Costly Writing Tasks
To analyse the effect of GPT-4 on labour efficiency and the optimal mix of capital to labour for workers who are good at using GPT versus those who aren’t when it comes to performing cognitively costly tasks, we’ll consider the Goldin and Katz modified Cobb-Douglas production function…
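The post’s exact specification is elided above; for orientation only, here is the standard Cobb–Douglas baseline from which such analyses usually start (my sketch, not necessarily the Goldin and Katz variant the post uses).

```latex
% Standard Cobb-Douglas baseline; the Goldin-Katz modification in the
% linked post is elided above, so treat this as orientation only.
\[
  Y = A\,K^{\alpha} L^{1-\alpha}, \qquad 0 < \alpha < 1,
\]
% where $Y$ is output, $K$ capital, $L$ labour and $A$ total-factor
% productivity. One hypothetical way to model a "good at GPT" worker is
% to give them higher effective labour $\tilde{L} = \theta L$ with
% $\theta > 1$, which shifts the cost-minimising capital/labour mix.
```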
Is it time for the Revenge of the Normies? - by Noah Smith
Alternate take: Think of Matt Might’s iconic illustrated guide to a Ph.D.
Here’s my question: In the 2020s, does the map look like this?
If so, is it a problem?
8 Spamularity, dark forest, textpocalypse
See Spamularity.
9 Abstract economics of cognition in general
10 Economic disparity and foundation models
- Ted Chiang, Will A.I. Become the New McKinsey? looks at LLMs through Piketty’s lens: increasing returns to capital versus returns to labour
11 “Snowmobile or bicycle?”
This idea came up in conversation with Richard Scalzo about Smith (2022).
Is the AI we have a complementary technology or a competitive one?
This question looks different at the individual and societal scale.
For some early indications, see Microsoft Study Finds AI Makes Human Cognition “Atrophied and Unprepared” (Lee 2025). I have many qualms about the experimental question they’re actually answering there, but it’s a start.
TBC
12 Democratisation of AI
13 Incoming
Anton Korinek’s many articles (Korinek 2023, 2024; Korinek and Suh 2024; Korinek and Vipra 2025; Trammell and Korinek 2023; Korinek and Stiglitz 2025), not to mention his NBER Economics of Transformative AI Workshop
Economics of Transformative AI course materials, produced by Phil Trammell and Zach Mazlish, unless otherwise noted, for a two-week summer program hosted at the Stanford Digital Economy Lab, August 16–29, 2025
There’s a funny thing where economics education paradoxically makes people DUMBER at thinking about future AI. Econ textbooks teach concepts & frames that are great for most things, but counterproductive for thinking about AGI. Here are 4 examples.
What You Really Mean When You Claim to Support “UBI for Job Automation”: Part 1 — EA Forum
Ten Hard Problems in and around AI
We finally published our big 90-page intro to AI. Its likely effects, from ten perspectives, ten camps. The whole gamut: ML, scientific applications, social applications, access, safety and alignment, economics, AI ethics, governance, and classical philosophy of life.
Conference Summary: Threshold 2030 - Modelling AI Economic Futures - Convergence Analysis
Predistribution over Redistribution — The Collective Intelligence Project
Maxwell Tabarrok, AGI Will Not Make Labour Worthless is a good summary of the arguments that AGI is just quantitatively rather than qualitatively different from what has gone before. I would prefer it phrased as “AGI is less likely to abolish the value of human labour than we previously thought” rather than a blanket statement, but YMMV.
Tom Stafford on ChatGPT as an Ouija board
Gradient Dissent, a list of reasons that large backpropagation-trained networks might be worrisome. Some interesting points there, and some hyperbole. Also: if it were true that backprop networks generate externalities (i.e. that they are a kind of methodological pollution producing private benefits but public costs), then what kind of mechanisms should disincentivise them?
In this post, we evaluate whether major foundation model providers currently comply with these draft requirements and find that they largely do not. Foundation model providers rarely disclose adequate information regarding the data, compute, and deployment of their models as well as the key characteristics of the models themselves. In particular, foundation model providers generally do not comply with draft requirements to describe the use of copyrighted training data, the hardware used and emissions produced in training, and how they evaluate and test models. As a result, we recommend that policymakers prioritise transparency, informed by the AI Act’s requirements. Our assessment demonstrates that it is currently feasible for foundation model providers to comply with the AI Act, and that disclosure related to foundation models’ development, use, and performance would improve transparency in the entire ecosystem.
I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale
Invasive Diffusion: How one unwilling illustrator found herself turned into an AI model
Hi, I’m Olivia Squizzle, and I’m gonna replace AI – Pivot to AI
Bruce Schneier — On the Need for an AI Public Option
I Went to the Premiere of the First Commercially Streaming AI-Generated Movies
Lower AI Costs Will Drive Innovation, Efficiency, and Adoption
Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous (I’m not convinced, tbh)
Anthropic Economic Index: Insights from Claude 3.7 Sonnet — Anthropic
Measuring the performance of our models on real-world tasks | OpenAI