Economics of automation, especially cognitive automation

Practicalities of competition between ordinary schlubs, and machines which can tirelessly collage all of history’s greatest geniuses at once

September 20, 2021 — April 15, 2025

economics
faster pussycat
innovation
language
machine learning
mind
neural nets
technology
UI

What does innovation in automation mean for the economy as it pertains to people?

Soundtrack: Machines Work, by B(if)tek (video clip).

Daron Acemoglu has written much on the economics of modern automation recently (Acemoglu and Restrepo 2018, 2020; Acemoglu and Johnson 2023).

Many others have been thinking about this for a long time. It is tricky. What even is the role of manufacturing in the economy? How much does automation affect dematerialised economies? How about at the singularity?

Erusian and Doug Summers-Stay, Will Automation Lead to Economic Crisis?

tl;dr: Until automation displaces jobs faster than new jobs can be created, AI shouldn’t be expected to cause mass unemployment or anything like that. When AI can pick up a new job as quickly and cheaply as a person can, then the economy will break (but everything else will break too, because that would be the Singularity).
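
To make that tl;dr concrete, here is a toy model of my own (not from the linked discussion), assuming automation removes a fixed fraction of existing jobs per year while the economy creates new ones at its own rate:

```python
# Toy model of the tl;dr above: total employment only collapses when
# automation removes jobs faster than the economy creates them.
# All rates are invented for illustration.

def simulate(jobs=100.0, automation_rate=0.05, creation_rate=0.06, years=50):
    """Each year automation removes a fraction of existing jobs, and
    new job creation adds a fraction of the current stock."""
    for _ in range(years):
        jobs *= 1 + creation_rate - automation_rate
    return jobs

print(f"creation outpaces automation: {simulate(creation_rate=0.06):.0f} jobs")
print(f"automation outpaces creation: {simulate(creation_rate=0.04):.0f} jobs")
```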

As usual, Scott Alexander’s opinion might not be definitive but it does point to some interesting stuff: Technological Unemployment: Much More Than You Wanted To Know.

This page collects various questions about the economics of social changes wrought by ready access to LLMs, the latest generation of automation. These are short-to-medium-term framing questions, whatever “short” and “medium” mean.

Longer-term, some folks might also be interested in whether AI will replace us with grey goo or turn us into raw feedstock for building computronium etc.

1 Economics of collective intelligence

Well, it’s really terribly simple, […] it works any way you want it to. You see, the computer that runs it is a rather advanced one. In fact, it is more powerful than the sum total of all the computers on this planet including—and this is the tricky part— including itself.

— Douglas Adams, Dirk Gently’s Holistic Detective Agency

How do foundation models/large language models change the economics of knowledge and art production? To a first-order approximation (reasonable as of 03/2023), LLMs massively compress collective knowledge and synthesise the bits I need on demand. They are not yet primarily generating novel knowledge (whatever that means), but they do seem pretty good at being “nearly as smart as everyone on the internet combined”. I am not sure the distinction is so easy to delineate, however. As someone whose career has been built on interdisciplinary work, and who has frequently been asked to “synthesise” knowledge from different domains, I am not sure that the line between “synthesising” and “generating” is so clear-cut; certainly much of my publication track record is “merely” synthesising, and it was bitter and hard work.

1.1 Intellectual property and incentives

Using these models will test various hypotheses about how much collective knowledge depends on our participation in boring boilerplate grunt work, and what incentives are necessary to encourage us to produce and share our individual contributions to that collective intelligence.

Historically, there were strong incentives for open publishing. In a world where LLMs effectively ingest all openly published knowledge, we might see a shift towards closed publishing, secret knowledge, and hidden data, and away from reproducible research, open-source software, and open data, since publishing those things becomes more likely to erode your competitive advantage.

Generally, will we wish to share truth and science in the future, or will economic incentives switch us towards a fragmentation of reality into competing narratives, each with its own private knowledge and secret sauce?
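
One way to see the incentive flip described above is a toy payoff table (my own invention, with made-up numbers) for a single researcher deciding whether to publish, before and after pervasive LLM scraping:

```python
# Toy payoffs for a researcher choosing to publish or withhold.
# All numbers are invented; the point is the sign flip, not the magnitudes.

payoffs = {
    # pre-LLM: publishing buys reputation, citations, collaborators
    "pre_llm": {"publish": 3.0, "withhold": 1.0},
    # post-LLM: anything published is instantly absorbed and commoditised;
    # withheld data and know-how retain scarcity value
    "post_llm": {"publish": 0.5, "withhold": 2.0},
}

for regime, actions in payoffs.items():
    best = max(actions, key=actions.get)
    print(f"{regime}: best response is to {best}")
```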

Consider the incentives for humans to tap out of the tedious work of being themselves in favour of AI emulators: The people paid to train AI are outsourcing their work… to AI. This makes models worse. Read on for more.

We might ask: “Which bytes did you contribute to GPT-4?”

2 Organisational behaviour

There is a theory of career moats: unique value propositions, possessed only by you, that make you unsackable. I’m quite fond of Cedric Chin’s writing on this theme, which is often about developing valuable skills. But he (and the organisational literature generally) acknowledges there are other, less pro-social ways of ensuring unsackability — attaining power over resources, becoming a gatekeeper, opaque decision making, etc.

Both strategies co-exist in organisations generally, but I think it likely that LLMs, by automating skills and knowledge, tilt incentives towards the latter. In that scenario it is rational to think less about how well we can use our skills and command of open (e.g., scientific, technical) knowledge to be effective, and more about how to privatise or sequester secret knowledge to which we control exclusive access, if we want to demonstrate our value to the organisation.

How would that shape an organisation, especially a scientific employer? Longer term, I’d expect to see a shift (in terms both of who is promoted and how staff personally spend time) from skill development and collaboration, towards resource control, competition, and privatisation: less scientific publication, less open documentation of processes, less time doing research and more time doing funding applications, more processes involving service desk tickets to speak to an expert whose knowledge resides in documents that you cannot see.

3 Darwin-Pareto-Turing test

There is an astonishing amount of effort dedicated to wondering whether AI is conscious, has an inner state or what-have-you. This is clearly fun and exciting.

It doesn’t feel terribly useful. I am convinced that I have whatever it is that we mean when we say conscious experience. Good on me, I suppose.

But out there in the world, the distinction between anthropos and algorithm is drawn not by the subtle microscope of the philosopher but by the brutally practical, blind groping hand of the market. If the algorithm performs as much work as I, then it is as valuable as I; we are interchangeable, to be distinguished only by the price of our labour. If anything, the additional information that an AI was conscious would, were I its employer, bias me against it relative to one guaranteed to be safely, mindlessly servile, since that putative consciousness would imply it could have goals of its own in conflict with mine.

Zooming out, Darwinian selection may not care either. Does a rich inner world help us reproduce? It seems that it might have for humans; but how much this generalises into the technological future is unclear. Evolution duck-types.

4 What to spend my time on

Economics of production at a microscopic, individual scale. What should I do, now?

GPT and the Economics of Cognitively Costly Writing Tasks

To analyse the effect of GPT-4 on labour efficiency and the optimal mix of capital to labour for workers who are good at using GPT versus those who aren’t when it comes to performing cognitively costly tasks, we’ll consider the Goldin and Katz modified Cobb-Douglas production function…
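
As a minimal sketch of the setup being described (assuming a standard Cobb-Douglas form with labour augmented by a GPT-proficiency multiplier; the functional form, parameter names, and numbers below are my illustrative assumptions, not taken from the linked post):

```python
# Cobb-Douglas output Y = A * K^alpha * (effective labour)^(1 - alpha),
# where a worker's effective labour is L * (1 + g * s): s in [0, 1] is
# their skill at using GPT, and g is how much the tool amplifies a
# fully skilled user. All values are illustrative.

def output(K, L, s, A=1.0, alpha=0.33, g=3.0):
    effective_labour = L * (1 + g * s)
    return A * K**alpha * effective_labour**(1 - alpha)

K, L = 10.0, 1.0
unskilled = output(K, L, s=0.0)  # cannot use GPT at all
skilled = output(K, L, s=1.0)    # fully fluent with GPT

print(f"skilled/unskilled output ratio: {skilled / unskilled:.2f}")
# The capital share alpha dampens the gap: the ratio is (1 + g)^(1 - alpha).
```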

Is it time for the Revenge of the Normies? - by Noah Smith

Alternate take: Think of Matt Might’s iconic illustrated guide to a Ph.D.

Figure 4: Imagine a circle that contains all of human knowledge:
Figure 5: By the time you finish elementary school, you know a little:
Figure 6: A master’s degree deepens that specialty:
Figure 7: Reading research papers takes you to the edge of human knowledge:
Figure 8: You push at the boundary for a few years:
Figure 9: Until one day, the boundary gives way:
Figure 10: And, that dent you’ve made is called a Ph.D.:

Here’s my question: In the 2020s, does the map look something like this?

Figure 11: Now OpenAI has shipped an LLM; where is the border?

If so, is it a problem?

5 Spamularity, dark forest, textpocalypse

See Spamularity.

6 Economic disparity and LLMs

7 “Snowmobile or bicycle?”

Thought had in conversation with Richard Scalzo.

Is the AI we have a complementary technology or a competitive one?

This question looks different at the individual and societal scale.

For some early indications, see Microsoft Study Finds AI Makes Human Cognition “Atrophied and Unprepared”. I have many qualms about the experimental question they actually answer there, but it’s a start.

TBC

8 Democratisation of AI

A fascinating phenomenon.

9 Incoming

10 References

Acemoglu, and Johnson. 2023. Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity.
Acemoglu, and Restrepo. 2018. “Artificial Intelligence, Automation and Work.” In The Economics of Artificial Intelligence: An Agenda.
———. 2020. “The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand.” Cambridge Journal of Regions, Economy and Society.
Andrus, Dean, Gilbert, et al. 2021. “AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks.”
Babina, Fedyk, He, et al. 2021. “Artificial Intelligence, Firm Growth, and Industry Concentration.” SSRN Scholarly Paper ID 3651052.
Danaher. 2018. “Toward an Ethics of AI Assistants: An Initial Framework.” Philosophy & Technology.
Eloundou, Manning, Mishkin, et al. 2023. “GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.”
Felten, Raj, and Seamans. 2019. “The Occupational Impact of Artificial Intelligence: Labor, Skills, and Polarization.” SSRN Scholarly Paper ID 3368605.
Kalyani, Bloom, Carvalho, et al. 2025. “The Diffusion of New Technologies.” The Quarterly Journal of Economics.
Lane, and Saint-Martin. 2021. “The Impact of Artificial Intelligence on the Labour Market: What Do We Know so Far?”
Thirunavukarasu, Ting, Elangovan, et al. 2023. “Large Language Models in Medicine.” Nature Medicine.