Economics of foundation models

Practicalities of competition between ordinary schlubs, and machines which can tirelessly collage all of history’s greatest geniuses at once

March 23, 2023 — December 16, 2024

agents
bounded compute
collective knowledge
concurrency hell
distributed
economics
edge computing
extended self
faster pussycat
incentive mechanisms
innovation
language
machine learning
neural nets
NLP
swarm
technology
UI
Figure 1

Various questions about the economics of social changes wrought by ready access to LLMs, the latest generation of automation. This is a major short-to-medium-term effect, whatever “short” and “medium” mean. Longer-sighted persons might also care about whether AI will replace us with grey goo or turn us into raw feedstock for building computronium.

1 Economics of collective intelligence

How do foundation models/large language models change the economics of knowledge production? Of art production? To a first-order approximation (reasonable as of 03/2023), LLMs massively compress collective knowledge and synthesise the bits I need on demand. They are not yet directly generating novel knowledge (whatever that means), but they do seem to be pretty good at being “nearly as smart as everyone on the internet combined”. There is, of course, no sharp boundary between those two things.

Deploying these models will test various hypotheses about how much of collective knowledge depends upon our participating in boring boilerplate grunt work, and what incentives are necessary to encourage us to produce and share our individual contributions to that collective intelligence.

Historically, there were strong incentives towards open publishing. In a world where LLMs are effective at exploiting all openly published knowledge, we should perhaps expect a shift towards more closed publishing, secret knowledge and hidden data, and away from reproducible research, open source software and open data, since publishing those things becomes more likely to erode your competitive advantage.

Generally, will we wish to share truth and science in the future, or will the economic incentives switch us towards a fragmentation of reality into competing narratives, each with their own private knowledge and secret sauce?

Consider the incentives for humans to tap out of the tedious work of being themselves in favour of AI emulators: The people paid to train AI are outsourcing their work… to AI. Which makes models worse (Shumailov et al. 2023). Read on for more of that.
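A toy sketch of why recursive training degrades models, in the spirit of the Gaussian example in Shumailov et al. (2023) but my own illustration rather than their experiment: if each generation of a model is fitted only to samples drawn from the previous generation, estimation error compounds and the fitted distribution tends to collapse, losing its tails first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0 is fit to "real data"; every later generation is fit only to
# samples drawn from its predecessor. The fitted spread tends to shrink over
# generations, i.e. the model progressively forgets the tails of the
# original distribution.
n_samples = 20       # samples available per generation (arbitrary choice)
n_generations = 100

mu, sigma = 0.0, 1.0  # stand-in for the real data distribution
for g in range(n_generations):
    synthetic = rng.normal(mu, sigma, n_samples)   # "training set" from the previous model
    mu, sigma = synthetic.mean(), synthetic.std()  # refit a Gaussian by maximum likelihood
    if g % 20 == 0:
        print(f"generation {g:3d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
```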

To turn that around, we might ask: “Which bytes did you contribute to GPT-4?”

2 Organisational behaviour

There is a theory of career moats: basically, unique value propositions that only you have, which make you, personally, unsackable. I’m quite fond of Cedric Chin’s writing on this theme, which is often about developing skills that are genuinely valuable. But he (and the organisational literature generally) acknowledges that there are other, less pro-social ways of making sure you are unsackable: attaining power over resources, becoming a gatekeeper, keeping decision-making opaque, and so on.

Both strategies co-exist in organisations generally, but I think that LLMs, by automating skills and knowledge, will tilt incentives towards the latter. In this scenario it is rational to think less about how well we can deploy our skills and our command of open (e.g. scientific, technical) knowledge, and more about how to privatise or sequester knowledge to which we control exclusive access, if we want to demonstrate value to the organisation.

How would that shape an organisation, especially a scientific employer? Longer term, I would expect a shift (both in who is promoted and in how staff spend their time) away from skill development and collaboration, and towards resource control, competition and privatisation: less scientific publication, less open documentation of processes, less time doing research and more time writing funding applications, more processes requiring a service-desk ticket to speak to an expert whose knowledge resides in documents you cannot see, and so on.

Is this tilting towards a Molochian equilibrium?

3 Darwin-Pareto-Turing test

There is an astonishing amount of effort dedicated to wondering whether AI is conscious, has an inner state or what-have-you. This is clearly fun and exciting.

It doesn’t feel terribly useful. I am convinced that I have whatever it is that we mean when we say conscious experience. Good on me, I suppose.

But out there in the world, the distinction between anthropos and algorithm is drawn not by the subtle microscope of the philosopher but by the brutally practical, blind, groping hand of the market. If the algorithm performs as much work as I do, then it is as valuable as I am; we are interchangeable, to be distinguished only by the price of our labour. If anything, the additional information that a given worker was conscious would, were I an employer, bias me against it relative to one guaranteed to be safely, mindlessly servile, since that putative consciousness implies it could have goals of its own in conflict with mine.

Zooming out, Darwinian selection may not care either. Does a rich inner world help us reproduce? It seems that it might have for humans; but how much this generalises into the technological future is unclear. Evolution duck-types.

Figure 2

4 What to spend my time on

Economics of production at a microscopic, individual scale. What should I do, now?

From GPT and the Economics of Cognitively Costly Writing Tasks:

To analyze the effect of GPT-4 on labour efficiency and the optimal mix of capital to labour for workers who are good at using GPT versus those who aren’t when it comes to performing cognitively costly tasks, we will consider the Goldin and Katz modified Cobb-Douglas production function…

See also: Is it time for the Revenge of the Normies? by Noah Smith.
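For concreteness, here is one plausible way to write down the kind of production function that excerpt gestures at. This is my own guess at the functional form (a CES labour aggregate nested inside Cobb-Douglas, in the Goldin-Katz spirit), not the specification from the linked post:

$$
Y = A\, K^{\alpha} \left[ \theta \,(\lambda_g L_g)^{\rho} + (1 - \theta)\,(\lambda_n L_n)^{\rho} \right]^{\frac{1-\alpha}{\rho}}
$$

where $L_g$ is labour from workers who use GPT effectively, $L_n$ labour from those who do not, $\lambda_g > \lambda_n$ their productivity-augmentation factors, $\theta$ a share parameter, and $1/(1-\rho)$ the elasticity of substitution between the two kinds of labour. The economically interesting questions are then how large $\lambda_g/\lambda_n$ is for cognitively costly tasks, and whether the two labour types end up as substitutes or complements.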

Alternate take: think of Matt Might’s iconic illustrated guide to a Ph.D.

Figure 3: Imagine a circle that contains all of human knowledge:
Figure 4: By the time you finish elementary school, you know a little:
Figure 5: A master’s degree deepens that specialty:
Figure 6: Reading research papers takes you to the edge of human knowledge:
Figure 7: You push at the boundary for a few years:
Figure 8: Until one day, the boundary gives way:
Figure 9: And, that dent you’ve made is called a Ph.D.:

Here’s my question: In the 2020s, does the map look something like this?

Figure 10: Now OpenAI have shipped an LLM and where is the border?

If so, is that a problem?

5 Spamularity, dark forest, textpocalypse

See Spamularity.

6 Economic disparity and LLMs

7 PR, hype, marketing implications

Figure 11

George Hosu, in a short aside, highlights the incredible marketing advantage of AI:

People that failed to lift a finger to integrate better-than-doctors or work-with-doctors supervised medical models for half a century are stoked at a chatbot being as good as an average doctor and can’t wait to get it to triage patients

The Tweet that Sank $100bn

Google’s Bard was undone on day two by an inaccurate response in the demo video, where it suggested that the James Webb Space Telescope would take the first images of exoplanets. This sounds like something the JWST would do, but it’s not at all true. So one tweet from an astrophysicist sank Alphabet’s value by 9%. This says a lot about a) how LLMs are like being at the pub with friends: they can say things that sound plausible and true enough, and no one really needs to check, because who cares? Except we do care, because this is science, not a lads’ night out; and b) the insane speculative volatility of this AI bubble, where the hype is so razor thin it can be undermined by a tweet with 44 likes.

I had a wonder if there’s any exploration of the ‘thickness’ of hype. Jack Stilgoe suggested looking at Borup et al., which is evergreen, but I feel like there’s something about the resilience of hype. Crypto was/is pretty thin in the scheme of things: high levels of hype, but frenetic, unstable and quick to collapse. AI has pretty consistent, if pulsating, hype gradually growing over the years, while something like nuclear fusion is super-thick (at least in the popular imagination), persisting through decades of not-quite-ready and grasping at the slightest indication of success. I don’t know; if there’s nothing specifically on this, maybe I should write it one day.

Figure 12: Some of Tom Gauld’s caution signs

8 Information search/summary

TBC

9 “Snowmobile or bicycle?”

Thought had in conversation with Richard Scalzo about Smith (2022).

Is the AI we have a complementary technology or a competitive one?

This question looks different at the individual and the societal scale.

TBC

10 Democratization of AI

A fascinating phenomenon.

11 Art and creativity

For now, see timeless works of art.

12 Incoming

13 References

Andrus, Dean, Gilbert, et al. 2021. “AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks.”
Barke, James, and Polikarpova. 2022. “Grounded Copilot: How Programmers Interact with Code-Generating Models.”
Bowman. 2023. “Eight Things to Know about Large Language Models.”
Danaher. 2018. “Toward an Ethics of AI Assistants: An Initial Framework.” Philosophy & Technology.
Grossmann, Feinberg, Parker, et al. 2023. “AI and the Transformation of Social Science Research.” Science.
Messeri, and Crockett. 2024. “Artificial Intelligence and Illusions of Understanding in Scientific Research.” Nature.
Métraux. 1956. “A Steel Axe That Destroyed a Tribe, as an Anthropologist Sees It.” The UNESCO Courier: A Window Open on the World.
Naudé. 2022. “The Future Economics of Artificial Intelligence: Mythical Agents, a Singleton and the Dark Forest.” IZA Discussion Papers.
Pelto. 1973. The Snowmobile Revolution: Technology and Social Change in the Arctic.
Raman, Kumar Nair, Nedungadi, et al. 2024. “Fake News Research Trends, Linkages to Generative Artificial Intelligence and Sustainable Development Goals.” Heliyon.
Shanahan. 2023. “Talking About Large Language Models.”
Shumailov, Shumaylov, Zhao, et al. 2023. “The Curse of Recursion: Training on Generated Data Makes Models Forget.”
Smith. 2022. The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning.
Spector, and Ma. 2019. “Inquiry and Critical Thinking Skills for the Next Generation: From Artificial Intelligence Back to Human Intelligence.” Smart Learning Environments.
Susskind, and Susskind. 2018. “The Future of the Professions.” Proceedings of the American Philosophical Society.