Disruptive technology
October 9, 2014
Here follows an untidy smudge of ideas around an abominable buzzword.
- Think evolutionary models of fitness and mutation: some combination of reinforcement-learning convergence dynamics with unknown production and utility constraints, network effects, and free-rider effects (this still ignores supply-chain structure).
- A highly recursive nonlinear SDE on a reasonably exotic space. Even then it would miss some things, such as (apparently) improved information aggregation in digital technology.
- Or could that be subsumed into artful fitting of distributions of networks of interactions of goods bundles?
- How would you model firms?
- How about individuals?
- Technology as a virus?
- Patent network statistics
- What structure does industry X’s technology possess?
  - A directed dependency graph.
  - Explaining South Korea's chemical-industry path.
  - Investing in a particular technology bundle.
  - Korea's success in cars versus India's failure.
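The ingredients of the first bullet can at least be caricatured in code. Here is a minimal sketch (every parameter is invented) of replicator-style adoption dynamics among competing technologies, where fitness combines intrinsic quality with a network-effect term, and a small "mutation" flow keeps minority technologies alive:

```python
# Toy replicator dynamics for competing technologies (all parameters invented).
# fitness_i = quality_i + network_effect * share_i, so adoption is self-reinforcing.
def step(shares, quality, network_effect=0.5, mutation=0.01):
    fitness = [q + network_effect * s for q, s in zip(quality, shares)]
    mean_fitness = sum(f * s for f, s in zip(fitness, shares))
    # Replicator update, then a uniform "mutation" flow between technologies.
    new = [s * f / mean_fitness for s, f in zip(shares, fitness)]
    n = len(new)
    return [(1 - mutation) * s + mutation / n for s in new]

shares = [1 / 3] * 3          # three technologies, equal initial adoption
quality = [1.0, 1.1, 0.9]     # intrinsic quality, fixed here
for _ in range(200):
    shares = step(shares, quality)
print([round(s, 3) for s in shares])  # the highest-quality option dominates
```

Even this crude version shows the lock-in flavour of network effects: once a technology leads, its fitness advantage compounds, which is part of what makes "disruption" interesting to model.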
Dynamics of collective learning, with an emphasis on what this means at the economic end of the spectrum.
What kind of stochastic process is multi-agent learning in human beings? How can we model it? Artificial chemistry? Wolpert-style COINs? Feldman-style “Turing gas”? Pattern-matching string transformations? Information theoretic bounds on agent model updating? A network representation of hypothesis formation? For what it’s worth, my intuition is that something like a combination of artificial chemistry and a statistical model of pattern matching could give some insights into a toy model.
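For what that intuition is worth, the artificial-chemistry flavour can be sketched in a few lines (every rule and parameter here is invented, and this is far simpler than a real Fontana-style system): a soup of strings that collide at random and fuse wherever a suffix of one pattern-matches a prefix of another, so that shared patterns spread through the population like shared hypotheses.

```python
import random

random.seed(0)

# Minimal artificial-chemistry sketch (rules invented): strings collide at
# random, and a pattern-matching "reaction" fuses them at a shared overlap.
def react(a, b, k=2):
    # If a suffix of `a` of length >= k matches a prefix of `b`, fuse them.
    for i in range(min(len(a), len(b)) - 1, k - 1, -1):
        if a[-i:] == b[:i]:
            return a + b[i:]
    return None

soup = ["abc", "bcd", "cde", "abd", "dea"] * 4
for _ in range(100):
    a, b = random.sample(soup, 2)
    product = react(a, b)
    if product and len(product) <= 8:     # crude resource constraint
        soup[soup.index(a)] = product     # product replaces one reactant

print(sorted(set(soup)))
```

The interesting question is then statistical: what distribution of "molecule" lengths and motifs does such a process converge to, and does anything like that distribution show up in real patent or citation data?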
If the aggregate system is, as seems likely, unpredictable, what bounds does knowing about an underlying stochastic process place on system evolution? How different would technology be if we “ran the tape twice”? Is there an underlying topology to what innovations can be fostered? Surely the limits of every individual agent’s learning constrain which overall structures can be evolved?
If the adaptive process of innovation is constrained by the structure of adaptive human learning, how is it constrained by the underlying physical reality? Reality, viewed from the perspective of making and testing hypotheses about it, is not a homogeneous state space with constant reward, but possesses a fitness landscape that favours some combination of truth and ease of applicability. (Solar panels work best in the desert but not when it is too hot, anti-gravity machines don’t work anywhere, Newton’s equations of motion are more readily deducible than relativistic ones at the velocities at which we commonly operate, ready availability of fossil fuels favours polymer-based construction materials etc — do you get my drift?) Can we capture that contextuality somehow?
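One way to caricature that contextuality is a fitness function that takes the environment as an argument, so the "same" technology scores differently in different places. The coefficients below are illustrative, not measured, though silicon solar panels really do derate with heat:

```python
# Hypothetical context-dependent fitness: the payoff of a technology depends
# on where you deploy it, not just on what it "is". Coefficients are invented.
def solar_fitness(insolation_kwh_m2_day, ambient_temp_c):
    # Output rises with sunlight, but panels derate above ~25 degrees C
    # (a real effect; the 0.4%/degree figure here is only illustrative).
    derating = max(0.0, 1.0 - 0.004 * max(0.0, ambient_temp_c - 25.0))
    return insolation_kwh_m2_day * derating

print(solar_fitness(7.0, 45.0))  # hot desert: strong sun, some heat derating
print(solar_fitness(3.0, 15.0))  # temperate: weaker sun, no derating
```

A full model would make the fitness landscape a function of many such environmental coordinates at once, which is exactly what makes it inhomogeneous rather than a flat state space with constant reward.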
On that latter note, about physical technology: I suspect that the fitness landscape of economic innovation has something to do with the informational constraints of human learners, but also with the literal geophysical landscape. How much energy is available, and in what form? There is a lot of work on this in material stock-and-flow analysis, and also in the field of ecology. A suggestive term from the engineering/ecology literature is exergy, the thermodynamically available energy. (Approximately: how much energy Laplace’s demon can on average extract from a system with noise.)
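For heat, the standard bound behind exergy is the Carnot factor: heat Q at source temperature T, against an environment at T0, can yield at most W = Q(1 - T0/T) of work. A quick sketch of why "a joule is not a joule":

```python
# Exergy of a heat flow via the Carnot factor: the fraction of heat Q at
# source temperature T (kelvin) convertible to work against ambient T0.
def heat_exergy(q_joules, t_source_k, t_ambient_k=298.15):
    return q_joules * (1.0 - t_ambient_k / t_source_k)

# 1 MJ of heat at 800 K versus at 320 K: same energy, very different exergy.
print(heat_exergy(1e6, 800.0))   # ~627 kJ of available work
print(heat_exergy(1e6, 320.0))   # ~68 kJ of available work
```

That gap between energy and exergy is why the geophysical landscape matters for innovation: a technology bundle built around high-temperature heat sources sits on a very different part of the fitness landscape from one built around diffuse low-grade heat.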
Interaction of adaptors on different time-scales: evolutionary versus cultural time-scales, dynamics that are hard or easy to learn, frequent and infrequent event-types… can any regularities survive such heterogeneity? Convergence theorems applied to nonstationary targets.
So, the questions this approach seems useful for include: what are the transition paths to non-carbon-intensive energy systems? How can we quantify the “disruptiveness” of a technology? Can we identify unfilled technological niches in this way? What would a society based on alternative energy forms look like? Which industries are dead in the water?
Scope of Work, Notes, 2023-07-10, on innovation
In his 1975 memoir The Periodic Table, Primo Levi recounts a brief anecdote about a mysterious slice of onion in a recipe for oil varnish. Levi, an Italian-Jewish writer, chemist, Holocaust survivor, and anti-Fascist, was working in a paint factory after the war. He came across a varnish formula, published in 1942, that included two slices of onion added to the boiling linseed oil near the end of the process. Why onion? After talking to a mentor, he learned that in the days before thermometers were common, slices of raw onion were used to gauge the temperature of the oil. The onion remained in the recipe long after its usefulness had ended, and “what had been a crude measuring operation had lost its significance and was transformed into a mysterious and magical practice”.
“The onion in the varnish” has since become a popular metaphor among computer scientists, entrepreneurs, and rationalist types – a shorthand for the importance of eliminating inessential elements that creep into a process. However, that’s not exactly what Levi describes, and it isn’t the moral of his story. His onion anecdote is told in the context of a longer conversation about the ways an ancient process like varnish manufacturing “retains in its crannies … rudiments of customs and procedures abandoned for a long time now”. This accretion isn’t a liability or a problem per se, but rather an inevitability as processes and cultures evolve over a long duration.