Disruptive technology

October 10, 2014

economics
technology

Here follows an untidy smudge of ideas around an abominable buzzword.

Dynamics of collective learning, with an emphasis on what this means at the economic end of the spectrum.

What kind of stochastic process is multi-agent learning in human beings? How can we model it? Artificial chemistry? Wolpert-style COINs? Fontana-style “Turing gas”? Pattern-matching string transformations? Information-theoretic bounds on agent model updating? A network representation of hypothesis formation? For what it’s worth, my intuition is that something like a combination of artificial chemistry and a statistical model of pattern matching could give some insight into a toy model.
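To keep myself honest, here is the crudest possible sketch of what I mean by “artificial chemistry plus pattern matching”: a well-mixed soup of binary strings acted on by a few string-rewriting rules. The rules, parameters and everything else below are invented on the spot; it is a toy for intuition, not a model of anything.

```python
import random
from collections import Counter

# A minimal "artificial chemistry" toy: a well-mixed soup of binary strings,
# plus a handful of pattern-matching rewrite rules standing in for agents'
# learned transformations. Purely illustrative; the rules and parameters
# are invented for this sketch, not drawn from any particular model.

RULES = [
    ("01", "10"),   # swap
    ("11", "1"),    # contraction
    ("00", "010"),  # expansion
]

def react(s, rules, rng):
    """Apply one randomly chosen applicable rule at a random match site."""
    rng.shuffle(rules)
    for pattern, replacement in rules:
        sites = [i for i in range(len(s) - len(pattern) + 1)
                 if s[i:i + len(pattern)] == pattern]
        if sites:
            i = rng.choice(sites)
            return s[:i] + replacement + s[i + len(pattern):]
    return s  # inert: no rule matched

def run(n_strings=200, length=8, steps=5000, seed=0):
    rng = random.Random(seed)
    soup = ["".join(rng.choice("01") for _ in range(length))
            for _ in range(n_strings)]
    for _ in range(steps):
        k = rng.randrange(len(soup))
        soup[k] = react(soup[k], list(RULES), rng)
    return Counter(soup)

if __name__ == "__main__":
    census = run()
    for s, n in census.most_common(10):
        print(f"{n:4d}  {s}")
```

Letting the rules themselves be learned from the soup, rather than fixed, would be the first step from this toy towards the multi-agent learning question above.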

If the aggregate system is, as seems likely, unpredictable, what bounds does knowing about an underlying stochastic process place on system evolution? How different would technology be if we “ran the tape twice”? Is there an underlying topology to what innovations can be fostered? Surely the limits imposed by every individual agent’s learning impose certain constraints on what overall structures can be evolved?
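To make “running the tape twice” concrete, here is a bare-bones Polya-urn sketch in the spirit of Arthur’s increasing-returns adoption models: two technologies, each adoption raising the probability that the next adoption is of the same kind. Same rules, same starting point, different random histories, different lock-in. Everything in it is an illustrative stand-in, not a calibrated model.

```python
import random

# "Running the tape twice": a Polya-urn sketch of technological lock-in.
# Two competing technologies start level; each adoption makes the next
# adoption of the same technology more likely. The long-run market share
# is itself random, so identical starting conditions yield different
# histories. All parameters here are arbitrary.

def market_share(steps=100_000, seed=0):
    rng = random.Random(seed)
    a, b = 1, 1  # initial "adoptions" of technologies A and B
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)

if __name__ == "__main__":
    for seed in range(5):
        print(f"run {seed}: A ends with {market_share(seed=seed):.2%} of the market")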

If the adaptive process of innovation is constrained by the structure of adaptive human learning, how is it constrained by the underlying physical reality? Reality, viewed from the perspective of making and testing hypotheses about it, is not a homogeneous state space with constant reward, but possesses a fitness landscape that favours some combination of truth and ease of applicability. (Solar panels work best in the desert, but not when it is too hot; anti-gravity machines don’t work anywhere; Newton’s equations of motion are more readily deducible than relativistic ones at the velocities at which we commonly operate; ready availability of fossil fuels favours polymer-based construction materials; and so on — do you get my drift?) Can we capture that contextuality somehow?
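One crude way of writing down the “truth plus ease of applicability” idea, with a made-up scoring function over made-up contexts, just to show that the landscape is not flat:

```python
# A toy, context-dependent "fitness" for a candidate technology: a weighted
# trade-off between how well it works (truth) and how cheaply it can be
# deployed in a given environment (ease of applicability). The function,
# weights, and example contexts are all invented here.

def fitness(works_probability, deployment_cost, weight=0.5):
    """Higher is better: accuracy rewarded, cost of application penalised."""
    return works_probability - weight * deployment_cost

contexts = {
    # context name: (probability the design works there, cost to deploy there)
    "desert, moderate heat": (0.9, 0.3),
    "desert, extreme heat": (0.6, 0.3),
    "cloudy temperate city": (0.5, 0.6),
}

for name, (p, c) in contexts.items():
    print(f"{name:24s} fitness = {fitness(p, c):+.2f}")
```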

On that latter, physical-technology note: I suspect that the fitness landscape of economic innovation has something to do with the informational constraints of human learners, but also with the literal geophysical landscape — how much energy is available, and how readily? There is a lot of work on this in material stocks and flows analysis, and also in the field of ecology. A suggestive term from the engineering/ecology literature is exergy — thermodynamically available energy. (Approximately: how much energy Laplace’s demon could, on average, extract from a noisy system.)
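For the record, the textbook closed-system (non-flow) exergy relative to a reference environment at temperature $T_0$ and pressure $p_0$, neglecting chemical, kinetic and potential contributions, is

$$ X = (U - U_0) + p_0\,(V - V_0) - T_0\,(S - S_0), $$

where $U$, $V$ and $S$ are the system’s internal energy, volume and entropy, and the subscript $0$ marks their values once the system has come to equilibrium with the environment. The Laplace’s-demon gloss above is looser than this, but it is the same idea: the work you can still squeeze out before the system is indistinguishable from its surroundings.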

Interaction of adaptors on different time-scales: evolutionary versus cultural time-scales, dynamics that are hard or easy to learn, frequent and infrequent event-types… can any regularities survive such heterogeneity? Convergence theorems applied to nonstationary targets.
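As a toy version of that last fragment, here is a Robbins–Monro-style estimator chasing a slowly drifting mean. The decaying step size is what the classical convergence theorems ask for, but it stops tracking; a constant step size tracks, at the price of never converging. All the numbers are arbitrary.

```python
import random

# Convergence against a nonstationary target: a stochastic-approximation
# estimator tracking a slowly drifting mean. With a decaying step size the
# estimator settles but cannot follow the drift; with a constant step size
# it tracks, at the cost of residual noise. Drift, noise level and step
# sizes are arbitrary choices for illustration.

def track(step_size, steps=20_000, drift=1e-4, noise=0.5, seed=0):
    rng = random.Random(seed)
    target, estimate = 0.0, 0.0
    errors = []
    for t in range(1, steps + 1):
        target += drift                     # the world keeps moving
        sample = target + rng.gauss(0, noise)
        alpha = step_size(t)
        estimate += alpha * (sample - estimate)
        errors.append(abs(estimate - target))
    tail = errors[int(0.9 * steps):]        # average error over the last 10%
    return sum(tail) / len(tail)

if __name__ == "__main__":
    print("decaying step 1/t :", track(lambda t: 1.0 / t))
    print("constant step 0.01:", track(lambda t: 0.01))
```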

So, questions this approach seems useful for are: what are the transition paths to non-carbon-intensive energy systems? How can we quantify the “disruptiveness” of a technology? Can we identify unfilled technological niches in this way? What would a society based on alternative energy forms look like? Which industries are dead in the water?