I’ve been dropping in to Bob Marks’s Simulation and the Social Sciences class at the Australian Graduate School of Management. Despite the brown colour scheme of the course website, it has turned out to be gripping. Amongst his various talents, Bob has operations research skills, a field I haven’t been much exposed to before. He also has a lot to say about model validation and what that might mean in different problem domains (e.g. what sort of validation is necessary for a business decision, as opposed to a Physica A article). The class size is small and the readings are forbiddingly diverse. I summarised one for the group in one of those zoomey flash presentations.
The thought the course inspired in me: when can human beings’ arbitrarily complex decision rules become amenable to modelling in aggregate?
1. One contender: when they aren’t thinking about it very hard, and allocate few cognitive resources to it. In that circumstance we can posit that they may be usefully modelled by something no more complex than the effective cognitive resource they devote to the question at hand. (Consider Herbert Simon’s notion of *docility* for an example.)
2. Another: in environments of constrained choice…? Consider De Mesquita on game-theoretic rational self-interest models. Or, for that matter, all of neo-classical microeconomic theory.
3. Or, if this isn’t a repeat of the last point: when we construct institutions to constrain our choices, by norms or some such. Think Elinor Ostrom.
4. When we are considering a problem domain over which human reason is known to be strongly bounded. See the massive Heuristics and Biases programme of Kahneman and Tversky’s intellectual progeny for this one.
5. When simplicity is an emergent property because, say, the incredibly complex behaviours of the agents are not strongly correlated, and as such they sum to a simple stochastic variable. Market fluctuations might be an example of this, at least over some intervals.
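Point 5 is the easiest of these to demonstrate numerically: however baroque each individual decision rule is, if the agents’ outputs are weakly correlated, the aggregate behaves like a simple stochastic variable (a central-limit effect). A minimal sketch, where the per-agent rule is deliberately convoluted and entirely invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def messy_agent_decision(rng):
    # An arbitrarily complicated individual rule, made up for this sketch:
    # mood, habit, and an occasional impulsive spike.
    mood = rng.uniform(-1, 1)
    habit = np.sin(3 * rng.normal()) ** 2
    impulse = rng.exponential(0.5) if rng.random() > 0.7 else 0.0
    return mood * habit + impulse

n_agents, n_trials = 1_000, 200
# Mean behaviour across many (independent) agents, repeated many times.
aggregates = np.array([
    np.mean([messy_agent_decision(rng) for _ in range(n_agents)])
    for _ in range(n_trials)
])

# Despite the messy individual rule, the aggregate is well summarised by
# just two numbers: a mean and a standard deviation.
print(aggregates.mean(), aggregates.std())
```

The interesting caveat for market fluctuations is the “over some intervals” clause: when agent behaviours *do* become correlated (herding, panics), the independence assumption fails and the aggregate stops being simple.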
Now, to look at this research question from the other end: given the immensely complicated nature of human systems and the ridiculous number of potential variables, it is an interesting question how any discernible pattern emerges and sustains over time, and yet low(er)-dimensional regularities are apparent, if only temporarily. The question is: when can these human systems be modelled with a low number of dimensions? What questions can we answer about them without assuming away the complexity that makes them interesting?
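One way to make the “low(er)-dimensional regularities” idea concrete: if many observed variables are all driven by a handful of common latent factors plus idiosyncratic noise, a principal-component decomposition will show that the effective dimension is small. A toy sketch, with the factor structure invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_obs, n_vars, n_factors = 500, 50, 3
# Hidden low-dimensional drivers (stand-ins for a few shared social forces).
factors = rng.normal(size=(n_obs, n_factors))
loadings = rng.normal(size=(n_factors, n_vars))
# Fifty observed variables = three common factors + idiosyncratic noise.
data = factors @ loadings + 0.1 * rng.normal(size=(n_obs, n_vars))

# Principal components via SVD of the centred data matrix.
centred = data - data.mean(axis=0)
_, s, _ = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The first three components capture nearly all the variance: the
# 50-dimensional system is effectively 3-dimensional.
print(explained[:5])
```

Of course this begs the question in the post: real human systems don’t announce their factor structure, and the regularity may only hold temporarily, so the “when” is doing all the work.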
See also: agent_based-models.