Legibility and automation

Variational approximations to high modernism as AI modernism

July 24, 2017 — April 22, 2023

Figure 1

Miscellaneous notes on the relationship between the legibility of the Great Society and its automation by computer.

George Hosu argues AI and automation are at odds:

… the vast majority of use-cases for AI, especially the flashy kind that behaves in a “human-like” way, might be just fixing coordination problems around automation.

AI, from this perspective, is something like “the computational overhead of metis”.

Thus we end up with rather complex jobs, where something like AGI could be necessary to fully replace the person. But at the same time, these jobs can be trivially automated if we redefine the role and take some of the fuzziness out.

A bartender robot is beyond the dreams of contemporary engineering. A cocktail-making machine, a conveyor belt (or drone) that delivers drinks, ordering and paying through a tablet on your table… beyond trivial.

I would like to return to this point. Is legibility just the simplest thing?

Henry Farrell and Marion Fourcade, The Moral Economy of High-Tech Modernism

While people in and around the tech industry debate whether algorithms are political at all, social scientists take the politics as a given, asking instead how this politics unfolds: how algorithms concretely govern. What we call “high-tech modernism”—the application of machine learning algorithms to organize our social, economic, and political life—has a dual logic. On the one hand, like traditional bureaucracy, it is an engine of classification, even if it categorizes people and things very differently. On the other, like the market, it provides a means of self-adjusting allocation, though its feedback loops work differently from the price system. Perhaps the most important consequence of high-tech modernism for the contemporary moral political economy is how it weaves hierarchy and data-gathering into the warp and woof of everyday life, replacing visible feedback loops with invisible ones, and suggesting that highly mediated outcomes are in fact the unmediated expression of people’s own true wishes.

Figure 2

1 Policy and Statistical learning

TODO. Brief digression on how legibility and management look as a statistical learning problem. We know that constructing policies is costly in data, and that administrative procedures frequently do not have much data from repeated trials of what works. We also know that coming up with policies (in either the machine-learning or the political sense of the word) is computationally challenging and data hungry. How does the need to bow to the ill-fitting bureaucracy of the Great Society resemble having to work with an underfit estimator of the optimal policy? What does that tell us about, e.g., optimal jurisprudence? Possibly something. Or possibly the metaphor doesn’t work; after all, what is the optimisation problem one solves?
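One way to make the metaphor concrete is a toy simulation (a sketch, not a claim about any real administrative system; the "optimal policy", the noise level, and the sample size are all assumptions invented for illustration). Suppose the optimal action varies smoothly with context, but a legible rule must be piecewise constant over a small number of categories, fitted from noisy trials. Fewer categories means a more legible rule but a worse fit — the "underfit estimator" of the digression above:

```python
import numpy as np

rng = np.random.default_rng(0)

def optimal_action(x):
    """Assumed ground-truth policy: the best action varies smoothly with context."""
    return np.sin(3 * x)

def fit_rule(x, y, n_bins):
    """Fit a legible 'regulation': one fixed action per context category (bin)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    means = np.array([y[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(n_bins)])
    return edges, means

def regret(edges, means, n_test=10_000):
    """Mean squared gap between the rule's action and the optimal action."""
    xt = rng.uniform(0.0, 1.0, n_test)
    idx = np.clip(np.digitize(xt, edges) - 1, 0, len(means) - 1)
    return float(np.mean((means[idx] - optimal_action(xt)) ** 2))

# Scarce administrative data: 200 noisy observations of what worked.
x = rng.uniform(0.0, 1.0, 200)
y = optimal_action(x) + rng.normal(0.0, 0.3, x.size)

for n_bins in (2, 8, 32):
    print(f"{n_bins:3d} categories: mean-squared regret {regret(*fit_rule(x, y, n_bins)):.3f}")
```

With very few categories the rule underfits (large approximation error, the bureaucratic failure mode); with many categories and scarce data, the per-category estimates themselves become noisy. Whether any of this survives contact with an actual political optimisation problem is exactly the open question above.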

2 Recommender systems and collective culture

See recommender dynamics.

3 Categorization and power

See adversarial categorization.

4 Incoming

5 References

Eilat, and Rosenfeld. 2023. “Performative Recommendation: Diversifying Content via Strategic Incentives.”
Farrell, and Fourcade. 2023. “The Moral Economy of High-Tech Modernism.” Daedalus.
Kilbertus, Rojas Carulla, Parascandolo, et al. 2017. “Avoiding Discrimination Through Causal Reasoning.” In Advances in Neural Information Processing Systems 30.
Lanier. 2010. You Are Not a Gadget: A Manifesto.
Laufer. 2020. “Compounding Injustice: History and Prediction in Carceral Decision-Making.” arXiv:2005.13404 [cs, stat].
Raghavan. 2021. “The Societal Impacts of Algorithmic Decision-Making.”
Susskind, and Susskind. 2018. “The Future of the Professions.” Proceedings of the American Philosophical Society.
Venkatasubramanian, Scheidegger, Friedler, et al. 2021. “Fairness in Networks: Social Capital, Information Access, and Interventions.” In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. KDD ’21.