Institutional alignment problems

Mechanism design for distributed moral wetware

February 27, 2024

Tags: extended self, faster pussycat, game theory, incentive mechanisms

Let me restate the subtitle: mechanism design for formalized, distributed moral wetware, as opposed to the even vaguer notion of movement design. That is, the design of institutions which, if done wrong, produce too much bureaucracy or fail to do the thing they were tasked to do.
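To pin down what "mechanism design" means formally (not discussed in the post itself), here is a minimal sketch of the textbook example: a second-price (Vickrey) auction, a mechanism whose rules make truthful reporting a dominant strategy for the participants.

```python
# Minimal mechanism-design sketch (illustrative, not from the post):
# a second-price (Vickrey) auction. The highest bidder wins but pays the
# second-highest bid, which makes bidding one's true value weakly dominant.

def vickrey_auction(bids):
    """Return (winner index, price paid): highest bid wins, pays second-highest."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]

def utility(value, bid, other_bids):
    """Payoff for a bidder with true value `value` who bids `bid` (index 0)."""
    winner, price = vickrey_auction([bid] + list(other_bids))
    return value - price if winner == 0 else 0.0

# Truthful bidding weakly dominates a range of deviations for this bidder:
value, others = 10.0, [7.0, 4.0]
truthful = utility(value, value, others)
assert all(utility(value, b, others) <= truthful
           for b in [0.0, 5.0, 8.0, 12.0, 20.0])
```

The point of the example is the institutional one: good outcomes come not from trusting agents to behave, but from rules under which self-interested behaviour yields the desired result.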

1 References

Guha, Lawrence, Gailmard, et al. 2023. “AI Regulation Has Its Own Alignment Problem: The Technical and Institutional Feasibility of Disclosure, Registration, Licensing, and Auditing.” George Washington Law Review, Forthcoming.
Ingstrup, Aarikka-Stenroos, and Adlin. 2021. “When Institutional Logics Meet: Alignment and Misalignment in Collaboration Between Academia and Practitioners.” Industrial Marketing Management.
Korinek, and Balwit. 2022. “Aligned with Whom? Direct and Social Goals for AI Systems.” Working Paper 30017.