Moral wetware
What ethical operating systems can be executed on our neurosocial substrate?
April 27, 2020 — December 5, 2023
Being right does not always feel good, and there are many definitions of right and of good. When can these concepts align?
I think it is helpful to think about feasible social systems in terms of moral wetware. What is the kindest moral system that can be projected onto the cooperative subsystems that evolution has built into our social brains? What ethical systems are learnable, and how well do they generalise to out-of-distribution samples? How do we interpret our feelings, our beliefs and our tribes in this light? We can devise fairly complicated subsystems to do this, but not all are equally viable or effective, and the reasons are often opaque to us. Do we need to build awareness of our own biases into mechanism design?
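One way to make the question of which cooperative rules are executable and stable concrete is the classic iterated prisoner's dilemma, where simple reciprocal and forgiving strategies can be compared directly. A minimal sketch (all names and parameters here are my own illustration, not anything proposed in these notes):

```python
import random

# Standard prisoner's dilemma payoffs: (row player, column player).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the partner's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def generous_tft(opponent_history, forgiveness=0.1, rng=random.Random(0)):
    """Tit-for-tat, but occasionally forgives a defection --
    a toy stand-in for a 'kinder' norm."""
    if opponent_history and opponent_history[-1] == "D" \
            and rng.random() > forgiveness:
        return "D"
    return "C"

def play(a, b, rounds=200):
    """Run a repeated game; each strategy sees only the other's past moves."""
    ha, hb = [], []
    sa = sb = 0
    for _ in range(rounds):
        ma, mb = a(hb), b(ha)
        pa, pb = PAYOFF[(ma, mb)]
        sa, sb = sa + pa, sb + pb
        ha.append(ma)
        hb.append(mb)
    return sa, sb

if __name__ == "__main__":
    print("TFT vs TFT: ", play(tit_for_tat, tit_for_tat))   # → (600, 600)
    print("TFT vs ALLD:", play(tit_for_tat, always_defect))  # → (199, 204)
    print("GTFT vs ALLD:", play(generous_tft, always_defect))
```

The interesting (and unresolved) part of the questions above is precisely what this toy omits: real norms must run on noisy, biased wetware and generalise beyond the partners they were trained against.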
In the great human project of working this out by trial and error, I am personally testing the hypothesis that permissive, nourishing systems are attainable and sustainable, even in the modern public sphere, but I could be proven wrong.
Notes ongoing.
1 Incoming
- Casting out the wolf in our midst, on the Kóryos and the management and harnessing of youthful male aggression
- Complexity Rising: From Human Beings to Human Civilization, a Complexity Profile
- Yaneer Bar-Yam, Ethical values: A multiscale scientific perspective
- The Return of Communism - by Robin Hanson
- A critique of Moral foundations theory (Haidt 2013) from a notionally biological standpoint: Suhler and Churchland (2011)
- eigenrobot, in effective altruism and its future runs through the status calculus of social agency capture
- Deep atheism and AI risk - Joe Carlsmith
Can we explain the algorithms in moral wetware?