IMO, “effective altruism” is not directly about altruism so much as a more disruptive-soundin’ branding of “empirical charity”.
There’s probably interesting stuff to say about optimal altruism, but I’m probably not the one to do it. Except to note that it’s complicated, and to flag some weird bits.
Bjørn Lomborg, for example, gives an eloquent justification of the need to account for opportunity costs (a dollar spent saving rich people from cancer is a dollar not spent saving poor people from malaria) but then makes abysmal recommendations for optimality based on shallow marginalist economics in the most inappropriate of places. Don’t get me started on his assessment of tail risk. We need a better Bjørn Lomborg. What is the opportunity cost of keeping this Bjørn Lomborg? The ROI on investing in better Bjørn Lomborgs?
Aaaaanyway, indistinct generalised rants aside, this page mostly exists to bookmark 80000hours, a career-advice site, because I find their considered analytic posturing sweet if awkward; it reminds me of over-earnest boyfriends, e.g.
Which would you choose from these two options?
- Prevent one person from suffering next year.
- Prevent 100 people from suffering (the same amount) 100 years from now.
Most people choose the second option. It’s a crude example, but it suggests that they value future generations.
If people didn’t want to leave a legacy to future generations, it would be hard to understand why we invest so much in science, create art, and preserve the wilderness.
We’d certainly choose the second option. […]
First, future generations matter, but they can’t vote, they can’t buy things, and they can’t stand up for their interests. This means our system neglects them; just look at what is happening with issues like climate change.
Second, their plight is abstract. We’re reminded of issues like global poverty and factory farming far more often. But we can’t so easily visualise suffering that will happen in the future. Future generations rely on our goodwill, and even that is hard to muster.
Third, there will probably be many more people alive in the future than there are today.
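A back-of-envelope way to see what that thought experiment implies: under plain exponential discounting, preferring 100 people spared in 100 years over 1 person spared next year commits you to an annual discount rate on suffering below roughly 4.7%. A minimal sketch (my numbers, not 80000hours’):

```python
# At what annual discount rate r does preventing 100 people's suffering
# in 100 years break even with preventing 1 person's suffering now?
# Solve 100 / (1 + r)**100 = 1  =>  r = 100**(1/100) - 1.

def present_value(people: float, years: float, rate: float) -> float:
    """Discounted present value of preventing `people` units of suffering."""
    return people / (1 + rate) ** years

breakeven = 100 ** (1 / 100) - 1
print(f"break-even discount rate: {breakeven:.2%}")  # about 4.7% per year

# Below that rate, the far-future option wins...
assert present_value(100, 100, 0.03) > present_value(1, 0, 0.03)
# ...and above it, the near-term option wins.
assert present_value(100, 100, 0.06) < present_value(1, 0, 0.06)
```

So choosing the second option only says your discount rate is modest, not zero; the quote’s conclusion about valuing future generations survives a fair bit of discounting.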
You might be entertained to discover that their top recommendations for problems to tackle are:
- Risks from artificial intelligence
- Promoting effective altruism
- Global priorities research
Like many of the “rationalist community” projects, though, whilst I find it faintly embarrassing to be caught reading, there is some interesting new DIY ethics in there alongside the rediscovered old stuff and the prematurely quantified stuff. See also lesswrong etc, trolley problems.
Here’s a question for the actual effective altruists to answer: does anyone have a better version of this next quote? I want one that looks at the net costs of different modes of distribution instead of lumping everything into “capitalism” versus “other”.
The core problem is the bourgeois moral philosophy that the movement rests upon. Effective Altruists abstract from—and thereby exonerate—the social dynamics constitutive of capitalism. […] capital’s commodification of necessities directly undermines the self-sufficiency of entire populations by determining how resources are allocated. […]
In the meantime, capital extracts around $2 trillion annually from “developing countries” through things like illicit financial flows, tax evasion, debt service, and trade policies advantageous to the global capitalist class. […]
These dynamics, which spring from capital’s insistence on the commodification of necessities, are what turn billions of people into drowning strangers and generate a need for ever-multiplying charitable organizations in the first place.
I am skeptical of incrementalist behaviour in EA myself, and in particular of the preference for underwhelming, low-risk changes with unspectacular impact (subsidising mosquito-net distribution) over high-risk, structurally revolutionary changes (taking land from elites and giving it to peasants). There is an implicit risk aversion there, about which I have Opinions that I should muster.
I think decent decision theory will allow us to favour both of those over charities that are very unlikely to do much good by any plausible metric. Also, I do like the fundamental EA insight that opportunity costs matter in charity; I would like to keep that around.
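To make that decision-theory point concrete, here is a toy expected-value comparison. All three charities and every probability and impact figure are invented for illustration; the point is only that even a risk-neutral criterion separates both serious options from a near-useless one:

```python
# Toy expected-value comparison of three hypothetical charities.
# All probabilities and impact numbers are made up for illustration.

def expected_impact(p_success: float, impact_if_success: float) -> float:
    """Risk-neutral expected impact: probability times payoff."""
    return p_success * impact_if_success

bednets     = expected_impact(0.90, 1_000)      # low-risk, modest impact
land_reform = expected_impact(0.05, 1_000_000)  # high-risk, huge impact
feelgood    = expected_impact(0.001, 100)       # very unlikely to do much good

print(f"bednets:     {bednets:,.1f}")
print(f"land reform: {land_reform:,.1f}")
print(f"feelgood:    {feelgood:,.1f}")

# A plain expected-value criterion favours both serious options
# over the near-useless one; risk aversion only changes the ranking
# between the two serious ones.
assert min(bednets, land_reform) > feelgood
```

Where the implicit EA risk aversion bites is in the `bednets` versus `land_reform` comparison: a concave utility over impact would pull the ranking towards the low-variance option even when its expected value is smaller.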