Not directly related to the term altruism, but rather a more disruptive-soundin’ branding of “empirical charity”.
There’s probably interesting stuff to say about optimal altruism, but I’m probably not the one to say it, except to note that it’s complicated, and to point out some weird bits.
Bjørn Lomborg, for example, gives an eloquent justification of the need to account for opportunity costs (a dollar spent saving rich people from cancer is a dollar not spent saving poor people from malaria) but then makes abysmal recommendations for optimality based on shallow marginalist economics in the most inappropriate of places. Don’t get me started on his assessment of tail risk. We need a better Bjørn Lomborg. What is the opportunity cost of keeping this Bjørn Lomborg? The ROI on investing in better Bjørn Lomborgs?
Aaaaanyway, indistinct generalised rants aside, this page mostly exists to bookmark 80000hours, a career-advice site, because I thought their considered analytic posturing was sweet if awkward, and it reminds me of over-earnest boyfriends, e.g.
> Which would you choose from these two options?
>
> - Prevent one person from suffering next year.
> - Prevent 100 people from suffering (the same amount) 100 years from now.
>
> Most people choose the second option. It’s a crude example, but it suggests that they value future generations.
>
> If people didn’t want to leave a legacy to future generations, it would be hard to understand why we invest so much in science, create art, and preserve the wilderness.
>
> We’d certainly choose the second option. […]
>
> First, future generations matter, but they can’t vote, they can’t buy things, and they can’t stand up for their interests. This means our system neglects them; just look at what is happening with issues like climate change.
>
> Second, their plight is abstract. We’re reminded of issues like global poverty and factory farming far more often. But we can’t so easily visualise suffering that will happen in the future. Future generations rely on our goodwill, and even that is hard to muster.
>
> Third, there will probably be many more people alive in the future than there are today.
You might be entertained to discover that their top recommendations for problems to tackle are

- Risks from artificial intelligence
- Promoting effective altruism
- Global priorities research
Like many of the “rationalist community” projects, though, whilst I find it faintly embarrassing to be caught reading them, there is some interesting new DIY ethics in there alongside the rediscovered old stuff and the prematurely quantified stuff. See also lesswrong etc., trolley problems.