I'm very new to the EA movement, but I wonder how much EA has actually shifted from "neartermism" to longtermism, rather than having always been about both.

I see comments from 10 years ago saying things like:

80k members give to a variety of causes. When we surveyed, 34% were intending to give to x-risk, and it seems fairly common for people who start thinking about effective altruism to ultimately conclude that x-risk mitigation is one of, or the, most important cause areas. [...]

If I save the life of someone in the developing world, almost all the benefit I produce comes through compounding effects: I speed up technological progress by a tiny margin, giving us a little more time at the end of civilisation, when there are far more people. This benefit dwarfs the benefit to the individual whose life I've saved (as Bostrom argues in the first half of Astronomical Waste). Now, I also increase the amount of animal suffering, because the person whose life I've saved consumes meat, and I speed up the development of the country, which means it starts factory farming sooner. However, we should expect (or, at least, I expect) factory farming to disappear within the next few centuries, as cheaper and tastier meat substitutes are developed. So the increase in animal suffering doesn't compound in the same way: whereas the benefits of saving a life continue until the human race (or its descendants) dies out, the harm of increased meat consumption ends after only a few centuries (when we move beyond factory farming).

So let's say the benefit to the person from having their life saved is N. The harm from increasing factory farming might be larger in magnitude: maybe -10N. But the benefit from speeding up technological progress is vastly greater than that: 1000N, or something. So it's still a good thing to save someone's life in the developing world. (Though of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation.)
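To make the quoted toy numbers explicit (the arithmetic here is mine; the round figures are the commenter's illustration, not real estimates): the net effect is roughly N - 10N + 1000N = 991N, which is still strongly positive, so the comment's conclusion does follow from its own premises.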

Of course, it's just one random comment, and the actual views of the community were certainly more varied. But it doesn't seem very different from current views?
Or are you referring to the very early days of EA (2009 rather than 2012)?
Or to the fact that more than ~34% of people in EA are now focused on x-risk?
Or did EA previously present itself to outsiders as being about neartermism, while keeping the longtermist material more internal?


In practice, it seems that Global Health and Development still receives the most funding, at least from OpenPhil and GWWC. Do you think the balance has shifted mostly in terms of narrative, or in terms of actions?

Disclaimer: as mentioned, I'm relatively new; I have not read Doing Good Better or What We Owe the Future.