Kelsey Piper recently published "Caring about the future doesn’t mean ignoring the present" on Vox, arguing that effective altruism hasn’t abandoned its roots and that longtermist goals for the future don’t hurt people in the present.

In particular, it points out that effective altruism is growing on all fronts, and it highlights GiveWell's fundraising:

GiveWell's Funds Raised

It also argues that the movement's growth brings in a wider range of people more than it pulls funding away from one cause to another: the relationship is symbiotic.

You could imagine the EA movement growing to the point where further growth is mostly about persuading people of intra-movement priority changes, but that day is very far in the future.

This is not to say that I think effective altruism should just be about whatever EAs want to do or fund. Prioritization of causes is at the heart of the movement — it’s the “effective” in effective altruism.

But the recent funding data does incline me toward worrying less that new focuses for effective altruism will come at the direct expense of existing ones, or that we must sacrifice the welfare of the present for the possibilities of the future.

In a growing, energized, and increasingly powerful movement, there is plenty of passion — and money — to go around.

Comments (5)



I agree that increased interest in longtermism hasn't caused EA as a whole to decrease funding to other causes in practice. But I don't think that this is in itself good. As the article acknowledges, prioritising between causes is an essential part of doing EA.

So if, all things considered, we thought that dropping all work helping present generations to exclusively prioritise future generations would lead to better outcomes, I think we should be willing to do that.

I particularly disagree with this quote from the article:

But if the shift to longtermism meant that effective altruists would stop helping the people of the present, and would instead put all their money and energy into projects meant to help the distant future, it would be doing an obvious and immediate harm.

Someone could equally well argue that prioritising bednets or animal advocacy over helping local homeless people would be bad because it is an obvious and immediate harm, but I think that they would be making an important mistake.

Of course, there may be instrumental reasons to keep prioritising global health and wellbeing projects. For example, you might think that:

  • The direct impact of these projects can't be beaten. That is, longtermist causes simply aren't important enough to deserve all our resources.
  • The experience gained from these projects is among the best ways to learn how to actually get things done in the world.
  • Having a track record of doing good things will bring the movement more people, money, trust, and influence than other approaches would.
  • Having a broad EA movement is valuable, perhaps because it makes it easier for us to spot the best opportunities and change course.

I would have preferred for the article to argue more directly for some of these as the actual reasons it's good that EA has not deprioritised global health and development.

Ah jtm has written a comment mentioning some similar points before I refreshed the page!

jtm

Thanks for sharing this!

I think this quote from Piper is worth highlighting:

(...) if the shift to longtermism meant that effective altruists would stop helping the people of the present, and would instead put all their money and energy into projects meant to help the distant future, it would be doing an obvious and immediate harm. That would make it hard to be sure EA was a good thing overall, even to someone like me who shares its key assumptions.


I broadly agree with this, except I think the first "if" should be replaced with "insofar as." Even as someone who works full-time on existential risk reduction, I think it is very clear that longtermism is causing this obvious and immediate harm; the question is whether that harm is outweighed by the value of pursuing longtermist priorities.

GiveWell growth is entirely compatible with the fact that directing resources toward longtermist priorities means not directing them toward present challenges. Thus, I think the following claim by Piper is unlikely to be true:

My main takeaway from the GiveWell chart is that it’s a mistake to believe that global health and development charities have to fight with AI and biosecurity charities for limited resources.

To make that claim, you have to speculate about the counterfactual situation where effective altruism didn't include a focus on longtermism.  E.g., you can ask:

  1. Would major donors still be using the principles of effective altruism for their philanthropy? 
  2. Would support for GiveWell charities have been even greater in that world? 
  3. Would even more people have been dedicating their careers to pressing current challenges like global development and animal suffering?  

My guess is that the answer to all three is "yes", though of course I could be wrong and I'd be open to hearing arguments to the contrary. In particular, I'd love to see evidence for the idea of a 'symbiotic' or synergistic relationship. What are the reasons to think that the focus on longtermism has been helpful for more near-term causes? E.g., does longtermism help bring people on board with Giving What We Can who otherwise wouldn't have been? I'm sure that's the case for some people, but how many? I'm genuinely curious here!

To be clear, it's plausible that longtermism is extremely good for the world all-things-considered and that longtermism can coexist with other effective altruism causes. 

But it's very clear that focusing on longtermism trades off against focusing on other present challenges, and it's critical to be transparent about that. As Piper says, "prioritization of causes is at the heart of the [effective altruism] movement."

jtm

In a nutshell: I agree that caring about the future doesn't mean ignoring the present. But it does mean deprioritising the present, and this comes with very real costs that we should be transparent about.

I like this piece, but I think it misses an opportunity to comment more broadly on the dynamic at work. My own impression can be glossed roughly this way: most money goes to the here and now, while most careers go to the future (neither is an overwhelming majority, and FTX may have changed the funding balance). This makes sense given the respective talent and funding gaps, and it means the two don’t really need to compete much at all; indeed, many of the same people contribute to both in different ways.
