Postdoc at the Digital Economy Lab, Stanford, and research affiliate at the Global Priorities Institute, Oxford. I'm slightly less ignorant about economic theory than about everything else.
I agree that the question of when to give is very important, and that it's often underappreciated how strong a reason compound interest is for giving later. This seems to be a subject people in EA rediscover every few years and then largely forget about--it's a shame that the intricacies of the arguments back and forth get lost in the process, but it's good to see that people stay interested in thinking this through.
If it's helpful at all, here's a relatively comprehensive writeup of my own thoughts on the subject from three years ago (and an even older talk and podcast, for the more audio-visually inclined; and a fund some people at Founders Pledge set up for those interested in committing to long-term saving). You can also find various objections to (and elaborations on) this material from others on the EA Forum at that time, many of them excellent. I do agree with the common response that the prospect of near-term transformative AI strengthens the case for giving sooner, though not quite as straightforwardly or extremely as it might seem at first, and in fact I'm currently in the middle of writing up some thoughts on this front. But in the meantime, let me just pitch checking out the old commentary. : )
Thanks, that's a great blog post and very relevant!
I don't think I agree with Robin's proposal that the main reason for all this product proliferation is that we want to have unique items. It seems to me that we have a lot of proliferation in domains where we aren't particularly keen on expressing ourselves, like brands of pencils, which is consistent with the standard explanation that each entrepreneur needs a tiny bit of market power to profit from his or her innovation. But whatever the reason, I agree that Robin could well be right that in some sense we get way too much (trivially distinct) product variety by default.
It does :) This may also be relevant.
I don't see the justification for donating the interest. If we think the marginal utility of the poor will fall at a rate slower than the interest rate, as it typically will if the poor (and others spending on them) discount the future and are aware of how much consumption they'll have in the future, then it's optimal to save everything, including the interest, until the rate at which their marginal utility falls catches up with the interest rate.
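To spell that condition out in symbols (just a toy formalization in my own notation, not something from anyone's paper): let $r$ be the interest rate and $u'(c_t)$ the marginal utility of the poor's consumption $c_t$ at time $t$. The value, in recipient-utility terms, of investing a dollar until time $t$ and then giving it away is

$$V(t) = e^{rt}\,u'(c_t), \qquad \frac{d}{dt}\ln V(t) = r - \left(-\frac{\dot u'(c_t)}{u'(c_t)}\right),$$

so $V(t)$ keeps rising, and it's better to stay fully invested (interest included), as long as marginal utility is falling at a rate below $r$; the time to give is when the rate of decline has caught up with $r$.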
Fair enough; I think the lack of a direct response has been due to an interaction between the two things. At first, people familiar with the existing arguments didn't see much to respond to in David's arguments, and figured most people would see through them. Later, when David's arguments had gotten around more and it became clear that a response would be worthwhile (and, for that matter, when genuinely novel new arguments had been made), the small handful of people who had been exploring the case for longtermism had mostly moved on to other projects.
I would disagree a bit about why they moved on, though: my impression is that the bad associations the word "longtermism" picked up from FTX were only slightly responsible for their shift in focus, and that the main driver was just that faster-than-expected AI progress mostly convinced them that the most valuable philosophy work to be done was more directly AI-related.
Thanks for saying a bit more about how you’re interpreting “scope of longtermism”. To be as concrete as possible, what I'm assuming is that we both read Thorstad as saying that “a philanthropist giving money away so as to maximize the good from a classical utilitarian perspective” is typically outside the scope of decision-situations that are longtermist, but let me know if you read him differently on that. (I think it’s helpful to focus on this case because it’s simple, and because it’s the case G&M most clearly argue is longtermist on the basis of those two premises.)
It’s a tautology that the G&M conclusion that the above decision-situation is longtermist follows from the premises, and no, I wouldn't expect a paper disputing the conclusion to argue against this tautology. I would expect it to argue, directly or indirectly, against the premises. And you’ve done just that: you’ve offered two perfectly reasonable arguments for why the G&M premise (ii) might be false, i.e. giving to PS/B612F might not actually do more than 2x as much good in the long term as the GiveWell charity does in the short term. (1) In footnote 2, you point out that the chance of near-term x-risk from AI may be very high. (2) You say that the funding needs of asteroid monitoring sufficient to alert us to impending catastrophe are plausibly already met. You also suggest in footnote 3 that maybe NGOs will do a worse job of it than the government.
I won’t argue against any of these possibilities, since the topic of this particular comment thread is not how strong the case for longtermism is all things considered, but whether Thorstad’s “Scope of LTism” successfully responds to G&M’s argument. I really don't think there's much more to say. If there’s a place in “Scope of LTism” where Thorstad offers an argument against (i) or (ii), as you’ve done, I’m still not seeing it.
First, to clarify, Greaves and MacAskill don’t use the Spaceguard Survey as their example. They use giving to the Planetary Society or B612 Foundation as their example, which do similar work.
Could you spell out what you mean by “the actual scope of longtermism”? In everyday language this might sound like it means “the range of things it’s justifiable to work on for the sake of improving the long term”, or something like that, but that’s not what either Thorstad or Greaves and MacAskill mean by it. They mean [roughly; see G&M for the exact definition] the set of decision situations in which the overall best act does most of its good in the long term.
Long before either of these papers, people in EA (and of course elsewhere) had been making fuzzy arguments for and against propositions like “the best thing to do is to lower x-risk from AI because this will realize a vast and flourishing future”. The project G&M, DT, and other philosophers in this space were engaged in at the time was to go back and carefully, baby step by baby step, formalize the arguments that go into the various building blocks of these “the best thing to do is...” conclusions. That way, it’s easier to identify which elements of the overall conclusion follow from which assumptions, how someone might agree with some elements but disagree with others, and so on. The “[scope of] longtermism” framing was deliberately defined broadly enough that it doesn’t make claims about what the best actions are: it includes the possibility that giving to the top GiveWell charity is the best act because of its long-term benefits (e.g. saving the life of a future AI safety researcher).
The Case offers a proof that if you accept the premises (i) that giving to the top GiveWell charity is the way to do the most good in the short term and (ii) that giving to PS/B612F does more than 2x as much good [~all of it in the long term] as the GiveWell charity does in the short term, then you accept (iii) that the scope of longtermism includes every decision situation in which you’re giving money away. It also argues for premise (ii), semi-formally but not with anything like a proof.
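In case it’s useful, here’s the simple arithmetic behind that proof as I read it (the notation is mine, not G&M’s): write $G$ for the short-term good done by giving a dollar to the top GiveWell charity, and for any way of giving the dollar away write $S$ and $L$ for its short-term and long-term good respectively. Premise (i) says every option has $S \le G$; premise (ii) says some option (PS/B612F) has total good $S + L > 2G$. The best option in the decision situation does at least that much total good, so for the best option

$$S + L > 2G \ge 2S \quad\Longrightarrow\quad L > S,$$

i.e. it does most of its good in the long term, which is just what it takes for the decision situation to count as longtermist in the G&M sense.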
Again, the whole point of doing these sorts of formalizations is that they help to sharpen the debate: they show that a response claiming that the scope of longtermism is actually narrow has to challenge one of those premises. All I’m pointing out is that Thorstad's “Scope of Longtermism” doesn’t do that. You’ve done that here, which is great: maybe (ii) is false because giving to PS/B612F doesn’t actually do much good at all.
Almost all longtermists think that some interventions are better than asteroid monitoring. To be conservative, and to argue that longtermism holds even for someone skeptical of the harder-to-quantify interventions most longtermists happen to favor, the Case uses an intervention with a low but relatively precisely estimated impact, namely asteroid monitoring, and argues that it does more than 2x as much good in the long term as the top GiveWell charity does in the short term.
Thanks for sharing! I wasn't aware of the case for thinking that the right hemisphere has so much less welfare capacity than the left. If this is true, it leaves me thinking that the sum of the welfare capacities of the two parts of the split-brain patient is significantly less than 1, rather than the 0.99 I went with in the example.
It's interesting that this EA Forum post forms so much of the basis for its answer to the second question. I wonder if it's because so little has been written on this, or just because the way you asked the question used language especially similar to this post.
Even though the Gemini report seems to represent my view (what it calls the "divisive model") and Fischer's view (the "additive model") well at first, it gets pretty confused in a few places:
It then says it
(and also rejects the "Strict Additive" model), and goes with what it frames as something in the middle, but which is fully the additive model after adjusting for the fact that, in its view, the right hemisphere lacks various capacities. But despite the counterintuitive terminology it has chosen, it's the Additive view, not the Divisive view, on which "pain dilutes with volume": the Additive view says that total pain falls if you reconnect the two hemispheres of a split-brain patient in the ice bath, because on that view a welfare subject's pain is something like an average of pain across the phenomenal field rather than a sum.
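For concreteness, here's the ice-bath arithmetic as I understand the two views, with made-up numbers (so treat this as illustrative only, not as anyone's official statement of either model): suppose each hemisphere on its own would register pain of intensity 1.

$$\text{Additive: } \underbrace{1 + 1}_{\text{two disconnected subjects}} = 2 \;\longrightarrow\; \underbrace{\tfrac{1+1}{2}}_{\text{one reconnected subject (average over the field)}} = 1$$

$$\text{Divisive: } \underbrace{0.5 + 0.5}_{\text{two parts sharing the one subject's capacity}} = 1 \;\longrightarrow\; 1$$

So on the Additive view reconnection halves total pain (that's the sense in which pain "dilutes with volume"), while on the Divisive view the total is roughly unchanged; the 0.5s are just placeholders, and per the point above about the right hemisphere's capacities they needn't be equal.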