"Most expected value is in the far future." Because there are so many potential future lives, the value of the far future dominates the value of any near-term considerations.
Why this needs to be retired: just because a cause has high importance doesn't mean it has high tractability and low crowdedness. It could (and hopefully will soon) be the case that the best interventions for improving the far future are fully funded, and the next-best intervention is highly intractable. Moreover, for optimally allocating the EA budget, we care about the expected value of the marginal action, not the average expected value.
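To make the marginal-vs-average distinction concrete, here is a minimal toy sketch (the marginal_value helper and every number are my own inventions, purely for illustration):

```python
# Toy illustration: a cause can dominate on average EV while losing on marginal EV
# once its best opportunities are already funded. All numbers are invented.

def marginal_value(opportunities, already_funded):
    """Value of the next unfunded opportunity, assuming the best ones get funded first."""
    remaining = sorted(opportunities, reverse=True)[already_funded:]
    return remaining[0] if remaining else 0.0

# Value per $1M of successive opportunities within each cause (best first).
far_future = [1000, 500, 0.1, 0.1]   # a few huge opportunities, then a cliff
near_term  = [10, 9, 8, 7]           # steadier returns

print(sum(far_future) / len(far_future))   # average EV: 375.05
print(sum(near_term) / len(near_term))     # average EV: 8.5
print(marginal_value(far_future, already_funded=2))  # marginal EV: 0.1
print(marginal_value(near_term,  already_funded=2))  # marginal EV: 8
```

The point is only structural: once the two huge far-future opportunities are funded, the next near-term dollar does more good, even though the far-future cause looks far better on average.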
"What matters most about our actions is their very long term effects."
Why this needs to be retired: there are only a small number of actions where we have a hope of reasonably estimating long-term effects, namely, actions affecting lock-in events like extinction or misaligned AGI spreading throughout the universe. For all other actions, estimating long-term effects is nearly impossible. Hence, this is not a practical rule to follow.
How do you understand the claim about expected value? What is the expectation being taken over?
What are some examples of such proxies?
Why would we care about a hypothetical scenario where we're omniscient? Shouldn't we focus on the actual decision problem being faced?
Over my probability distribution for the future. In my expected/average future, almost all lives/experiences/utility/etc. are in the long-term future. Moreover, the variance of such a variable across possible futures is almost entirely due to differences in the long-term future.
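To pin the claim down in symbols (my own notation, nothing more than a restatement): write $V = V_{\text{near}} + V_{\text{far}}$ for total value split into a near-term and a long-term component. Then

$$\mathbb{E}[V] = \mathbb{E}[V_{\text{near}}] + \mathbb{E}[V_{\text{far}}], \qquad \operatorname{Var}(V) = \operatorname{Var}(V_{\text{near}}) + \operatorname{Var}(V_{\text{far}}) + 2\operatorname{Cov}(V_{\text{near}}, V_{\text{far}}),$$

and the claim is that $\mathbb{E}[V_{\text{far}}]$ dominates the first sum and the $V_{\text{far}}$ terms dominate the second.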
Sure, for the sake of making decisions. For the sake of abstract propositions about "what matters most," it's not necessarily constrained by what we know.
Okay, so you're thinking about what an outside observer would expect to happen. (Another approach is to focus on a single action A, and think about how A affects the long-run future in expectation.)
Coming back to this, in my experience the quote is used to express what we should do; it's saying we should focus on affecting the far future, because that's where the value is. It's not merely pointing out where the value is, with no reference to being actionable.
To give a contrived example: suppose there's a civilization in a galaxy far away that's immeasurably larger than our total potential future, and we could give them ~infinite utility by sending them a single photon. But they're receding from us faster than the speed of light, so nothing we do can ever reach them. Here, all of the expected value is in this civilization, yet it has no bearing on how the EA community should allocate our budget.
I just don't think MacAskill/Greaves/others intended this to be interpreted as a perfect-information scenario with no practical relevance.
What do you think about MacAskill's claim that "there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear."?
I mostly agree that obviously great stuff gets funding, but I think the "marginal stuff" is still orders of magnitude better in expectation than almost any neartermist interventions.
Do you disagree with FTX funding lead elimination instead of marginal x-risk interventions?
Not actively. I buy that doing a few projects with sharper focus and tighter feedback loops can be good for community health & epistemics. I would disagree if it took a significant fraction of funding away from interventions with a more clear path to doing an astronomical amount of good. (I almost added that it doesn't really feel like lead elimination is competing with more longtermist interventions for FTX funding, but there probably is a tradeoff in reality.)
I was just about to make all three of these points (with the first bullet containing two), so thank you for saving me the time!
I'm unsure if I agree or not. I think this could benefit from a bit of clarification on the "why this needs to be retired" parts.
For the first slogan, it seems like you're saying that this is not a complete argument for longtermism - just because the future is big doesn't mean it's tractable, or neglected, or valuable at the margin. I agree that it's not a complete argument, and if I saw someone framing it that way I would object. But I don't think that means we need to retire the phrase unless we see it being constantly used as a strawman or something? It's not complete, but it's a quick way to summarize a big part of the argument.
For the second one, it sounds like you're saying this is misleading - it doesn't accurately represent the work being done, which is mostly on lock-in events, not affecting the long-term future. This is true, but it takes only one extra sentence to say "but this is hard so in practice we focus on lock-in". It's a quick way to summarize the philosophical motivations, but does seem pretty detached from practice.
I think my takeaway from thinking through this comment is this:
I do often see it used as an argument for longtermism, without reference to tractability.
So: "What matters most about our actions is their very long term effects, but this is hard so in practice we focus on lock-in".
But why bother making the claim about our actions in general? It seems like an attempt to make a grand theory where it's not warranted.
I think the existence of investing for the future as a meta option to improve the far future essentially invalidates both of your points. Investing money in a long-term fund won't hit diminishing returns anytime soon. I think of it as the "GiveDirectly of longtermism".
I'd be interested to see the details. What's the expected value of a rainy day fund, and what factors does it depend on?
Founders Pledge's Investing to Give report is an accessible resource on this.
I wrote a short overview here.
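To gesture at what the expected value depends on, here is a toy model of my own (not taken from the report or the overview; the give_later_multiplier function and all parameters are assumptions): the case for investing for t years turns on the real investment return, the annual risk that the fund is lost, expropriated, or drifts in values, and how the marginal cost-effectiveness of the best opportunities changes over time.

```python
# Toy model (my own simplification, not Founders Pledge's): EV of investing for
# t years and then giving, relative to giving now.

def give_later_multiplier(t, r=0.05, risk=0.02, ce_decay=0.03):
    """Ratio of (EV of investing for t years, then giving) to (EV of giving now).

    r        - real investment return per year (assumed)
    risk     - annual chance the fund is expropriated or value-drifts away (assumed)
    ce_decay - annual decline in marginal cost-effectiveness of the best opportunities (assumed)
    """
    return ((1 + r) * (1 - risk) * (1 - ce_decay)) ** t

print(give_later_multiplier(10))                                    # ~0.98: roughly break-even
print(give_later_multiplier(10, r=0.07, risk=0.01, ce_decay=0.0))   # ~1.78: waiting looks better
```

The conclusion is driven entirely by which of these assumed rates you think is largest, which is why I don't think there is a one-line answer.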
Do you think FTX funding lead elimination is a mistake, and that they should do patient philanthropy instead?
Well I'd say that funding lead elimination isn't longtermist, all other things equal. It sounds as if FTX's motivation for funding it was community health / PR, in which case it may have longtermist benefits through those channels.
Whether longtermists should be patient or not is a tricky, nuanced question which I am unsure about, but I would say I’m more open to patience than most.
You might be interested in checking out a GPI paper which argues the same thing as your second point: The Scope of Longtermism
Here's the full conclusion:
I think my takeaway from this slogan is: given limited evaluation capacity + some actions under consideration, a substantial proportion of this capacity should be devoted to thinking about long-term effects.
It could be false: maybe it's easy to conclude that nothing important can be known about the long term effects. However, I don't think this has been demonstrated yet.
I would flip it around: we should seek out actions that have predictable long-term effects. So, instead of starting from the set of all possible actions and estimating the long-term effects for each one (an impossible task), we would start by restricting the action space to those with predictable long-term effects.
How about this:
A) Take the top N interventions when ranked by putting all evaluation effort into their far-future effects
B) Take the top N interventions when ranked by putting more evaluation effort into their near-term than their far-future effects
(you can use whatever method you like to prioritise the interventions you investigate). Then, for most measures of value, group (A) will have a much higher expected value than group (B). Hence "most of the expected value is in the far future".
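A toy numerical version of this, with numbers made up purely to illustrate the structure of the argument:

```python
# Toy sketch: the same pool of interventions, ranked two ways; compare the
# total EV of the top-N group under each ranking. All numbers are invented,
# assuming far-future effects, where present, dwarf near-term ones.

# (near-term EV, far-future EV) in arbitrary units.
pool = [
    (10, 0), (9, 0), (8, 0), (7, 0), (6, 0),   # solid near-term interventions
    (1, 1e6), (0.5, 5e5), (0.1, 2e5),          # speculative far-future interventions
    (2, 0), (3, 0),
]

N = 3
group_a = sorted(pool, key=lambda x: x[1], reverse=True)[:N]  # ranked by far-future EV
group_b = sorted(pool, key=lambda x: x[0], reverse=True)[:N]  # ranked by near-term EV

total_ev = lambda group: sum(near + far for near, far in group)
print(total_ev(group_a))  # 1700001.6 (dominated by far-future terms)
print(total_ev(group_b))  # 27 (near-term terms only)
```

Under these (strong) assumptions, group (A)'s total expected value dwarfs group (B)'s, which is the sense in which "most of the expected value is in the far future".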
Your initial comment was about slogan 2 ("What matters most about our actions is their very long term effects"). I continue to think that this is not a useful framing. Some of our actions have big and predictable long-term effects, and we should focus on those. But most of our actions don't have predictable long-term effects, so we shouldn't be making generic statements about the long-term effects of an arbitrary action.
Re slogan 1 ("Most expected value is in the far future"), it sounds like you're interpreting it as being about the marginal EV of an action. I agree that it's possible for the top long-term-focused interventions to currently have a higher marginal EV than near-term-focused interventions. But as these interventions are funded, I expect their marginal EV to decline (i.e. diminishing returns), possibly to a value lower than the marginal EV of near-term-focused interventions.
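To make the diminishing-returns point concrete, here is a toy model (the functional form, the marginal_ev helper, and all parameters are my own assumptions, not claims about actual funding levels):

```python
# Toy diminishing-returns model: total value from a long-term-focused intervention
# at cumulative funding F (in $M) is V(F) = a * ln(1 + F / b), so its marginal
# value is dV/dF = a / (b + F), which falls as F grows.

def marginal_ev(funding_m, a=1e6, b=10.0):
    """Marginal expected value per additional $1M at a given funding level (in $M)."""
    return a / (b + funding_m)

NEAR_TERM_MARGINAL = 500.0  # assumed flat marginal EV of a near-term benchmark

for funding_m in [0, 10, 100, 1000, 10000]:
    m = marginal_ev(funding_m)
    print(funding_m, round(m, 1), m > NEAR_TERM_MARGINAL)
# 0      100000.0  True
# 10      50000.0  True
# 100      9090.9  True
# 1000     990.1   True
# 10000    99.9    False
```

With these made-up numbers the long-term intervention starts out orders of magnitude better at the margin, but drops below the near-term benchmark once cumulative funding passes roughly $2B; read as a claim about marginal EV, the slogan is contingent on funding levels rather than timeless.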