Hello again! A few months ago I posted A case against strong longtermism and it generated quite a lot of interesting feedback. I promised to write a response "in a few weeks", where by "few" I meant 9. 

Anyway, the response ballooned into multiple posts, so this piece is the first in a three-part series. In the next post I'll discuss alternatives to decision theory, and the post after that will be on the subject of knowledge and long-term prediction.

Looking forward to the discussion!

https://vmasrani.github.io/blog/2021/proving_too_much/

Comments (7)



Nice post and useful discussion. I did think this post would be a meta-comment about the EA forum, not a (continued) discussion of arguments against strong longtermism. 

If, between your actions, you can carve out the undefined/infinite welfare parts so that they're (physically) subjectively identically distributed, then you can just ignore them, as an extension of expected value maximizing total utilitarianism, essentially using a kind of additivity/separability axiom. For example, if you're choosing between two actions A and B, and their payoffs are distributed like

A: X + Z, and

B: Y + Z,

then you can just ignore Z and compare the expected values of X and Y, even if Z is undefined or infinite, or its expectation is undefined or infinite. I would only do this if Z actually represents essentially the same distribution of local events in spacetime for each of A and B, though, since otherwise you can include more or less into X and Y arbitrarily and independently, and the reduction isn't unique.
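A quick way to see why this works, assuming Z is literally the same random variable under both actions (a stronger assumption than merely being identically distributed):

$$A - B = (X + Z) - (Y + Z) = X - Y,$$

so ranking A against B only ever requires E[X − Y] = E[X] − E[Y], even when E[Z] (and hence E[A] and E[B]) is infinite or undefined.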

Unfortunately, I think complex cluelessness should usually prevent us from being able to carve out matching problematic parts so cleanly. This seems pretty catastrophic for any attempts to generalize expected utility theory, including using stochastic dominance.

EDIT: Hmm, this might be saved in general even if A's and B's Zs are not identical, but are similar enough that their expected difference is dominated by the expected difference between X and Y. You'd be allowed to choose the two Zs' dependence on each other so that they match as closely as possible, as long as you preserve their individual distributions.
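A toy numerical sketch of that coupling move (my own example, with placeholder normal marginals; in the problematic cases the marginals would have undefined or infinite expectations, but the coupling mechanics are the same):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Comonotonic coupling: drive both background terms with the same uniform draws,
# which preserves each marginal but makes the two Z's match as closely as possible.
u = rng.uniform(size=100_000)
z_a = stats.norm(loc=0.0, scale=10.0).ppf(u)  # placeholder marginal for Z under action A
z_b = stats.norm(loc=0.1, scale=10.0).ppf(u)  # placeholder marginal for Z under action B

# Independent coupling with the same marginals, for comparison.
z_b_indep = stats.norm(loc=0.1, scale=10.0).ppf(rng.uniform(size=u.size))

print(np.mean(np.abs(z_a - z_b)))        # ~0.1: the background terms nearly cancel
print(np.mean(np.abs(z_a - z_b_indep)))  # ~11: independent draws don't cancel
```

The comonotonic coupling is in fact the one that minimizes E|Z_A − Z_B| for fixed marginals, so if even that coupling can't make the expected difference small relative to E[X − Y], no coupling can.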

Technical nitpick: I don't think it's the fact that the set of possible futures is infinite that breaks things; it's the fact that the set of possible futures includes futures which differ infinitely in their value, or have undefined values, or can't be compared, e.g. due to infinities, or conditional convergence with no justifiably privileged summation order. Having just one future with undefined value, or a future with value +∞ and another with −∞, is enough to break everything; that's only 1 or 2 futures. You can also have infinitely many futures without things breaking, e.g. as long as the expectations of the positive and negative parts are finite, which doesn't require bounded value but is guaranteed by it.
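For a concrete example of the conditional-convergence case, take the Pasadena game that comes up further down the thread (the standard version, not anything from the post itself): toss a fair coin until it first lands heads, on toss n, and pay out (−1)^{n+1} · 2^n / n. The expected-value series is

$$\sum_{n=1}^{\infty} 2^{-n} \cdot \frac{(-1)^{n+1} 2^n}{n} = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n},$$

which converges to ln 2 in this order but, being only conditionally convergent, can be rearranged to sum to any value at all (or to diverge), so there is no privileged expectation.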

If a Bayesian expected utility maximizing utilitarian accepts Cromwell's rule, as they should, they can't rule out infinities, and expected utility maximization breaks. Stochastic dominance generalizes EU maximization and can save us in some cases.
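For reference, one standard way to state the dominance relation being appealed to here (first-order stochastic dominance; my formulation, not necessarily the exact version intended):

$$A \succeq B \iff P(A \geq t) \geq P(B \geq t) \ \text{for every threshold } t,$$

with strict dominance when the inequality is strict for some t. Since this only compares probabilities of clearing each threshold and never computes an expectation, it can still deliver verdicts in some cases where expected values are undefined or infinite.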

Both, actually! See section 6 in Making Ado Without Expectations: unmeasurable sets are one kind of expectation gap (6.2.1) and 'single-hit' infinities are another (6.1.2).

When would you need to deal with unmeasurable sets in practice? They can't be constructed explicitly, i.e. with just ZF without the axiom of choice, at least for the Lebesgue measure on the real numbers (and I assume this extends to ℝ^n, but I don't know about infinite-dimensional spaces). I don't think they're a problem.

You're correct, in practice you wouldn't - that's the 'instrumentalist' point made in the latter half of the post.

Thanks for posting a follow-up. My understanding of your claim is something like:

It's true that there is a nonzero probability of infinitely good or bad things happening over any timescale, making expected value calculations equally meaningless for short-term and long-term decisions. However, it's fine to just ignore those infinities in the short term but incorrect to ignore them in the long term. Therefore, short-term thinking is okay but long-term thinking is not.

Is that accurate? If so, could you elaborate on why you see this distinction?

I see no particular reason to think Pasadena games are more likely one thousand years from now than they are today (and indeed even using the phrase "more likely today" seems to sink the approach of avoiding probability).
