I am a rising junior at the University of Chicago (co-president of UChicago EA and founder of Rationality Group). I am mostly interested in philosophy (particularly metaethics, formal epistemology, and decision theory), economics, and entrepreneurship.
I also have a Substack where I post about philosophy (ethics, epistemology, EA, and other stuff). Find it here: https://substack.com/@irrationalitycommunity?utm_source=user-menu.
Reach out to me via email at dnbirnbaum@uchicago.edu.
If anyone has opportunities to do effective research in the philosophy space (or to apply philosophy to real-world problems or related fields), or any entrepreneurial opportunities, I would love to hear about them. Feel free to DM me!
I can help with philosophy stuff (maybe?) and organizing school clubs (maybe?)
Sorry if it wasn't clear -- this is literally just the moral case intuition, and the numbers are just meant to reflect another moral intuition that your curve can either align with or not.
Any concrete decision would depend on how one mathematically weights simplicity against fitting the data, etc. I wanted to stay agnostic about that in this post.
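To gesture at the kind of weighting I have in mind (a toy formalization of my own; the symbols and scoring scheme are illustrative, not something defended in the post), one could imagine scoring a candidate normative theory $T$ as

$$\mathrm{Score}(T) = \mathrm{Fit}(T) - \lambda \cdot \mathrm{Complexity}(T),$$

where $\mathrm{Fit}(T)$ measures how well the theory accommodates case intuitions, $\mathrm{Complexity}(T)$ tracks something like the number of independent principles, and $\lambda$ encodes how heavily one penalizes complexity. The post stays agnostic about what $\lambda$ should be and about how Fit and Complexity would actually be measured.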
I think I disagree with this last point. Threshold deontology looks like it's doing something similar to what I'm doing (giving two principles instead of one to fit more data), but it often isn't cashed out this way, which makes it hard to figure out where you should start being more consequentialist. One interpretation of this proposal is that it makes that point explicit (given assumptions), so you know exactly where you're going to jump from deontic constraints to consequences.
Like I said in the post, I think this graph definitely doesn't reflect all the complexities of normative theory building; it was a metaphor/very toy example. But even if you think the graphic metaphor is merely that (a metaphor), you can still take my proposal conceptually seriously (as in, accept that there's some trade-off here, and that case intuitions can plausibly outweigh general principles).
Great post.
Adding on to one of the points mentioned: I think that if you are driven to make AI go well because of EA, you'd probably like to do this in a very specific way (i.e., big picture: astronomical waste, x-risks being way worse than catastrophic risks, avoiding s-risks; smaller picture: what to prioritize within AI safety, etc.). This, I think, means that you want people in the field (or at least the most impactful ones) to be EA or EA-adjacent, because what are the odds that the values of an explicitly moral normie and an EA will be perfectly correlated on the actionable things that really matter?
Another related point: a bunch of people might join AI safety for clout or (future) power (perhaps not even consciously; finding out your real motivations is hard until there are big stakes!), and having been an EA for a while (and having shown flexibility about cause prioritization) before AI safety is a good signal that you're not one of them (not a perfect signal, but substantial evidence, imo).
It depends on the case, but there are definitely cases where I would.
Also, while you make a good point that these can sometimes converge, I think the priority of concerns is extremely different under short-termism vs. longtermism, which I see as the important part of "most important determinant of what we ought to do." (Setting aside mugging and risk aversion/robustness,) some very small or even merely directional shift could make something hold the vast majority of your moral weight, whereas before its impact might not have been that big, or would have been outweighed by lack of neglectedness or tractability.
P.S. If one (including myself) failed to do x, given that x would shift priorities but wouldn't change what one would do in light of the short-term damage, I think that would say less about one's actual beliefs and more about one's intuitive disgust toward means-end reasoning. But this is just a hunch, somewhat based on my own introspection (to be fair, sometimes this reluctance comes from moral uncertainty or reputational concerns that should factor into the reasoning, which is to your point).
Good post!
One possible complication I didn’t see addressed is the role of cause saturation.
Suppose that the most effective global health charities (or interventions more generally) are likely to become saturated over time as EA grows, more money flows in, and people “get their act together.” If that saturation happens on, say, a 5-year timeline, then delaying donations means missing the chance to fund the top-marginal opportunities now. Even if you plan to donate to the “next best” charity later, there’s a real cost: the best opportunities would have had 5 extra years of impact that are now lost.
In other words, for waiting to be better, the investment return advantage must outweigh not just the growth in baseline incomes, but also the lost value of funding the most cost-effective opportunities before they close.
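To put that comparison a bit more concretely (a rough back-of-the-envelope sketch; the symbols are mine, not from the post): let $r$ be the annual return on invested donations, and let $g$ be the annual rate at which the cost-effectiveness of the best available opportunity declines (through rising baseline incomes plus saturation of the top charities). Waiting $T$ years is then better, per dollar, roughly when

$$(1+r)^T (1-g)^T > 1,$$

i.e. when the investment return $r$ outpaces the combined decay rate $g$. The point above is just that saturation adds to $g$, raising the bar that $r$ has to clear.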
I'm not sure how likely this sort of thing is in practice, but I thought it was worth noting.
Far-future effects are the most important determinant of what we ought to do
I agree with a bunch of the skepticism in the comments, but (to me) it seems like there are not enough people on the strong longtermist side.
A couple points responding to some of the comments:
On the other hand, one point against that I don't think was brought up:
Thank you for the post - and holy shit that’s a lot of outreach.
I appreciate the appropriate hedging, but I would hedge even more.
1) It's unclear how much of this generalizes. For some universities, general meetings are amazing, and for others, they're boring. I think it's pretty difficult to make generalized claims from a sample size of about one university (especially when there's counterevidence; for instance, the amazing organizers at Yale EA just got 70 intro fellowship applications from doing TONS of outreach. That's a large number, and is possibly an equivalent update in the opposite direction).
2) (Controversial and speculative take incoming) Maybe scaling past some number of people (depending on the university) is just very difficult at universities below the top 30 (or maybe each university has some cap like this, and if so, it feels like a higher cap would be correlated with prestige). In your case, perhaps there is some pool of potential EAs, you had good initial selection effects, and you simply captured most of that pool.
Would be happy to get pushback on all of this, though.
The curve is not measuring things in terms of value, but rather in terms of intuitive pull, according to this data/simplicity trade-off!