Hello! I work on AI grantmaking at Open Philanthropy.
All posts are in a personal capacity and do not reflect the views of my employer, unless otherwise stated.
Glad you shared this!
Expanding a bit on a comment I left on the Google Doc version of this: I broadly agree with your conclusion (longtermist ideas are harder to find now than in ~2017), but I don't think this essay collection was a significant update towards that conclusion. As you mention as a hypothesis, my guess is that these essay collections mostly exist to legitimise discussing longtermism as part of serious academic research, rather than to disseminate important, plausible, and novel arguments. Coming up with an important, plausible, and novel argument which also meets the standards of academic publishing seems much harder than just making some publishable argument, so this collection's relative lack of such arguments didn't really change my views on whether longtermist ideas are getting harder to find. (With all the caveats you mentioned above, plus: I enjoyed many of the reprints, and think lots of incrementalist research can be very valuable — it's just not the topic you're discussing.)
I'm not sure how much we disagree, but I wanted to comment anyway, in case other people disagree with me and change my mind!
Relatedly, I think what I'll call the "fundamental ideas" — of longtermism, AI existential risk, etc. — are mildly overrated relative to the further arguments about the state of the world right now which make those ideas action-guiding. For example, I think longtermism is a useful label to attach to a moral view, but you need further claims about reasons not to worry about cluelessness in at least some cases, and potentially some claims about hinginess, for it to be very action-relevant. A second example: the "second species" worry about AIXR is very obvious, and is only action-relevant given that we're plausibly close to developing TAI soon and, imo, given that current AI development is weird and poorly understood; evidence from the real world is a potential defeater for the analogy.
I think you're using "longtermist ideas" to also point at this category of work (fleshing out/adding the additional necessary arguments to big abstract ideas), but I do think there's a common interpretation where "we need more longtermist ideas" translates to "we need more philosophy types to sit around and think at very high levels of abstraction". Relative to this, I'm more into work that gets into the weeds a bit more.
Cool, thanks for sharing!
I currently use Timing.app, and have been recommending it to people. Is donethat different in any way? (TBC, "it has all the same features but also supports an E2G effort" would be sufficient reason for me to consider switching.)
Yes, at least initially. (Though fwiw my takeaway from that was more like, "it's interesting that these people wanted to direct their energy towards AI safety community building and not EA CB; also, yay for EA for spreading lots of good ideas and promoting useful ways of looking at problems". This was in 2022, when I think almost everyone who thought about AI safety had heard about it via EA/rationalism.)
Interesting post, thanks for sharing. Some rambly thoughts:[1]
I would have liked to make this more coherent and focused, but doing so would have taken enough time/effort that realistically I just wouldn't have done it, and I figured a rambly comment was better than no comment.
Could you say a bit more about the power law point?
A related thing I've been thinking about is that some kinds of deep democracy and some kinds of better futures-style reasoning (for sufficiently risk-neutral, utilitarian, confident in their moral views, etc etc etc kinds of agents, assume all the necessary qualifiers here) will end up being in tension — after all, why compromise between lots of moral views when this means you miss out on a bunch of feasible moral value? (More precisely, why choose the compromise it's-just-ok future when you could optimise really hard according to the moral view you favour and have some small chance of getting almost all feasible value?)
I think that some versions of the power law point might make moral compromise look more appealing, which is why I'm interested. (I'm personally on team compromise!)
I am too young and stupid to be giving career advice, but in the spirit of career conversations week, I figured I'd pass on advice I've received which I ignored at the time, and now think was good advice: you might be underrating the value of good management!
I think lots of young EAish people underrate the importance of good management/learning opportunities, and overrate direct impact. In fact, I claim that if you're looking for your first/second job, you should consider optimising for having a great manager, rather than for direct impact.
Why?
How can you tell if someone will be a great manager?
(My manager did not make me post this)
Yep, I was being imprecise. I think the most plausible (and actually believed-in) alternative to longtermism isn't "no care at all for future people", but "some >0 discount rate", and I think xrisk reduction will tend to look good under small >0 discount rates.
I do also agree that there are some combinations of social discount rate and cost-effectiveness of longtermism, such that xrisk reduction isn't competitive with other ways of saving lives. I don't yet think this is clearly the case, even given the numbers in your paper — afaik the amount of existential risk reduction you predicted was pretty vibes-based, so I don't really take the cost-effectiveness calculation it produces seriously. (And I haven't done the math myself on discount rates and cost-effectiveness.)
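To gesture at the shape of that math (a toy sketch with made-up placeholder numbers, not the model from your paper): under a constant annual discount rate r, the present value of humanity surviving and generating a steady stream of value is roughly capped at 1/r, so the value of a small reduction in extinction risk swings by orders of magnitude between, say, r = 0.01% and r = 3%.

```python
# Toy illustration only: how the present value of "humanity survives and keeps
# generating value" depends on a constant annual discount rate. All numbers
# (annual_value, horizon, risk reduction) are made-up placeholders.

def pv_of_survival(annual_value: float, discount_rate: float, horizon_years: int) -> float:
    """Present value of a constant annual benefit stream under exponential discounting."""
    return sum(annual_value / (1 + discount_rate) ** t for t in range(1, horizon_years + 1))

for r in (0.0001, 0.001, 0.01, 0.03):
    pv = pv_of_survival(annual_value=1.0, discount_rate=r, horizon_years=10_000)
    delta = 0.0001  # hypothetical 0.01 percentage-point reduction in extinction risk
    print(f"r={r:<6}: PV of survival ~ {pv:>10,.1f}   value of risk reduction ~ {delta * pv:,.4f}")
```

Under these placeholder numbers the same 0.01-percentage-point risk reduction is worth ~0.63 annual units at r = 0.01% and ~0.003 at r = 3%, which is just the shape of the "small >0 discount rates" claim above, not a real cost-effectiveness estimate.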
Even if xrisk reduction doesn't look competitive with e.g. donating to AMF, I think it would be pretty reasonable for some people to spend more time thinking about it to figure out if they could identify more cost-effective interventions. (And especially if they seemed like poor fits for E2G or direct work.)