I am a PhD candidate in Economics at Stanford University. Within effective altruism, I am interested in broad longtermism, long-term institutions and values, and animal welfare. In economics, my areas of interest include political economy, behavioral economics, and public economics.
I think that existential risks from various issues with AGI (especially if one includes trajectory changes) are high enough that one needn't accept fanatical views to prioritise them.
I think the argument you linked to is reasonable. I disagree, but not strongly. Still, I think the claim that prioritizing AGI requires fanaticism (from an impartial cause prioritization perspective) is plausible enough that there should still be significant worry about it. My take is that this worry means an initially general EA org should not overwhelmingly prioritize AGI.
By my read, that post and the excerpt from it are about the rhetorical motivation for existential risk rather than the impartial ethical motivation. I basically agree that longtermism is not the right framing in most conversations, and it's also not necessary for thinking existential risk work would be more valuable than the marginal public dollar.
I included the qualifier "From an altruistic cause prioritization perspective" because I think that from an impartial cause prioritization perspective, the case is different. If you're comparing existential risk to animal welfare and global health, I think the links in my comment make the case pretty persuasively that you need longtermism.
I'm not sure exactly what this change will look like, but my current impression from this post leaves me disappointed. I say this as someone who now works on AI full-time and is mostly persuaded of strong longtermism. I think there's enough uncertainty about the top cause, and enough value in a broad community, that central EA organizations should not go all-in on a single cause. This seems especially true for 80,000 Hours, which brings people in by appealing to a general interest in doing good.
Some reasons for thinking cause diversification by the community/central orgs is good:
I'm also a bit confused because 80K seemed to recently re-elevate some non-existential-risk causes on its problem profiles (great power war and factory farming; many more under emerging challenges). This seemed like the right call and part of a broader shift away from the FTX-era all-in approach to longtermism. I think keeping an EA community that is not only about AGI is valuable.
I think I agree with the Moral Power Laws hypothesis, but it might be irrelevant to the question of whether to try to improve the value of the future or work on extinction risk.
My thought is this: the best future is probably a convergence of many things going well, such as people being happy on average, there being many people, the future lasting a long time, and maybe some empirical/moral uncertainty stuff. Each of these things plausibly has a variety of components, creating a long tail. Yet you'd need expansive, simultaneous efforts on many fronts to get there. In practice, even a moderately sized group of people is only going to make a moderate to small push on a single front, or very small pushes on many fronts. This means the value we could plausibly affect, obviously quite loosely speaking, does not follow a power law.
The value of the future conditional on civilization surviving seems positive to me, but not robustly so. I think the main argument for its being positive is theoretical (e.g., Spreading happiness to the stars seems little harder than just spreading), but the historical/contemporary record is ambiguous.
The value of improving the future seems more robustly positive if it is tractable, and I suspect it is not that much less tractable than extinction risk work. I think a lot of AI risk work serves this goal as well as the x-risk goal, for reasons Will MacAskill gives in What We Owe the Future. Understanding digital minds, developing direct interventions for them, and designing political processes for them seem like plausible candidates. Some work on how to design democratic institutions in the age of AI also seems plausibly tractable enough to compete with extinction risk work.
This is a stimulating and impressively broad post.
I want to press a bit on whether these trends are necessarily bad—I think they are, but there are a few reasons why I wonder about it.
1) Secrecy: While secrecy makes it difficult or impossible to know if a system is a moral patient, it also prevents rogue actors from quickly making copies of a sentient system or obtaining a blueprint for suffering. (It also prevents rogue actors from obtaining a blueprint for flourishing, which supports your point.) How do you think about this?
2 and 3) If I understand correctly, the worry here is that AI multiplies at a speed that outpaces our understanding, making it less likely that humanity handles digital minds wisely. Some people are bullish on digital minds (i.e., think they would be good in and of themselves). Some also think other architectures would be more likely to be sentient than transformers. Wider exploration and AI-driven innovation plausibly have the effect of just increasing the population of digital minds. How do you weigh this against the other considerations?
What seems less likely to work?
- Work with the EU and the UK
  - Trump is far less likely to take regulatory inspiration from European countries and generally less likely to regulate. On the other hand, perhaps under a 2028 Dem administration we would see significant attention to EU/UK regulations.
  - The EU/UK are already scaling back the ambitions of their AI regulations out of fear that Trump would retaliate if they put limits on US companies.
Interesting—I've had the opposite take for the EU. The low likelihood of regulation in the US seems like it would make EU regulation more important, since that might be all there is. (The second point still stands, but it's still unclear how much of that retaliation will happen and what impact it will have.)
It depends on aspects of the Brussels effect, and I guess it could be that a complete absence of US regulation means companies just pull out of the EU in response to regulation there. Maybe recent technical developments make that more likely. On net, I'm still inclined to think these updates increase the importance of EU stuff.
For the UK, I think I'd agree—UK work seems to get a lot of its leverage from the relationship with the US.
Yeah, FWIW, it's mine too. Time will tell how I feel about the change in the end. That EA Forum post on the 80K-EA community relationship feels very appropriate to me, so I think my disagreement is about the application.