Hello everyone:
TL;DR:
It seems that the majority of EAs dedicated to AI s-risks cannot be hired or funded by EA-aligned organizations. These EAs must therefore weigh earning to give against direct work in the non-EA world. Assessing the feasibility of contributing to reducing AI s-risks in the non-EA world would thus be valuable to many people facing career decisions. Even if you don't have fully formed thoughts, a one-minute gut-intuition reply would really help.
Main reasoning:
My argument runs roughly as follows:
We can contribute through direct work or earning to give. Direct work splits into the EA world (mostly donation-dependent non-profits, such as the Center on Long-Term Risk and the Center for Reducing Suffering) and the non-EA world (mostly for-profit companies or government positions).
However, EA-world jobs and grants (such as positions at CLR or CRS, or funding for independent s-risk research) are extremely competitive; only perhaps the top 10–20% of candidates get these opportunities. Therefore, we should seriously consider the feasibility of working in the non-EA world.
Yet contributing to reducing AI s-risks in the non-EA world (especially through technical safety work) seems intuitively difficult to me. The root reason is that most AI companies don't prioritize s-risks or altruism; they focus on profit and AI capabilities. Thus, an AI engineer aiming to reduce s-risks at a non-EA company would face two constraints:
(1) Research constraints
If you try to research reducing AI s-risks the way a CLR researcher would: suppose you work at OpenAI, but your manager wants you to focus on advancing AI capabilities, or on safety work related to human extinction risks rather than digital suffering. If you spend most of your work time on digital suffering instead, you won't meet your KPIs and might be fired for neglecting your capability tasks.
(2) Safety-tax constraints
If you try to embed s-risk-reducing designs in frontier AI systems: most such designs seem to carry a high safety tax (though I'm quite uncertain about this), and companies wouldn't adopt them because they would substantially cut AI capabilities.
If these constraints are real, then earning more to fund EA-world researchers (who could work on making s-risk interventions carry a lower safety tax) might have higher marginal value than struggling for impact in the non-EA world.
(However, these scenarios are just the imaginings of an AI layperson and are probably woefully off; please feel free to correct me.)
The key comparison question:
In short, I'm curious about people's rough intuitions on the following stylized comparison, which I believe many in the community will face in some form:
1. Work as a dentist, earn $200,000/year, and contribute purely via earning to give.
2. Work as a software engineer at a non-EA organization (not a frontier lab; think a mid-tier AI company), earn $150,000/year, and contribute via earning to give plus whatever direct work is feasible there.
All else equal, which option would you tentatively expect to have more impact on reducing AI s-risks, and why? If you're unsure, sharing your key uncertainties or cruxes would also help.
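To make the crux explicit, here is a toy framing of the comparison (my own simplification, assuming impacts add roughly linearly and ignoring taxes, personal fit, and career capital):

$$\text{Impact}_{1} = D(200\text{k}), \qquad \text{Impact}_{2} = D(150\text{k}) + W$$

where D(s) is the value of the donations you can make on salary s, and W is the value of whatever s-risk-relevant direct work is feasible at the non-EA organization. Option 2 comes out ahead roughly when W > D(200k) − D(150k), i.e., when the feasible direct work is worth more than donating the extra ~$50,000/year of salary.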
Final statements:
I also want to explicitly lower the bar for replying. Don't worry about low-confidence responses: I'm not treating replies as votes or expecting any single comment to settle the question. I just hope to surface as many potentially relevant considerations as possible for later examination and refinement.
Thus, raising a weak point does no harm: if it doesn't hold up, it can be discarded in later red-teaming. But if an important consideration is never raised, that's a bigger loss. So feel free to spend just one minute; rough impressions, partial arguments, and low-confidence guesses are all welcome.
Also, feel free to DM me directly via forum message if you prefer non-public comments.
Thank you very much for reading and answering.

Hello Kestrel,
Thanks for your reply; I'm really grateful for it. I'd like to ask whether you think your view that "earning to give more would generally be better than direct work" would still hold 5–10 years from now, by which point we may have discovered effective AI s-risk interventions that could be implemented in the non-EA world (even if they involve a high safety tax).