I'd be curious which top recommendations people have in the areas of x-risk and AI safety. I have donated to and considered:
- https://funds.effectivealtruism.org/funds/far-future
- https://existence.org/
- https://intelligence.org/
- https://futureoflife.org/
- https://www.nti.org/
- https://www.cser.ac.uk/
Any other top choices that seem potentially more underfunded and impactful? Feel free to share your own effort, but state it as such, along with any other conflicts of interest.
Rethink Priorities' AI Governance & Strategy team (which I co-lead) has room for more funding. There's some info about our work, and the work of RP's other x-risk-focused team,* here and elsewhere in that post. One piece of our public work so far is Understanding the diffusion of large language models: summary. We also have a lot of work that's unfortunately not public, either because it's still in progress or because of, e.g., information hazards. I could share some more info via a DM if you want.
We have yet to release a thorough public overview of the team, but we aim to do so in the coming months.
(*That other team - the General Longtermism team - may also be interested in funding, but I don't want to speak for them. I could probably connect you with them if you want.)
Glad to hear that!
Oh, also: I just noticed I forgot to add info on how to donate, in case you or others are interested. That info can be found at https://rethinkpriorities.org/donate