The LTFF recently switched to doing grant rounds; our first round closes on Saturday (deadline: end of day, anywhere on Earth, 2025-Feb-15). I think you should consider submitting a quick application in the next 24 hours. We will likely also consider applications submitted over the next few days in this round (unless we are overwhelmed with applications).
Apply now

In my personal view, there has never been a better time to work on AI safety projects than right now. There is a clear-ish set of priorities, funders willing to pay for projects, and an increasing sense from the AI safety community that we might be close to the critical window for ensuring AI systems have a profoundly positive effect on society.[1]
I am particularly keen to see applications on:
- publicly communicating AI threat models and other societal implications
- securing AI systems in ways I don't expect to be done by default in labs
- getting useful safety research out of AI systems when the AI is powerful and scheming against you
- analysis of AI safety research agendas that might be especially good candidates for automation by AIs (e.g. because they decompose into subquestions that are easy to check)
- new organisations that could use seed funding
- gatherings of various sizes and stakeholders for navigating the transition to powerful AI systems
- neglected technical AI governance research and fieldbuilding programs
- career transition grants for anyone thinking of working on any of the above
- areas that Open Philanthropy recently divested from
Other LTFF fund managers are excited about other areas, and an area's absence from the list above is not a strong indicator that we aren't excited about it.
You can apply to the round here (deadline EOD anywhere 2025-Feb-15).
- ^
We are also interested in funding other longtermist areas, though empirically such applications meet our bar much less often than AI safety projects do.
A few things could be useful:
- An overview of funders in the space (including new funders) and how things have changed recently.
- How changing priorities have or haven't changed the profile of the people applying for LTFF funding.
- Where the biggest opportunities are (some combination of importance and availability of funding) for people who might be reading a forum post.
Also, you can always post during Draft Amnesty and use the built-in excuse not to respond to comments (i.e. we have a table you can put at the top of your post where you can say "I only endorse a weak version of this claim and I probably won't be responding to comments", or something to that effect).