- FTX, a big source of EA funding, has imploded.
- There's mounting evidence that FTX was engaged in theft/fraud, which would be straightforwardly unethical.
- There's been a big drop in the funding that EA organisations expect to receive over the next few years.
- Because these organisations were acting under false information, they would've made (ex-post) wrong decisions, which they will now need to revise.
Which revisions are most pressing?
Recruiting billionaires (or turning EAs into billionaires) takes time, but one pre-FTX-implosion estimate projected another 3.5 EA billionaires by 2027. The last-dollar cost-effectiveness analyses I've seen have tended to ignore the possibility of EA adding funds over time. Of course, we don't want to run out of money just when we need a big surge of spending. But we could spend heavily over the next five years and then reevaluate if we have not recruited significant additional assets. This could make a lot of sense for people with short AI timelines (see here for an interesting model) or for people worried about the current level of nuclear risk.

More generally, by doing more things now we can show concrete results, which I think would help in recruiting additional funds. I may be biased as I head ALLFED, but I think the optimal course of action for the long-term future is to maintain the funding rate of 2022, and likely even increase it.
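To make the "spend now, then reevaluate" reasoning concrete, here is a toy expected-value sketch. Every number in it is hypothetical except the pre-implosion estimate of roughly 3.5 new EA billionaires by 2027 (about 0.7 per year over 2022–2027), and `expected_funds` is an invented helper, not anyone's actual model:

```python
# Toy model: expected EA funds (in $B) under a fixed spending rate,
# when new mega-donors arrive at a constant expected rate.
# All parameters are illustrative assumptions, except the ~0.7
# expected new EA billionaires per year implied by the pre-FTX
# estimate of 3.5 by 2027.

def expected_funds(initial, annual_spend, years,
                   new_billionaires_per_year=0.7,
                   avg_new_fortune=2.0):
    """Expected funds after `years`, treating spending as fixed and
    fundraising as a constant expected inflow (both in $B/year)."""
    funds = initial
    for _ in range(years):
        funds -= annual_spend
        funds += new_billionaires_per_year * avg_new_fortune
    return funds

# Hypothetical endowment of $10B, comparing $1B/yr vs $0.5B/yr spending
# over a five-year window before reevaluating.
high_spend = expected_funds(initial=10.0, annual_spend=1.0, years=5)
low_spend = expected_funds(initial=10.0, annual_spend=0.5, years=5)
print(high_spend, low_spend)  # ≈12.0 vs ≈14.5 in expectation
```

The point of the sketch is only that, under these assumed inflow numbers, even the higher spending rate leaves expected funds growing, which is the condition under which spending now (and generating concrete results that aid recruitment) dominates holding reserves.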